
AI learnt "something" from the Physical & Life Sciences Forum.

Hans Blaster

Raised by bees
Mar 11, 2017
21,951
16,541
55
USA
✟416,517.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Private
Politics
US-Democrat
I think it also depends on who is asking the question and what assumptions they hold about the world and beyond. For example, you could ask 'Is there a God' and I'm sure it will come up with the empirical data that God cannot be verified.
If you train the LLM on the scientific literature, it won't have any idea what this "God" is. If you feed it creationist apologetics, it will respond with creationist apologetics. Train it on those "atheist/apologist" or "science/creationist" debates and it will likely either "bothsides" the argument ("some people say x, others say not-x"), or it will spit out mangled garbage like "the genetic evidence of common descent demonstrates that there is a god because intelligent design is falsified."

But if the same question is asked philosophically, about the possibility of there being a God, it will make good arguments for belief in a God.
Same as above: GIGO (garbage in, garbage out).
I think the bigger issue is integrating human consciousness into the equation. This is something science or logic cannot determine; it would have to include the phenomenal experiences, which may also give justification for proper belief in God or in something like a consciousness beyond the material processes.

Which a machine or software could not determine without the ability to have human conscious experiences. As opposed to, say, mathematical equations or physics, which have objective measures.

"AI"s run on NVidia GPUs, not mind meat. If you want humans involved, humans are going to have to do all of the work. You can't just rely on some word-order predictor.
 

stevevw

inquisitive
Nov 4, 2013
16,017
1,746
Brisbane Qld Australia
✟321,755.00
Gender
Male
Faith
Christian
Marital Status
Private
If you train the LLM on the scientific literature, it won't have any idea what this "God" is. If you feed it creationist apologetics, it will respond with creationist apologetics. Train it on those "atheist/apologist" or "science/creationist" debates and it will likely either "bothsides" the argument ("some people say x, others say not-x"), or it will spit out mangled garbage like "the genetic evidence of common descent demonstrates that there is a god because intelligent design is falsified."
So just asking a simple question like 'Is there a God' is going to open up a can of worms. The answer could be anything depending on what you ask. Which is sort of the point: it is the subject who determines what is being asked and thus what the answer will be.

If one is an atheist, then the answers that conform, such as empirical science, are valid. If you are a theist, then the answers that make a case for God will be relevant. Both are possibilities, but asked from different prior metaphysical ontologies, which dictate epistemics.
Same as above: GIGO (garbage in, garbage out).
Is that (God in) by the theist's worldview and then it's (God out) by the atheist and empirical worldview, lol. Then it's put back in and taken back out again and again, and never any resolution.
"AI"s run on NVidia GPUs, not mind meat. If you want humans involved, humans are going to have to do all of the work. You can't just rely on some word-order predictor.
Do you think it will be possible to perhaps mimic agency or consciousness? I think one of the videos I saw was at the point of trying to integrate real-time responses. Though it was mimicked, such as micro facial expressions of emotion and thought processes, as though really thinking about something.

But will it ever get to a point where it is so close that it's near impossible to tell? Doesn't the naturalistic worldview claim that a machine could be made conscious, because consciousness is basically neurons and electrical signals, and an AI brain could be wired accordingly so that it should produce consciousness?
 

Hans Blaster

So just asking a simple question like 'Is there a God' is going to open up a can of worms. The answer could be anything depending on what you ask. Which is sort of the point: it is the subject who determines what is being asked and thus what the answer will be.

If one is an atheist, then the answers that conform, such as empirical science, are valid. If you are a theist, then the answers that make a case for God will be relevant. Both are possibilities, but asked from different prior metaphysical ontologies, which dictate epistemics.
Did you actually *read* my response? LLMs "know" what they are trained on. If you train an LLM on the scientific literature in physics and ask the question "does physics prove god?", then it will ask back something like: "I am not familiar with 'god', please clarify what you mean." As for the more generally trained LLMs, you can go find YT videos of apologists getting one to say "God is real" and counter-apologists getting it to say the opposite. The answers are meaningless. LLMs don't think, they have no discernment, and they have poor error correction (worse, they tend to accept whatever "correction" you give them).

Don't let "AI"s do your thinking for you because they can't think.

Is that (God in) by the theist's worldview and then it's (God out) by the atheist and empirical worldview, lol. Then it's put back in and taken back out again and again, and never any resolution.
Do you not understand GIGO? That is the world of the AI.
Do you think it will be possible to perhaps mimic agency or consciousness? I think one of the videos I saw was at the point of trying to integrate real-time responses. Though it was mimicked, such as micro facial expressions of emotion and thought processes, as though really thinking about something.
An "ai" doesn't think with its "face" nor does it need to communicate like that. It is just fakery. Real computers think with blinking lights or spinning wheels.
But will it ever get to a point where it is so close that it's near impossible to tell? Doesn't the naturalistic worldview claim that a machine could be made conscious, because consciousness is basically neurons and electrical signals, and an AI brain could be wired accordingly so that it should produce consciousness?
This thread is about what LLMs can do. I'm not going to discuss philosophy with you.
 

stevevw

Did you actually *read* my response? LLMs "know" what they are trained on. If you train an LLM on the scientific literature in physics and ask the question "does physics prove god?", then it will ask back something like: "I am not familiar with 'god', please clarify what you mean." As for the more generally trained LLMs, you can go find YT videos of apologists getting one to say "God is real" and counter-apologists getting it to say the opposite. The answers are meaningless. LLMs don't think, they have no discernment, and they have poor error correction (worse, they tend to accept whatever "correction" you give them).
Ok, I think I understand. I am not too familiar with LLMs. It seems more interactive. I have heard of people manipulating the answers. But that only supports what I am saying: that it's not the data but how it's used and manipulated.

Can you have a rational argument with AI, where it can articulate philosophical arguments?
Don't let "AI"s do your thinking for you because they can't think.
No, I agree, it can become a lazy way of finding out. I only use it in conjunction with research from independent sources.
Do you not understand GIGO? That is the world of the AI.
Yes, and this is what I was alluding to: that information itself can be used to create a false narrative or even a false reality, by controlling the information, what is allowed or is emphasised over the other.

We already see this with standard media. Imagine the influence of such a powerful platform to control information.
An "ai" doesn't think with its "face" nor does it need to communicate like that. It is just fakery. Real computers think with blinking lights or spinning wheels.
Ok, so if we are getting a machine to think in information terms, like a human brain is a database for information, is the ultimate goal to create robotics as well, as they seem to go hand in hand?

Ultimately, an AI robot that not only thinks but mimics human agency and consciousness. It would seem that if AI and robotics are to replace humans, they will need to read faces and communicate that way, as mimicking the human brain to derive the answers is more than information. Ultimately it is to mimic humans.
This thread is about what LLMs can do. I'm not going to discuss philosophy with you.
I will have to do some research on LLMs. Thanks.
 

Hans Blaster

Ok, I think I understand. I am not too familiar with LLMs. It seems more interactive. I have heard of people manipulating the answers. But that only supports what I am saying: that it's not the data but how it's used and manipulated.
ChatGPT is an LLM. I'm not sure which claim of yours you think I supported in my previous reply. The data is the stuff fed to the LLM; everything that it spits out is not data, it is output.
Can you have a rational argument with AI, where it can articulate philosophical arguments?
I doubt it. It can only piece together arguments it has ingested.
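The "can only piece together what it has ingested" point can be seen in the same kind of toy word-pair sketch (again my own illustration, not real LLM internals): the pair table has entries only for transitions present in the training text, so a word never seen during training has no continuation at all.

```python
from collections import defaultdict

def train_bigram(text):
    """Map each word to the list of words seen following it."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return dict(model)

model = train_bigram("premises lead to conclusions and conclusions lead to premises")

print(model["lead"])      # ['to', 'to'] -- a transition the model ingested
print(model.get("soul"))  # None -- never seen in training, nothing to piece together
```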
No, I agree, it can become a lazy way of finding out. I only use it in conjunction with research from independent sources.
A search engine can find them, and you don't know what counts as "independent sources" if you don't know what data was fed to the bot. (And the "AI" companies are loath to tell you, since they have effectively *stolen* vast amounts of copyrighted material to generate stuff from in the chat bots.)
Yes, and this is what I was alluding to: that information itself can be used to create a false narrative or even a false reality, by controlling the information, what is allowed or is emphasised over the other.

We already see this with standard media. Imagine the influence of such a powerful platform to control information.
How do you create reality from garbage input?
Ok, so if we are getting a machine to think in information terms, like a human brain is a database for information, is the ultimate goal to create robotics as well, as they seem to go hand in hand?

Ultimately, an AI robot that not only thinks but mimics human agency and consciousness. It would seem that if AI and robotics are to replace humans, they will need to read faces and communicate that way, as mimicking the human brain to derive the answers is more than information. Ultimately it is to mimic humans.
Frankly, we aren't getting machines to think like humans, and I don't think we have a reasonable path to that in anything like the near future.
I will have to do some research on LLMs. Thanks.
Excellent.
 