
AI learnt "something" from the Physical & Life Sciences Forum.

Hans Blaster · On August Recess · Mar 11, 2017 · United States · Male · Atheist · US-Democrat
I think it also depends on who is asking the question and what assumptions they hold about the world and beyond. For example, you could ask 'Is there a God?' and I'm sure it will come up with the empirical data that God cannot be verified.
If you train the LLM on the scientific literature, it won't have any idea what this "God" is. If you feed it creationist apologetics, it will respond with creationist apologetics. Train it on those "atheist/apologist" or "science/creationist" debates and it will likely either "bothsides" the argument ("some people say x, others say not-x"), or it will spit out mangled garbage like "the genetic evidence of common descent demonstrates that their is a god because intelligent design is falsified."

But the same question asked philosophically, about the possibility of there being a God, will produce good arguments for belief in a God.
Same as above: GIGO.

"AI"s run on Nvidia GPUs, not mind meat. If you want humans involved, humans are going to have to do all of the work. You can't just rely on some word-order predictor.
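The "word-order predictor" point can be illustrated with a toy next-word model. This is only a minimal sketch (a bigram chain, vastly simpler than a real LLM, with made-up one-line corpora), but it shows the GIGO behaviour described above: the same prompt yields opposite "answers" depending entirely on the training text.

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Record which word follows which -- the whole 'model' is word-order statistics."""
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=8, seed=0):
    """Predict the next word repeatedly, starting from `start`."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Hypothetical training texts: same question, different corpus, different "answer".
corpus_a = "is there a god the data shows there is no god"
corpus_b = "is there a god scripture says yes there is a god"

print(generate(train_bigram(corpus_a), "is"))
print(generate(train_bigram(corpus_b), "is"))
```

The model has no concept of what "god" means; it only reproduces the word order it was fed, which is the sense in which garbage in gives garbage out.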

stevevw · inquisitive · Nov 4, 2013 · Brisbane Qld Australia · Male · Christian
So just asking a simple question like 'Is there a God?' is going to open up a can of worms. The answer could be anything depending on what you ask, which is sort of the point: it is the subject who determines what is being asked and thus what the answer will be.

If one is an atheist, then the answers that conform, such as empirical science, are valid. If you are a theist, then the answers that make a case for God will be relevant. Both are possibilities, but asked from different prior metaphysical ontologies which dictate epistemics.
Same as above: GIGO.
Is that (God in) by the theist's worldview and then (God out) by the atheist and empirical worldview, lol. Then it's put back in and taken back out, again and again, and never any resolution.
"AI"s run on Nvidia GPUs, not mind meat. If you want humans involved, humans are going to have to do all of the work. You can't just rely on some word-order predictor.
Do you think it will be possible to perhaps mimic agency or consciousness? I think one of the videos I saw was at the point of trying to integrate real-time responses, though it was mimicked, such as micro facial expressions of emotion and thought processes, as though really thinking about something.

But will it ever get to a point where it is so close that it's near impossible to tell? Does not the naturalistic worldview claim that a machine could be made conscious, because consciousness is basically neurons and electrical signals, and an AI brain could be wired accordingly so that it should produce consciousness?

Hans Blaster · On August Recess · Mar 11, 2017 · United States · Male · Atheist · US-Democrat
Did you actually *read* my response? LLMs "know" what they are trained on. If you train an LLM with the scientific literature in physics and ask the question "does physics prove god?", then it will ask back something like: "I am not familiar with 'god', please clarify what you mean." As for the more generally trained LLMs, you can go find YT videos of apologists getting them to say "God is real" and counter-apologists getting them to say the opposite. The answers are meaningless. LLMs don't think, they have no discernment, and they have poor error correction (worse, they tend to accept whatever "correction" you give them).

Don't let "AI"s do your thinking for you because they can't think.

Is that (God in) by the theist's worldview and then (God out) by the atheist and empirical worldview, lol. Then it's put back in and taken back out, again and again, and never any resolution.
Do you not understand GIGO? That is the world of the AI.
An "AI" doesn't think with its "face", nor does it need to communicate like that. It is just fakery. Real computers think with blinking lights or spinning wheels.
This thread is about what LLMs can do. I'm not going to discuss philosophy with you.

stevevw · inquisitive · Nov 4, 2013 · Brisbane Qld Australia · Male · Christian
OK, I think I understand. I am not too familiar with LLMs; they seem more interactive. I have heard of people manipulating the answers, but that only supports what I am saying: it's not the data but how it's used and manipulated.

Can you have a rational argument with AI, where it can articulate philosophical arguments?
Don't let "AI"s do your thinking for you because they can't think.
No, I agree; it can become a lazy way of finding out. I only use it in conjunction with research from independent sources.
Do you not understand GIGO? That is the world of the AI.
Yes, and this is what I was alluding to: information itself can be used to create a false narrative, or even a false reality, by controlling the information, what is allowed or what is emphasised over the other.

We already see this with standard media. Imagine the influence of such a powerful platform to control information.
An "AI" doesn't think with its "face", nor does it need to communicate like that. It is just fakery. Real computers think with blinking lights or spinning wheels.
OK, so if we are getting a machine to think in information terms, like a human brain is a database for information, is the ultimate goal to create robotics as well? They seem to go hand in hand.

Ultimately, an AI robot that not only thinks but mimics human agency and consciousness. It would seem that if AI and robotics are to replace humans, it will need to read faces and communicate that way, as mimicking the human brain to derive answers is more than information. Ultimately it is to mimic humans.
This thread is about what LLMs can do. I'm not going to discuss philosophy with you.
I will have to do some research on LLMs. Thanks.