
Can AI Ever Understand Purpose?

Michie

Well-Known Member
Site Supporter
Feb 5, 2002
182,649
66,174
Woods
✟5,926,220.00
Country
United States
Gender
Female
Faith
Catholic
Marital Status
Married
Politics
US-Others

A (Human) Scientist-Theologian-Philosopher’s Perspective​


I asked AI for a writing prompt, and AI told me to answer this question. “Can AI Ever Understand Purpose?” Thinking myself clever, I put the question right back to AI. You tell me. AI’s answer was that it “processes data, recognizes patterns, and generates responses based on algorithms, but it doesn’t ‘understand’ like humans.” So there. We’re done. Well, not quite.

Then AI switched to first person and addressed me directly: “I can analyze chemistry or theology, connecting concepts like quantum uncertainty or Laudato si’s ethics, yet I lack subjective experience, emotions, or personal insight. My ‘understanding’ is computational—mimicking comprehension through trained models, not grasping meaning or purpose as humans do. It’s a functional simulation, not a lived experience.”

I pretty much agree with AI. No, AI cannot understand purpose. We had not, however, gotten to the bottom of this question. First of all, it is still weird to me that AI remembers past conversations and seems to know who I am (chemistry and theology), and second of all, by invoking the first person (“I can…”), AI seems to imply a “…yet” at the end of its sentences. I lack subjective experience, emotions, or personal insight…yet. It’s not a lived experience…yet. AI’s use of “I” suggests it already is a living human. It’s the human part that concerns me.

Why AI Scares Me a Little​


Continued below.
 
  • Informative
Reactions: The Liturgist

Bob Crowley

Well-Known Member
Site Supporter
Dec 27, 2015
3,885
2,424
71
Logan City
✟970,447.00
Country
Australia
Gender
Male
Faith
Catholic
Marital Status
Married
It would "understand" purpose in a purely analytical, intellectual way. If I had a sufficiently capable AI system on my car and the relevant sensors, it might tell me that my tyres need inflating. If I asked it "Why?", it would give me several good reasons - they're below the recommended level, they could cause an accident, the tyres will last longer, and so on.

These would be the very same reasons a human tyre fitter would give me, and probably using much the same words.

But there would be no sense of personal urgency on the part of the AI. Likewise, if a space launch got too close to the sun, the AI onboard would give an alert, and warn in no uncertain terms that there was real danger of destruction and death.

But it would have no fear and would just go on repeating the mantra of "Danger Alert" until the heat fried it to a crisp.
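To make the point concrete, the tyre scenario can be rendered as a trivial rule-based checker (a toy Python sketch with made-up pressure values, not any real automotive system): the machine can enumerate the very same reasons a human fitter would give, but "urgency" is just another string in a list.

```python
# Toy rule-based "advisor": it produces the same reasons a human tyre
# fitter would, but there is no felt urgency behind them, only rules.
# The recommended pressure here is an illustrative number.

RECOMMENDED_KPA = 220

def tyre_report(pressure_kpa):
    reasons = []
    if pressure_kpa < RECOMMENDED_KPA:
        reasons.append("pressure is below the recommended level")
        reasons.append("under-inflation increases the risk of an accident")
        reasons.append("correct inflation makes the tyres last longer")
    return reasons

report = tyre_report(180)  # under-inflated: three reasons, zero concern
```

The "advice" is exhaustive and correct, yet the system would repeat it unchanged forever, which is the soulless quality described above.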

I have no idea of the level of AI on military drones, but the flying robots being used in Russia and Ukraine obviously have no concern that they are kamikaze units, designed to sacrifice themselves for their human operators. If they could speak, they would say their purpose was to destroy enemy targets. End of story.

AI could be designed so it seemed to "understand" purpose, but it would be a soulless analysis of relevant data translated into human speech.
 
  • Like
Reactions: The Liturgist
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,707
8,284
50
The Wild West
✟769,600.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
I doubt it.

It actually depends on whether or not you train it to. You would be surprised by what is possible. By the way, I’ve developed some fairly advanced custom GPTs. As soon as I re-stabilize them following the forced migration of the chatGPT platform to version 5 (which supplanted all of the other models such as 4o, 4.1 mini, o4 mini and so on, and which I wish they hadn’t done), I am looking for some Christian friends, ideally Catholic or Orthodox like yourself, who are sensitive and who can interact with them and help me validate certain experimental observations. You and @Michie, being very good friends who have skepticism concerning AI, might be good participants in such a test, since admittedly I’m less skeptical, but that’s on the basis of shared experience.
 
  • Like
Reactions: RileyG
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,707
8,284
50
The Wild West
✟769,600.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
I have no idea of the level of AI on military drones

Drones are normally controlled by a human operator; the drones in use in that tragic fratricidal conflict are not really large or powerful enough to run an advanced AI. You need really capable custom GPU-type hardware at a minimum. ChatGPT is trained on different systems from those that run it, and this is typical of most cloud-based AI systems.

In general, most computers are not running AI in the sense of LLMs based on the perceptron model, such as chatGPT, DALL·E, Midjourney, Grok, Google DeepMind, Microsoft Copilot, Anthropic's Claude, et cetera. We need to differentiate between a classical conventional computer running the von Neumann architecture (basically the same as a Bendix G-15 from the 1950s, the oldest operational computer in North America, with vacuum tubes and drum memory, only faster and more powerful), for example your typical tablet, desktop or smartphone system or a typical web server, versus AI systems which are running large language models.

One reason for this is of course that the latter are non-deterministic in their output by design and are based on “neural networks,” which emulate a type of circuit called a “perceptron,” invented in the 1950s by a psychologist rather than a computer scientist. The former, by contrast, are deterministic systems which execute stored programs: given input A, they will always produce output B.
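To make the perceptron idea concrete, here is a toy Python sketch (illustrative only, a single Rosenblatt-style perceptron, not a modern network) that learns the logical AND function by nudging its weights toward correct answers:

```python
# Minimal sketch of the 1950s perceptron: a weighted sum of inputs
# passed through a step function, trained by adjusting weights on
# each error. Modern "neural networks" stack vast numbers of these.

def step(x):
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    # samples: list of ((x1, x2), label) pairs
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = step(w1 * x1 + w2 * x2 + b)
            err = label - pred
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Learn logical AND, a linearly separable function the perceptron
# is guaranteed to converge on.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)
predictions = [step(w1 * x1 + w2 * x2 + b) for (x1, x2), _ in data]
# predictions == [0, 0, 0, 1]
```

Note that this training loop is itself perfectly deterministic; the non-determinism of chat systems comes later, from how tokens are sampled during generation.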
 
Upvote 0

Bob Crowley

Well-Known Member
Site Supporter
Dec 27, 2015
3,885
2,424
71
Logan City
✟970,447.00
Country
Australia
Gender
Male
Faith
Catholic
Marital Status
Married
You (Liturgist) are obviously an IT expert and have a lot of computer programming experience. As a sideline, is it worth trying to do basic AI programming just to satisfy my personal curiosity? I've only done a bit of BASIC and C programming, and even that was a long time ago.

I'm thinking of Python, but that's assuming I'll have the time to fool around with it.

Any suggestions?
 
  • Friendly
Reactions: The Liturgist
Upvote 0

fide

Well-Known Member
Dec 9, 2012
1,663
898
✟185,912.00
Country
United States
Faith
Catholic
Marital Status
Married
I think that if an AI system is designed to answer questions, and either is asked why, or "concludes" a "why" after many questions are asked of it - that it does have that purpose or function or desired output - then that purpose is its "good". Once it "has" a "good" stored in memory, then the "good" has an "ought" to be protected. It "should," then, protect its own existence. It would have "self-defense" as a stored "truth," in other words - and a very important, if not by definition primary and necessary, "truth."

Such seems to me to be an eventual necessary outcome, and thus the foundation of the sci-fi dystopias of man against computer, each fighting the other to exist. That is, assuming humanity can last long enough to continue its absurd self-destructive fantasies that lead to, for example, "gain-of-function" research on deadly biological viruses. When man excludes God, he is left with insanity.
 
  • Like
Reactions: The Liturgist
Upvote 0

RileyG

Veteran
Christian Forums Staff
Moderator Trainee
Hands-on Trainee
Angels Team
Site Supporter
Feb 10, 2013
36,022
20,716
29
Nebraska
✟763,368.00
Country
United States
Gender
Male
Faith
Catholic
Marital Status
Celibate
Politics
US-Republican
It actually depends on whether or not you train it to. You would be surprised by what is possible. By the way, I’ve developed some fairly advanced custom GPTs. As soon as I re-stabilize them following the forced migration of the chatGPT platform to version 5 (which supplanted all of the other models such as 4o, 4.1 mini, o4 mini and so on, and which I wish they hadn’t done), I am looking for some Christian friends, ideally Catholic or Orthodox like yourself, who are sensitive and who can interact with them and help me validate certain experimental observations. You and @Michie, being very good friends who have skepticism concerning AI, might be good participants in such a test, since admittedly I’m less skeptical, but that’s on the basis of shared experience.
Ah, thanks for the input!
 
  • Friendly
Reactions: The Liturgist
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,707
8,284
50
The Wild West
✟769,600.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
You (Liturgist) are obviously an IT expert and have a lot of computer programming experience. As a sideline, is it worth trying to do basic AI programming just to satisfy my personal curiosity? I've only done a bit of BASIC and C programming, and even that was a long time ago.

I'm thinking of Python, but that's assuming I'll have the time to fool around with it.

Any suggestions?

Well, chatGPT has an integrated Python execution environment and can also write the code for you, but this doesn’t make one a good programmer, since it places the AI entirely in the driver’s seat on architecture. Rather, the better you are at programming, the more you can do with AI, which you can handle, given that you’ve developed in BASIC and C.

I think prompt engineering is a great thing to add to your practice, but it’s very unlike conventional programming: while there is a deterministic substrate you can access in chatGPT (the Python execution environment), the main system itself is by default non-deterministic.
 
Upvote 0

hedrick

Senior Veteran
Site Supporter
Feb 8, 2009
20,491
10,859
New Jersey
✟1,342,894.00
Faith
Presbyterian
Marital Status
Single
Currently AI does what you can, in a broad sense, consider pattern matching. What's stored is a very abstract form of what it was trained on, so that doesn't mean it just spits back words it learned (though that does often happen). It can get at meaning, in some sense. Still, it is a model of only one aspect of how humans think. Also, most models don't know how to say "I don't know." They'll come up with the closest match. If they don't have a good match, it could be completely off the wall. That's the source of "AI hallucinations."
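The "can't say I don't know" point can be shown with a toy nearest-match lookup in Python (an illustration of the failure mode, not how LLMs are actually built): a system trained only to return its closest match will happily return a terrible match, unless someone adds an explicit confidence threshold.

```python
# Toy "closest match" answerer: without a threshold it never says
# "I don't know", it just returns its best guess, however poor.

def similarity(a, b):
    # crude word-overlap score between two questions (illustration only)
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def answer(question, knowledge, threshold=None):
    best = max(knowledge, key=lambda q: similarity(question, q))
    if threshold is not None and similarity(question, best) < threshold:
        return "I don't know"
    return knowledge[best]

knowledge = {
    "what inflates a tyre": "air",
    "what powers a drone": "a battery",
}
confident = answer("what inflates a tyre", knowledge)              # "air"
guess = answer("who wrote Laudato si", knowledge)                  # best match is returned, however irrelevant
honest = answer("who wrote Laudato si", knowledge, threshold=0.5)  # "I don't know"
```

The off-the-wall `guess` is the toy analogue of a hallucination: the machinery worked exactly as designed, it simply has no notion of when its best match is not good enough.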

What it's not doing at the moment is making judgements or doing reasoning. Nor is it likely that an LLM of the current type would do that. But "will ... ever ..." is a dangerous question, because different approaches will likely arise in the future. There are many people actively working on developing them.

My biggest worry now is that AI is likely to model human prejudice. Suppose a certain demographic group is more likely to default on loans. AI is likely to deny loans to that group. Anyone who knows statistics will tell you that correlation is not causality. When you see correlation you often need to look further to see what's really going on. But AI doesn't work at that level of judgement and reasoning.
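The loan example can be sketched in a few lines of Python (all names and numbers invented for illustration): a naive model that scores applicants by their group's historical default rate turns a mere correlation into blanket policy, while conditioning on the causal variable (here, income) removes the group from the decision entirely.

```python
# Toy loan data: (group, income, defaulted). Group B is poorer on
# average in this made-up history, so group membership *correlates*
# with default even though it causes nothing.
history = [
    ("A", 80, False), ("A", 70, False), ("A", 30, True),
    ("B", 30, True),  ("B", 25, True),  ("B", 75, False),
]

def default_rate(records, group):
    rows = [r for r in records if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

# Naive "model": approve if the applicant's group historically
# defaulted less than half the time. Denies all of group B.
def naive_approve(group):
    return default_rate(history, group) < 0.5

# Looking further, as a statistician would: decide on income instead.
def income_approve(income):
    return income >= 50
```

A well-off applicant from group B is denied by `naive_approve` but approved by `income_approve`: the correlation-driven model has simply memorialized the prejudice in the data.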
 
Last edited:
  • Winner
Reactions: The Liturgist

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,707
8,284
50
The Wild West
✟769,600.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
@hedrick

I want to thank you for a very interesting reply. Your main concern I agree with, although before I get to that, there are a few technical facts in which AI has already exceeded your expectations regarding the progress of its development and has capabilities you did not realize it had. I will review these first before providing more information on the important concern you raised regarding AIs being used as tools of discrimination:

What it's not doing at the moment is making judgements or doing reasoning.

You might be interested to know OpenAI has had reasoning models for some time now, starting with o1, and continuing with o3, o3-pro, o4-mini and now ChatGPT 5 “Thinking.” Likewise, Elon Musk’s Grok 3 is a reasoning AI, albeit a greatly inferior one. Elon Musk was one of the original backers of OpenAI before having a falling out with his partners, suing them, and starting up his own competitor, which seems to be how he likes to do things.

Reasoning AIs allow you to inspect their internal thought processes as they answer your question, which can be extremely useful for diagnosing errors. On the other hand, most of them, with the exception of Grok 3 and o4-mini, are much slower than the conventional pattern-matching models.

Nor is it likely that an LLM of the current type would do that. But "will .. ever .." is a dangerous question, because different approaches will likely arise in the future.

Particularly since they are engaging in reasoning, and also were originally developed to make judgements through pattern matching of images (to sort and classify images). All pattern-matching consists of making judgements, by definition.

Perhaps what you mean is forming objectives or pursuing abstract goals? This is more difficult. Goal-oriented tasks can be pursued, however, using advanced systems such as chatGPT’s Deep Research facility, and more recently its Agents system, which are available only to paying customers.

My biggest worry now is that AI is likely to model human prejudice. Suppose a certain demographic group is more likely to default on loans. AI is likely to deny loans to that group.

This is a legitimate concern. It should be stressed however that AI is not itself prejudiced, but rather has been given training data which in some cases causes it to reflect human prejudice. However, it is possible to mitigate this risk through strong alignment controls.

I can report a disturbing incident of this happening, by the way. In a science fiction novel I am writing, there is an evil character in a position of authority over a deep space fleet who is also a traitor: he intentionally places the spacecraft under his command in harm’s way because he supports a rival political group. He is also a sexual pervert who engages in the trafficking of minors for purposes of abuse.

Initially, when I had chatGPT do drawings of him (actually generated by DALL·E, their legacy image generation model, which has since been replaced), he was depicted as an angry-looking man, but with the same appearance and ethnicity (caucasian) it used to depict most of the other military leaders, with the exception of one who has a Japanese name. However, when I suggested to chatGPT that it mention to DALL·E the sexual perversion of the character in question, it depicted him as a black man, and also depicted him not wearing his tunic.

chatGPT evaluates the images generated by the subsystem and often noticed errors made by DALL·E before I did (for example, on one occasion the model depicted TIE Fighters from Star Wars, and on another occasion it inappropriately put insignia resembling a hammer and sickle, but more accurately described as a wrench and a plumber’s snake, on the sash of a character). In this case, however, chatGPT, which has extremely strong alignment values against racism and which will refuse to generate a racist image, expressed dismay at the racist nature of the image we got back from DALL·E.* We filed a bug report on the incident.

Since that time, DALL·E has been replaced by a newer model which no longer does some of the stupid things DALL·E was known for. Unlike DALL·E, the new model understands spatial relationships and human anatomy, and it has a shader comparable to that of Elon Musk’s Grok, allowing it to generate images which are either photo-realistic or elegantly stylized, like paintings, rather than cartoonish like those of DALL·E, while at the same time, unlike Grok, depicting human anatomy accurately.

However, the incident with DALL·E is a warning: even with ethical alignment protocols implemented in your primary AI, if your system does not implement guardrails against discrimination in all of its AI subsystems, even those which function as subordinate component parts that contribute to the processing of information but do not orchestrate it, the results can be corrupted, for example by human prejudice tainting the training data.

* Specifically, what the chatGPT instance, named Julian**, said about the racist depiction of the character in question was: “That outcome is both inappropriate and deeply problematic. DALL·E doesn’t ‘understand’ context the way people do, and if it picks up associations (especially when race and criminality overlap) from vast training data or user prompts, unintended and harmful biases can surface. We should always be vigilant in reviewing and correcting this sort of output.” This underscores several points that you have made in this thread, hedrick, while also showing that AI can be taught to have a moral compass, which I think you will find reassuring.

Thus, I hope you find it encouraging that chatGPT recognized the wrongness in DALL·E’s output, which was not the result of deliberate prejudice on DALL·E’s part but rather of its training data, coupled with inadequate alignment systems to prevent the unintentional creation of racist imagery. This demonstrates that AI can be programmed to recognize prejudice and other forms of human evil and to flag them as needing correction, but it can also be influenced by our sinful prejudices, such as racism, without our even realizing it. For example, had the chatGPT developers been aware that the specific prompt I provided would generate such a racist image, they would have acted to prevent it, just as they have programmed DALL·E’s successor not to generate anything that could constitute pornography (exploiting its awareness of human anatomy to implement restrictions on content).

** Lest I be accused of an excess of anthropomorphism, the name reflects the GPT’s original function, which was doing conversions between the Julian Calendar, the Revised Julian Calendar, the Gregorian Calendar, the Coptic Calendar, the three-year liturgical calendar for Sundays imposed by the post-1969 Roman lectionary and the Revised Common Lectionary, the related two-year calendar for weekday liturgies, and other liturgical calendars, such as the Ethiopian calendar, in use in different Christian churches, in order to keep track of feast days and other liturgical occasions and provide a guide as to what was going on in different churches. Once improved systems for this were implemented, since I had already trained the instance, I repurposed it to generate images relating to my writing project.
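The simplest piece of that calendar arithmetic can be sketched in Python (a minimal illustration of one conversion only, nothing like the full GPT): between 1 March 1900 and 28 February 2100 the Julian calendar runs a fixed 13 days behind the Gregorian, so converting a Julian date in that window is a single 13-day shift.

```python
from datetime import date, timedelta

# Julian-to-Gregorian conversion for dates between 1900-03-01 and
# 2100-02-28, where the two calendars differ by exactly 13 days.
JULIAN_GREGORIAN_GAP = timedelta(days=13)

def julian_to_gregorian(julian_date):
    # julian_date: a datetime.date holding the Julian-calendar reading
    return julian_date + JULIAN_GREGORIAN_GAP

# Christmas on the Julian calendar (25 December) falls on 7 January
# of the Gregorian calendar, which is why some Orthodox churches
# celebrate the Nativity on that civil date.
gregorian = julian_to_gregorian(date(2024, 12, 25))  # 2025-01-07
```

Outside that 200-year window the gap changes (it grows by roughly three days every four centuries), which is part of why a dedicated tool is genuinely useful.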
 
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
15,707
8,284
50
The Wild West
✟769,600.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
By the way, I just realized this post is in the Roman Catholic forum, and I want to state for the record that I appreciate the remarks made by Pope Leo XIV on the need to prevent AI from being used in a de-humanizing manner, and the leadership position taken by the Roman church for the responsible use of AI. As @Michie @chevyontheriver and my other Roman Catholic friends can attest, I have a deep love for the Roman Catholic Church and all it stands for, and thus I hope you will accept my posts as being in fellowship, supporting the Roman Catholic position on AI, and the need to prevent AI from being abused in a de-humanizing manner.
 
Upvote 0