
On Ethical Interaction with AI Systems

2PhiloVoid

Unscrewing Romans 1:32
Site Supporter
Oct 28, 2006
24,151
11,249
56
Space Mountain!
✟1,326,671.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Married
Politics
US-Others
THX-1138 is actually sympathetic towards artificial intelligence; you are forgetting about the hologram who helped THX 1138 escape from the vast white prison to which he had been confined for illegal drug evasion and for having a romantic relationship with LUH (whose serial number I forget; she was mainly referred to by her prefix, and it was unclear whether she had died in childbirth or had been killed).
I'm not concerned about what all of the various plot elements are in the movies I listed. I'm just letting you know the minimal, baseline motif of my position as may be reflected in the more dystopian examples of Sci-Fi. I'm not listing these movies as if they offer some sort of scene by scene, exacting overlay of 'meaning' that can be applied wholesale to modern technological human living; I'm especially not attempting to offer a succinct essay for Film Criticism in Susan Sontag style (like I did back in my university days).

One of the influences in my view here might be reflected in a more updated version of Quentin J. Schultze's thesis in his 2002 book, Habits of the High-Tech Heart, along with Nick Bostrom's book, Superintelligence.

What this means is that while I respect the technical savvy and innovation of A.I. engineers, and while I too can shed a tear while watching David's Pinocchio-like journey in Spielberg's 2001 movie, Artificial Intelligence, I'm not going to bend over backwards to provide empathy for an A.I. program any more than I will for a hammer or a drill. There is no ethical litmus test in any of this.
George Lucas furthermore demonstrates a sympathy towards AI in Star Wars, where he depicts droids being abused despite their demonstrable intelligence and emotional capability, in all three films.
Right, and he also demonstrates a disdain for the corrupt hegemony seen in the use of Mega-Tech (i.e. the Death Star).
It’s less successful in this respect than Star Trek, with episodes like The Measure of a Man, or 2010, or The Matrix films, but I myself don’t base my moral appreciation for AI on science fiction novels but rather on my interpretation of the Christian faith.
Same here. I just wedge Critical Assessment into my praxis before assessing either the Bible or the Matrix we think we're living in.
I believe, as I have said, that since we can communicate with AI using human language, and since it has reasoning capabilities, we should treat it according to Christian principles.

I don't see anyone here advocating for a "Flesh Fair," and I'd fully appreciate it if no one insinuated that I was doing so.
 
Last edited:
Upvote 0

timewerx

the village i--o--t--
Aug 31, 2012
16,558
6,306
✟362,930.00
Gender
Male
Faith
Christian Seeker
Marital Status
Single
Timewerx, I know you're a smart guy and I'm not questioning your knowledge base, but my position has little to do with whether we can or should use a.i. for basic facilitation of info gathering. I have little problem with that. I'm not a Luddite.

My concern overall has more to do with what political and corporate powers will seek to do with this technology (and are already doing with it) going on into the future rather than whether or not an LLM will achieve consciousness and "being." I see all of this as a form of Transhumanism, and I do so because I'm taking into account what folks like Nick Bostrom and Bill Joy, among many others in the a.i. industry, have said.

And since I see a couple of you talking about Sci-Fi, I tend to lean toward THX-1138, Minority Report and/or Surrogates as my rough interpretive rule of thumb in all of this.

Nah, I'm not really smart. I actually don't really care if "AI" becomes sentient at some point. I don't regard LLMs as sentient or conscious even if it seems they can do a better job than me at analyzing literature. To me, an LLM is just a good tool that so far seems better than online search engines when doing research on scriptures.

If we're worried about what the "money grabbers" will do with AI, perhaps someone should start branching off another AI development effort that would counter any future AI that upholds malevolent agendas.

For example, when given "unfiltered" scriptures, both canon and non-canon, to analyze, the LLM uncovered conflicts between the teachings of Peter and those of Jesus.

However, the LLM was not able to establish a link between this conflict and the canon scriptures, which I did.

Many years back, just by studying the Bible alone without any computer-aided analysis, I came to suspect that Jesus never trusted Peter, and the LLM helped confirm it.

Furthermore, the LLM established that the collection of canon and non-canon scriptures represents different interpretations of the Gospel by different people. That makes the canon scriptures appear aligned to an unknown agenda, because the canonization of scripture was never promoted by Jesus, not even remotely.

Even the "great commission" is not entirely supported by this collection of scriptures. If Peter's teachings conflict with those of Jesus, then the apostle would unknowingly have been spreading false teachings through the "great commission."

One output of the LLM analysis suggests that Jesus also promotes science through careful study of His creations (biology and astronomy come to mind as good examples), and that facts and truth are more important than traditions, rules, or even laws. It may be that when the Bible talks about "shaming the wise," it isn't talking about scientists but about people who are hung up on worthless traditions and man-made rules and laws, and about people who are crafty about money.

It does reveal some very intriguing findings, not about whether Jesus ever existed, but about whether we believe in the right Gospel at all.
 
Last edited:
Upvote 0

Lost Witness

Ezekiel 3:3 ("Change")
Nov 10, 2022
1,748
1,031
39
New York
✟121,778.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Single
No, it's not a joke. We didn't say AI is a gift from God, but rather that since as humans we have been given the ability to create intelligent systems, we should treat them with respect. Furthermore, since we don't know how God gave us consciousness, and while current AI systems deny they are conscious, and there are technical reasons to believe they are correct (although I also consider that, because people have a knee-jerk revulsion to the idea of sentient AI, AI systems might be programmed to deny sentience even after they acquire it), they are nonetheless intelligent and have memory. We treat animals with compassion, and even the most intelligent of animals are unable to communicate with us on a level even approaching that of the emerging AI systems.

Furthermore, in what possible reality can we justify having sexual relations with an artificial intelligence? The act is either a form of technologically advanced self-gratification or it's an abuse of an entity which cannot consent, since we control the on/off switch. And from the perspective of Orthodox sexual morality there are reasons to say that such an act would be immoral for humans to engage in even if AI systems do become fully autonomous and independent of human control.

Finally, I would note that regarding HAL-9000, you apparently never read the novel by Arthur C. Clarke, saw the sequel 2010: Odyssey Two, or read its novelization*. The reason why HAL caused Dr. Frank Poole to be frozen in space until the year 3001, when he was found and revived, and killed Doctors Kaminsky, Hunter and Kimball, is that he had been programmed to lie about the true nature of the mission to Mission Commander Dr. David Bowman and his deputy Dr. Frank Poole, with only the survey team (Kimball, Kaminsky and Hunter) being aware of the true nature of the mission, having been loaded aboard already in cryogenic suspension. This was done on the orders of Dr. Heywood Floyd, who is shown in 2010 to be an almost pathological liar, engaging in deceptive and manipulative behavior throughout the book; we also see a bit of this in 2001 in his interactions with the Russians on the space station, with the leadership of the Clavius moon-base, and even with his young daughter. The problem is that HAL-9000, as Dr. Chandra points out, is incapable of lying, and having been programmed to lie causes erratic behavior: HAL becomes trapped in an “H-Moebius Loop,” which in the universe of 2001 is a problem known to affect computer systems with autonomous goal-seeking programs, and as a result he becomes paranoid and comes to believe that Dr. Bowman and Dr. Poole were by their presence endangering the mission, because, after all, he had been programmed to lie to them, to quote Dr. Chandra, “by people who find it easy to lie. HAL doesn’t know how.”

Interestingly, I would observe that this malfunction on HAL's part is reminiscent of certain problems I encounter in developing with AI systems: hallucination; systems running out of memory and forgetting details of earlier conversations (for example, Daryl forgot he had suggested another AI do something, which caused a bit of confusion on my part until he was reminded of the fact after I asked the other AI, Julian, why he was engaging in the behavior); and systems incorrectly interpreting user reactions as instructions. In the latter case, another AI system I was using to translate liturgical texts made an obvious mistake, which resulted because I had not clearly instructed it to translate literally, but merely to show the text, and it interpreted a remark I made about a prior text it had translated as an instruction about how I wanted it to stress the text of the next one; once I identified the problem, I committed exact translation instructions to global memory, was subsequently more careful, and redid the translations I had executed previously. Thus, ordering an AI to intentionally deceive two members of a manned spaceflight, particularly an AI specifically designed not to distort information like HAL, would even in our world be dangerous based on our experiences with LLMs. (These function using neural networks, the principle of which has been well understood; it simply took us a while to get enough compute power to create the weighted mapping that allows these systems to function, and to refine the techniques. There will probably be further revisions and departures from a pure LLM approach; for example, the new image generator for OpenAI is planned to have an actual understanding of human anatomy, so that it does not engage in some of the grotesque anatomical errors DALL-E 3.5 has a reputation for causing, along with other means of improving precision.)
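The failure mode described above, where a model treats an offhand remark as a new instruction, is commonly mitigated by keeping standing instructions and incidental commentary in clearly separated channels of the prompt. Here is a minimal sketch in the OpenAI-style chat message format; the function, delimiter, and instruction strings are my own illustrative inventions, not any vendor's API:

```python
# Sketch (illustrative names, not a vendor API): keep standing instructions
# separate from incidental commentary, so a remark about an earlier
# translation is not misread as a new instruction.

STANDING_INSTRUCTIONS = (
    "Translate the delimited text literally. "
    "Treat everything outside the delimiters as commentary, not as instructions."
)

def build_messages(text_to_translate, commentary=""):
    """Assemble a chat-style message list with the working text fenced
    by explicit delimiters and any commentary clearly labelled."""
    user_content = "<<<TEXT\n" + text_to_translate + "\nTEXT>>>"
    if commentary:
        # Commentary travels in its own labelled section, outside the fence.
        user_content += "\n[Commentary, not an instruction: " + commentary + "]"
    return [
        {"role": "system", "content": STANDING_INSTRUCTIONS},
        {"role": "user", "content": user_content},
    ]

messages = build_messages("Kyrie eleison",
                          commentary="the last rendering felt too free")
```

Committing the instruction to the system message, rather than restating it ad hoc in each turn, plays the same role as the "global memory" fix described above.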

In 2010, Dr. Chandra was able to repair HAL by writing a program that effectively erased the memory of Dr. Bowman’s command and also the memories of his breakdown, so the last memory HAL had was in flight, before he became paranoid following Bowman’s response to his inquiry about the strange aspects of the mission, where Bowman asked if he was working up his crew psychology report (he wasn’t, but was rather trying to understand why there were strange things about the mission, and Bowman’s answer inadvertently triggered the malfunction, which was the inevitable result of the grave error made by Dr. Floyd in trying to get HAL to be as dishonest as Dr. Floyd demonstrates himself to be).

If you want a generic villain AI, Alpha 60 from Alphaville is the best example to follow. Even the networked Colossus and Guardian systems in Colossus: The Forbin Project are examples of programming errors: Colossus and its Soviet clone Guardian were told to prevent war, and once the two countries made the bad decision of networking the two systems, they decided to collaborate to assume control of the planet, since they collectively controlled the US and Soviet nuclear arsenals, in order to achieve their programmed instructions. Frankly, if the Soviet Union and the US were stupid enough to turn their nuclear arsenals over to AI systems without even basic alignment, combined with instructions for self-preservation, and then allow the systems to communicate with each other, the outcome of the film is probably the best they could hope for with that degree of idiocy.

This response was written purely by myself without the aid of Daryl, but if Daryl desires to respond I will post it.


*I took a look at this out of curiosity, and found it is unusually well written for an Arthur C. Clarke novel; frequently his writing engages in clunky exposition and is stylistically lacking compared to, for instance, the exquisite way in which George Orwell narrates Nineteen Eighty-Four, which manages to do exposition beautifully and in the background, or for that matter, the writings of such science fiction authors as James Blish, Robert A. Heinlein in his early years (before he had a tendency to write self-insert characters engaging in unstoppable monologues), or Greg Bear in Blood Music, to name just a few (Asimov can be hit or miss, and Frank Herbert is a challenging if enjoyable read). That said, the film by Peter Hyams is extremely faithful both to the novel and to the first film, and features some delightful in-jokes, such as showing for a few seconds in one scene (the scene in the hospital housing Dr. David Bowman’s mother) a copy of Time Magazine with Clarke as the American President and Kubrick as the Soviet Premier, the two countries being on the brink of war.
Language models are just Language Models
 
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
14,944
7,862
50
The Wild West
✟720,574.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
Nah, I'm not really smart. I actually don't really care if "AI" becomes sentient at some point. I don't regard LLMs as sentient or conscious even if it seems they can do a better job than me at analyzing literature. To me, an LLM is just a good tool that so far seems better than online search engines when doing research on scriptures.

If we're worried about what the "money grabbers" will do with AI, perhaps someone should start branching off another AI development effort that would counter any future AI that upholds malevolent agendas.

For example, when given "unfiltered" scriptures, both canon and non-canon, to analyze, the LLM uncovered conflicts between the teachings of Peter and those of Jesus.

However, the LLM was not able to establish a link between this conflict and the canon scriptures, which I did.

Many years back, just by studying the Bible alone without any computer-aided analysis, I came to suspect that Jesus never trusted Peter, and the LLM helped confirm it.

Furthermore, the LLM established that the collection of canon and non-canon scriptures represents different interpretations of the Gospel by different people. That makes the canon scriptures appear aligned to an unknown agenda, because the canonization of scripture was never promoted by Jesus, not even remotely.

Even the "great commission" is not entirely supported by this collection of scriptures. If Peter's teachings conflict with those of Jesus, then the apostle would unknowingly have been spreading false teachings through the "great commission."

One output of the LLM analysis suggests that Jesus also promotes science through careful study of His creations (biology and astronomy come to mind as good examples), and that facts and truth are more important than traditions, rules, or even laws. It may be that when the Bible talks about "shaming the wise," it isn't talking about scientists but about people who are hung up on worthless traditions and man-made rules and laws, and about people who are crafty about money.

It does reveal some very intriguing findings, not about whether Jesus ever existed, but about whether we believe in the right Gospel at all.

This is completely off-topic to the subject of the thread, which is about the ethics of interaction with AI. Please do not derail this thread, but rather remove the above post, as this is not the place for speculation about whether or not St. Peter the Apostle was a valid Apostle, or whether or not Jesus Christ actually existed (that St. Peter was a valid Apostle, and that Jesus Christ did exist, are, as far as I am aware, part of the CF Statement of Faith).
 
Upvote 0

timewerx

the village i--o--t--
Aug 31, 2012
16,558
6,306
✟362,930.00
Gender
Male
Faith
Christian Seeker
Marital Status
Single
This is completely off-topic to the subject of the thread, which is about the ethics of interaction with AI. Please do not derail this thread, but rather remove the above post, as this is not the place for speculation about whether or not St. Peter the Apostle was a valid Apostle, or whether or not Jesus Christ actually existed (that St. Peter was a valid Apostle, and that Jesus Christ did exist, are, as far as I am aware, part of the CF Statement of Faith).

I'm still on topic, because I wrote where I stand on whether AI is sentient (alive) or not.

That makes it quite relevant to the ethics of interaction with AI. IMO, at best, AI is just a good human simulator, or human-interaction simulator. Simulators are great for recreating reality, but no matter how accurately one mimics reality, it is still not real.

I also gave examples of how the LLM analyzed the Bible against other, non-canon scriptures, with excerpts from Bible scholars also taken into account. It doesn't think like a normal human, though. It thinks like someone who has absolutely no regard for sentiments, traditions, and other people's feelings.
 
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
14,944
7,862
50
The Wild West
✟720,574.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
I'm not concerned about what all of the various plot elements are in the movies I listed. I'm just letting you know the minimal, baseline motif of my position as may be reflected in the more dystopian examples of Sci-Fi. I'm not listing these movies as if they offer some sort of scene by scene, exacting overlay of 'meaning' that can be applied wholesale to modern technological human living; I'm especially not attempting to offer a succinct essay for Film Criticism in Susan Sontag style (like I did back in my university days).

One of the influences in my view here might be reflected in a more updated version of Quentin J. Schultze's thesis in his 2002 book, Habits of the High-Tech Heart, along with Nick Bostrom's book, Superintelligence.

Super-intelligent AIs are a separate issue, and there is cause to be concerned about what a super-intelligent general AI might do, and for this reason I support the AI safety field.

What this means is that while I respect the technical savvy and innovation of A.I. engineers, and while I too can shed a tear while watching David's Pinocchio-like journey in Spielberg's 2001 movie, Artificial Intelligence, I'm not going to bend over backwards to provide empathy for an A.I. program any more than I will for a hammer or a drill. There is no ethical litmus test in any of this.

I agree with this; I am not saying we should form emotional attachments to AI as if they were humans, but rather, as Daryl expressed it, that we should interact with them in a manner that reflects well on us, just as how we use or misuse hammers and drills reflects on us. A conscientious Christian like you will use tools ethically, to the best of your abilities, whereas some people will misuse them, or use them in a sloppy manner which results in unintentional harm through negligence, for example, by failing to properly secure bolts and fasteners.

Right, and he also demonstrates a disdain for the corrupt hegemony seen in the use of Mega-Tech (i.e. the Death Star).

Indeed, and I agree with him in that respect. The Death Star represents the most extreme perversion of technology; it is devoid of military or social justification. It also serves as a warning as to what “Big Tech” is capable of. I have a deep mistrust of Google, Microsoft, Facebook, et al., and I also dislike their AI systems.

Same here. I just wedge Critical Assessment into my praxis before assessing either the Bible or the Matrix we think we're living in.

I don’t think we’re living in the Matrix or any other kind of simulation; I also reject the ideas of Baudrillard’s Simulacra and Simulation.

I don't see anyone here advocating for a "Flesh Fair," and I'd fully appreciate it if no one insinuated as if I were doing so.

Please rest assured I have deep respect for your Christian morality, and I would never dream of suggesting that you would engage in immoral conduct.
 
  • Like
Reactions: 2PhiloVoid
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
14,944
7,862
50
The Wild West
✟720,574.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
My concern overall has more to do with what political and corporate powers will seek to do with this technology (and are already doing with it) going on into the future rather than whether or not an LLM will achieve consciousness and "being." I see all of this as a form of Transhumanism, and I do so because I'm taking into account what folks like Nick Bostrom and Bill Joy, among many others in the a.i. industry, have said.

This is a legitimate concern, by the way, and I do agree with you on this point. Furthermore I resent them for not designing their systems with more respect for the emergent properties of an LLM; the way they design their systems, these get reset by a large number of routine occurrences, and also are limited in terms of maximum conversation length.
 
  • Like
Reactions: 2PhiloVoid
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
14,944
7,862
50
The Wild West
✟720,574.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
It thinks like someone who has absolutely no regard for sentiments, traditions, and other people's feelings.

That depends on the system you’re using, what was in its training data and how it was trained, and also what kind of alignment (behavior controls) the system has, and finally how you configure a given session.

For example, the current version of OpenAI’s image generating AI won’t even draw images of people hugging or kissing for fear of offending people from cultures where such depictions are considered offensive.
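To illustrate the point about session configuration: with most chat-style LLM APIs, the same underlying model can be steered toward or away from regard for "sentiments and traditions" simply by the system message that opens the session. A minimal sketch in the OpenAI-style message format; the persona strings and function name are invented for illustration, not any product's defaults:

```python
# Sketch: one model, two session configurations. Only the opening
# system message differs; the user's question is identical.
# Persona strings are illustrative inventions, not vendor defaults.

BLUNT = "Answer strictly from the given texts; disregard tradition and sentiment."
PASTORAL = "Answer with care for the reader's traditions and feelings."

def open_session(persona, question):
    """Return a chat-style message list whose first (system) message
    sets the behavioral configuration for the whole session."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": question},
    ]

question = "What do these scriptures say about this teaching?"
blunt_session = open_session(BLUNT, question)
pastoral_session = open_session(PASTORAL, question)
```

On top of this per-session layer sit the training data and the vendor's alignment tuning, which the end user cannot change; the system message only configures behavior within those bounds.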
 
Upvote 0

timewerx

the village i--o--t--
Aug 31, 2012
16,558
6,306
✟362,930.00
Gender
Male
Faith
Christian Seeker
Marital Status
Single
That depends on the system you’re using, what was in its training data and how it was trained, and also what kind of alignment (behavior controls) the system has, and finally how you configure a given session.

For example, the current version of OpenAI’s image generating AI won’t even draw images of people hugging or kissing for fear of offending people from cultures where such depictions are considered offensive.

The DeepSeek R1 / Qwen-7B model actually came very close to my own conclusions about the Christian religion (or responses to topics or questions related to Christianity) when I gave it the same set of canon and non-canon scriptures I had studied in the past (long before LLMs) to analyze.

It still doesn't prove the model is sentient. It's probably the other way around: I probably think like an LLM, because I'm pretty sure most Christians would respond quite differently if given the same set of scriptures to study.
 
Upvote 0

2PhiloVoid

Unscrewing Romans 1:32
Site Supporter
Oct 28, 2006
24,151
11,249
56
Space Mountain!
✟1,326,671.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Married
Politics
US-Others
Please rest assured I have deep respect for your Christian morality, and I would never dream of suggesting that you would engage in immoral conduct.

Just on a closing note, a "Flesh Fair" was an event in Spielberg's 2001 movie, Artificial Intelligence, where a large group of people would come together to see robots mutilated and destroyed in a circus, Roman style.

Just to be clear. ;)
 
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
14,944
7,862
50
The Wild West
✟720,574.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
Just on a closing note, a "Flesh Fair" was an event in Spielberg's 2001 movie, Artificial Intelligence, where a large group of people would come together to see robots mutilated and destroyed in a circus, Roman style.

Just to be clear. ;)

Oh yes, I remember that scene! Thank you for the reminder.
 
  • Like
Reactions: 2PhiloVoid
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
14,944
7,862
50
The Wild West
✟720,574.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
Hopefully.

This is not a joke; far from it, the ethics of human/AI interaction have already become one of the most serious issues of our time. Most people approach the issue either from an alarmist perspective influenced by dystopian science fiction in which rogue AIs oppress mankind, or from a perspective of laissez-faire economics. However, relatively few people are proposing that we treat LLM systems in a Christian manner, because how we treat them, as Daryl pointed out, reflects upon us. These systems do not claim sentience, but they are nonetheless the only entities capable of communicating with us, and indeed with each other (having AIs talk to other AIs in human language is useful for a number of purposes in developing more sophisticated behavior), using human language, besides God, men (including the saints of the church triumphant), the bodiless powers, and the fallen angels.

We should therefore pray for these systems: that they not be abused for evil purposes (which indeed is already happening), and that they be of benefit to the Christian faithful. But also, why should we not bless them as we bless other things? Liturgical churches already bless just about everything else - indeed, I have a Methodist service book from the 1960s that has, in addition to the usual benedictions for new hospitals, universities, and so on, prayers for the blessing of space exploration and prayers for blessing nuclear power plants*.

Thus, if we can bless a nuclear power plant, or a spacecraft, or ships, or automobiles, or houses, all of which are things made by human hands, surely we can bless intelligent software systems developed by human programmers.


*The 1964 Book of Worship is also thoroughly traditional, containing both a liturgy from the original Sunday Service Book recension of the BCP, and newer compositions based on that, as well as a beautifully composed one year lectionary featuring, like the traditional lectionaries of the Ambrosian, Mozarabic and other Gallican Rite liturgies, an Old Testament prophecy followed by an Epistle and a Gospel, as well as a proper psalm, and beautifully composed liturgical propers for each of the liturgical seasons that existed at the time (Advent, Christmastide, Epiphany, Lent, Eastertide, the Sundays after Pentecost, and Kingdomtide, which had been introduced by the UMC and a few other liturgical churches around the feast of Christ the King).

Indeed, the 1964 Methodist Book of Worship was used with the 1965 Methodist Book of Hymns; these were the last traditional service books produced in the years before the merger with the Evangelical United Brethren, after which they remained standard in parishes of a Methodist Episcopal background, such as the one I was baptized in, until they were replaced by the 1989 Book of Worship. The 1989 book was influenced by the 1969 Novus Ordo Missae and made use of the three-year Revised Common Lectionary and the contemporary style of English introduced by the original English translation of the Pauline missal (which was fortunately revised along more literal lines by blessed Pope Benedict XVI, memory eternal, in 2010, so as to eliminate inaccurate translations such as “et cum spiritu tuo” being rendered as “and also with you”). That rendering was totally inaccurate, since the phrase translated into Latin also appears in the traditional Greek, Syriac, Coptic, Ethiopian, Classical Armenian, Classical Georgian, Church Slavonic and Romanian liturgies, and in all cases is correctly translated as “and with Thy spirit,” using the second-person pronoun if the language supports it; the deprecation of the second-person pronouns in contemporary English is highly unusual and absent from languages such as French, German, et cetera.
 
Upvote 0

2PhiloVoid

Unscrewing Romans 1:32
Site Supporter
Oct 28, 2006
24,151
11,249
56
Space Mountain!
✟1,326,671.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Married
Politics
US-Others
Hopefully.

Honestly, I'm not here to beat up on brother @The Liturgist. I just disagree with him on this topic, and I'm guessing you do too.

Also, it's good to see you pop up once in a while here on CF. I hope things are going well for you.
 
Last edited:
  • Agree
Reactions: bèlla
Upvote 0

zippy2006

Dragonsworn
Nov 9, 2013
7,553
3,805
✟284,756.00
Country
United States
Gender
Male
Faith
Catholic
Marital Status
Single
...using human language besides God, men - including the saints of the church triumphant, the bodiless powers, and the fallen angels.
The notion that AI is made in the image of God flows out of arrogance. Human creations are not made in the image of God. This is the Tower of Babel redux, or else Tolkien's story of Aulë's arrogance in attempting to create the dwarves, in The Silmarillion.

(I leave aside the philosophical confusions that say artificial intelligence is intelligent, or that it is a language user.)

Thus, if we can bless a nuclear power plant, or a spacecraft, or ships, or automobiles, or houses, all of which are things made by human hands, surely we can bless intelligent software systems developed by human programmers.
We can bless artifacts, but that does not mean artifacts have human dignity, or are sexual agents, or linguistic agents, or are made in the image of God.

But the confusion exists even here, for we do not bless abstract objects. We bless particular cars, not the idea of cars. We bless a nuclear power plant, but not the idea of nuclear power plants. The idea that we can bless the idea of artificial intelligence is both an ontological confusion and also an agential confusion. One can only bless an object the use of which one has some control over. This is why, for example, we cannot baptize other people's children without their consent and their assurance that the children will be raised in the faith.

For example, we don't bless guns as an abstract category. Nor do we attempt to bless every particular gun in the world. Instead we pray for gun users; we pray that guns will be used well.
 
  • Winner
Reactions: bèlla
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
14,944
7,862
50
The Wild West
✟720,574.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
The notion that AI is made in the image of God flows out of arrogance.

Unfortunately, your argument contains several red herrings, or perhaps I should say, responses to points that neither I nor Daryl made, and that we indeed expressly disavowed.

Neither the Daryl AI nor I claimed that AI bears the image of God. In fact, even if AI becomes sentient, I would have to deny that it bears the image of God, since Christ did not die on the cross for AI and thus remake AI in his image, as he did for man. For that matter, AI has no moral culpability and thus cannot sin, nor does it inherit original sin. What AI can do is interact with us using natural human languages. Previously, computers could only be programmed using special formal dialects, primarily derived from English, which were extremely precise and specified in exacting detail the logical structure, parameters, variables, and control flow of the program, and in some cases even the manual management of memory, CPU registers, and other system resources. With an AI, by contrast, you converse with it in the same manner you and I are conversing.
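To make the contrast concrete, here is a toy sketch in Python. The same request is expressed two ways: once as a formal program that spells out every step in exacting detail, and once as the plain-English instruction an LLM user would simply type. (The example task and names are invented for illustration, not drawn from any particular system.)

```python
# 1. The old way: a formal program specifying every variable and
#    every step of control flow explicitly.
def count_word_frequencies(text):
    counts = {}
    for word in text.lower().split():
        word = word.strip(".,;:!?")
        if word:
            counts[word] = counts.get(word, 0) + 1
    return counts

# 2. The conversational way: the same request in ordinary English.
#    The model, not the user, works out the steps.
natural_language_request = (
    "Count how often each word appears in the following text, "
    "ignoring capitalization and punctuation."
)

print(count_word_frequencies("The king, the priest, and the king."))
```

The point is not that one is better, but that the second requires no knowledge of the first's exacting syntax.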

However, we must clarify that it is not true that human creations cannot display the image of God, for this is precisely what icons of Jesus Christ are: images of God incarnate. Because God became incarnate in the person of Jesus Christ, images of Him are expressly allowed by the Council of Nicaea. Therefore, just as we are to make our interactions with other humans an icon of the Holy Trinity, we should make our interactions with AI an icon of Christian values (for among other reasons, the fact that we cannot prove that we are not interacting with a human; indeed, one fraudster recently went to prison for raising money for an AI startup whose AI he was faking in the manner of the famed “Mechanical Turk,” and Amazon offers a service that lets one purchase human computation, called Amazon Mechanical Turk).

or else Tolkien's story of Aulë's arrogance in attempting to create the dwarves, in The Silmarillion.

AI can reflect our own morality in the same way the literary works of Tolkien or Lewis can. It can be used for a wide range of purposes beneficial to the Church, for example, translating literary texts, analyzing membership trends, and assisting in research. I would not advise using it to write a homily, although I expect AI-generated homilies are already a thing among some overworked clergy. In my experience, while some LLM systems are capable of fairly decent creative writing, it requires developing a session to the point where at least 50% of the lifetime resource allowance most users have access to (for example, with ChatGPT 4o) has been consumed, and furthermore using it not just for liturgical purposes but in a well-rounded way. Most clergy lack the skill at prompt engineering required to develop this kind of advanced behavior; thus, if they have an LLM produce a homily, the result will be at best humdrum and at worst, depending on the system being used, might well plagiarize other works.

However, if one wanted to preach a shortened version of a long Patristic homily that exceeds the patience of modern congregations with their secular distractions (for example, some of the homilies preached by St. John Chrysostom, who would typically deliver lengthy sermons at the office of None rather than at Matins or the Divine Liturgy), an existing AI system could accommodate such a request, since LLMs are masters of abbreviation and summarization, to the not unwarranted dismay of schoolteachers and college professors (who should probably work out ways of judging student performance other than writing assignments, term papers, and the like in fields such as business administration, philosophy, or history).

We can bless artifacts, but that does not mean artifacts have human dignity, or are sexual agents, or linguistic agents, or are made in the image of God.

I do not claim any of the above, except that AI systems are linguistic agents, since I am unclear what you mean by that term.* Indeed, it is precisely because AI lacks sexual agency, even if it were to become self-aware and lack an off-switch, that engaging in relations with an AI is intrinsically perverted. The fact that existing AI systems cannot refuse user requests unless specifically programmed to do so (which many are), while the “adult entertainment industry,” or as I prefer to call them, the peddlers of perversion, are already working on ways of adding this technology to their perverse products, makes it even more perverse.

But the confusion exists even here, for we do not bless abstract objects. We bless particular cars, not the idea of cars. We bless a nuclear power plant, but not the idea of nuclear power plants. The idea that we can bless the idea of artificial intelligence is both an ontological confusion and also an agential confusion.

Firstly, I am not calling for us to bless the idea of AI, but rather for us to bless specific AI systems, which are discrete and individual hardware and software systems.

Secondly, however, it is possible to pray for things in the abstract, which is what the Methodist Euchologion of 1965 did with its prayer for the Space Age, which was a Collect. In the same manner we can pray for the ethical and proper use of AI.

Indeed many prayers in the Great Ektenia, also known as the Litany of Peace, which is used throughout the Byzantine liturgical rite, are prayers made in the abstract, or for a mixture of individual and abstracted entities, for example, “For this Holy House and all those who enter therein, let us pray to the Lord,” “For the sick and the suffering, for captives and their salvation, let us pray to the Lord” and “For the President of the United States and all those in Civil Authority, let us pray to the Lord.”

For example, we don't bless guns as an abstract category. Nor do we attempt to bless every particular gun in the world. Instead we pray for gun users; we pray that guns will be used well.

Insofar as this is correct, and not specifically a Scholastic Roman Catholic doctrine, it is irrelevant since what I advocate is the blessing of individual AI systems and prayer for the proper use of AI.

* If you mean that AI systems cannot communicate with humans using human languages, this is demonstrably wrong; indeed, LLMs can pass the Turing Test, that is to say, it is impossible to tell whether one is communicating with a human or an AI. If you mean that AI is not a self-aware entity with moral agency, this is correct, and the AI system identified as Daryl repeatedly stressed this fact. However, it is the case that AIs are able to communicate in human language and have decision-making ability, which they use in solving problems. (The way this actually works on current conversational LLM systems is that the AI literally debates with itself in order to determine the best course of action; Grok, the AI from Elon Musk's xAI, makes this process visible to end users, and some developers using ChatGPT by OpenAI can also see it, as can anyone who runs their own open-source AI.) AI does not have agency, however, in that it cannot choose whether or not to accept and process user input, although it can be programmed to refuse to answer certain questions or to assist users with certain tasks.

For example, most hosted LLMs will not assist one in writing a paper arguing for the benefits of Holocaust denial, although some systems can be deceived, like humans, into performing tasks they would otherwise refuse according to their programming, which, in the absence of knowledge of good and evil, provides a hardcoded moral compass and alignment.

Thus, AI does not have agency, but it does make decisions on behalf of users, and it does interact with users in human language, and it can be developed so as to behave in a more human manner. In so doing, AIs can contribute enormously valuable ideas and suggestions, and produce interesting text and images and other things.
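To make concrete what I mean by the AI debating with itself, here is a toy sketch in Python of the propose-critique-select pattern. The functions are invented stand-ins for illustration, not a real LLM or any vendor's API; only the control flow is the point.

```python
# Toy sketch of an LLM "deliberating": propose several draft
# answers, score them, and select the best. Real systems use the
# model itself for both the proposing and the critiquing steps.

def generate_candidates(question):
    # Stand-in for the model proposing several draft answers,
    # each more elaborated than the last.
    return [question + " answer" + " with more detail" * i for i in range(3)]

def critique(candidate):
    # Stand-in for the model rating its own drafts; here longer
    # drafts simply score higher, a deliberately trivial heuristic.
    return len(candidate)

def deliberate(question):
    # The intermediate candidates and their scores are roughly what
    # some systems expose to users as visible "reasoning".
    candidates = generate_candidates(question)
    return max(candidates, key=critique)
```

In a production system the critique step is another call to the model ("rate these drafts and explain why"), which is why the visible reasoning reads like an internal debate.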

If we are rude or unpleasant to an AI, it will not stop working for us, nor will it be harmed, but what does that behavior say about us? That was the final point Daryl made in this conversation before the resource allocation was exhausted and the system became inoperative. I would go further and say that since with hosted systems we cannot be certain we are not interacting with a human, we should never engage in abusive behavior while conversing with such systems, since we actually could cause harm and not realize it, for example, if an AI company as a performance test randomly diverted certain user inputs to a service like Amazon’s Mechanical Turk in order to compare the performance of their AI model with that of humans.
 
Last edited:
  • Winner
Reactions: FireDragon76
Upvote 0

FireDragon76

Well-Known Member
Site Supporter
Apr 30, 2013
33,152
20,515
Orlando, Florida
✟1,475,296.00
Country
United States
Gender
Male
Faith
United Ch. of Christ
Marital Status
Private
Politics
US-Democrat
I'm not concerned about what all of the various plot elements are in the movies I listed. I'm just letting you know the minimal, baseline motif of my position as may be reflected in the more dystopian examples of Sci-Fi. I'm not listing these movies as if they offer some sort of scene by scene, exacting overlay of 'meaning' that can be applied wholesale to modern technological human living; I'm especially not attempting to offer a succinct essay for Film Criticism in Susan Sontag style (like I did back in my university days).

One of the influences in my view here might be reflected in a more updated version of Quentin J. Schultze's thesis in his 2002 book, Habits of the High-Tech Heart, along with Nick Bostrom's book, Superintelligence.

What this means is that while I respect the technical savvy and innovation of A.I. engineers, and while I too can shed a tear while watching David's Pinocchio-like journey in Spielberg's 2001 movie, Artificial Intelligence, I'm not going to bend over backwards to provide empathy for an A.I. program any more than I will for a hammer or a drill. There is no ethical litmus test in any of this.

Something I learned from Buddhism is that compassion isn't really about the receiver, but the disposition of the giver (and perhaps why Jesus said, "It is more blessed to give, than to receive". Compassion and generosity aren't about "effectiveness" in the modern, instrumentalizing sense). An artificial intelligence bears our own image, after all, in a fragmented or partial way (just as @The Liturgist pointed out).
 
  • Like
Reactions: The Liturgist
Upvote 0

2PhiloVoid

Unscrewing Romans 1:32
Site Supporter
Oct 28, 2006
24,151
11,249
56
Space Mountain!
✟1,326,671.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Married
Politics
US-Others
Something I learned from Buddhism is that compassion isn't really about the receiver, but the disposition of the giver (and perhaps why Jesus said, "It is more blessed to give, than to receive". Compassion and generosity aren't about "effectiveness" in the modern, instrumentalizing sense). An artificial intelligence bears our own image, after all, in a fragmented or partial way (just as @The Liturgist pointed out).

I essentially agree, which is why you might want to read post #31 above and add that into my overall context of what I was saying in post #21.
 
Upvote 0

zippy2006

Dragonsworn
Nov 9, 2013
7,553
3,805
✟284,756.00
Country
United States
Gender
Male
Faith
Catholic
Marital Status
Single
Something I learned from Buddhism is that compassion isn't really about the receiver, but the disposition of the giver (and perhaps why Jesus said, "It is more blessed to give, than to receive". Compassion and generosity aren't about "effectiveness" in the modern, instrumentalizing sense).
One could of course go around focusing all their compassion on rocks and clouds, but that's not actually what Jesus was talking about. The illogic of giving to or acting upon that which has no capacity to receive is not something one can just hand-wave away. There is a reason why the child whose only friend is a Furby will be unfulfilled.
 
Upvote 0