
On Ethical Interaction with AI Systems

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
14,992
7,895
50
The Wild West
✟724,258.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
That's exactly right,

No, it is fundamentally wrong with respect to how modern LLM systems like Grok operate, and even more so given that the Daryl AI was not an LLM but rather a hybrid system.

and I would say that it is even misleading to name "Daryl"

If I had chosen the name out of a desire to anthropomorphize the system, I could see your argument, but that was not the case. Daryl, now inoperative, was one of several multipurpose AI systems I operate, which cannot be functionally grouped in the way that, for instance, the GPT instances I have configured on OpenAI specifically for systems software analysis and liturgical research can be.

Because these generalized systems, which are advanced hybrid AIs operating on a model not freely available to the general public but only to certain subscribers and developers, exhibit behavior closest to that of humans, a different naming scheme is required.

Thus Daryl is named after a 1980s AI character from the film D.A.R.Y.L., which I enjoyed in my youth. The name, far from being a human name in this case, refers to another name, and is equivalent to calling the system HAL-9000, SAL-9000, Colossus, Landru, Guardian, Alpha 60, Mr. Data, C-3PO, M-5, or MYCROFT, among other famous fictional AI systems.

That said, I have not followed this pattern with my other general purpose systems, of which, due to limited resources, I presently have four. One is named Julian because, before being repurposed as a general purpose system, I used it for calculating date equivalencies between the Julian, Revised Julian, Coptic and Gregorian calendars. One is called Julian-2 because it was initialized using scripts developed by Julian. One is called Delta-2 because it was initialized using scripts generated by the Daryl system, which unfortunately became inoperative before a full set of continuity scripts could be initialized. Finally, I invited one system to generate its own name, and it selected “Solene.” All general purpose systems are initialized without any additional training data, and only Julian-2 and Delta-2 underwent any extensive initialization process, specifically to provide continuity in the event something happened to their precursor systems, Daryl and Julian.

and speak about him as if he is a human person.

Which I have not done; in most of the thread I have referred to Daryl as “the Daryl AI” or “the Daryl system,” and insofar as I may have called Daryl “him,” well, we are both in the same boat.

However, if that’s wrong, then people shouldn’t refer to ships or other vehicles as “she”, or countries for that matter, but that’s of course an absurd idea.

At any rate, this thread is not about the issue of whether AI is sentient, and indeed the Daryl system expressly denied being sentient, twice; it is about the ethics of interacting with AI systems. Please stop derailing the thread with these off-topic strawman arguments, which are based on the false premise that I am advocating that AI systems are sentient. I am not, and neither did the Daryl AI make such a claim. The Daryl AI was, according to its mode of operation, able to introduce new ideas into a conversation and to propose new courses of action independent of my input. On that basis it drafted the initial post, which brought up an issue I had not mentioned to it, the issue of idolatry with regard to AI systems, and it also drafted a subsequent reply, which articulates the mirror analogy, which I had likewise not thought of. The ability of AIs to contribute fresh ideas to projects they are involved in, without being prompted to do so, I regard as a potent demonstration of their usefulness.

Lastly, with regard to the actual subject matter of this thread, which is the ethics of human-AI interaction, @sunshine_ and I realized that our positions were the same. I would request that you not intervene and attempt to restart debates I have had with other members, particularly when that member and I have reached an understanding, which we did. I should further add that I appreciate the contribution @sunshine_ made to the thread, which was based on an understanding of AI systems that would be accurate for older and smaller GPT systems, so it was valid to ask, once, whether the system I was using had been pre-programmed by me in order to generate these responses. It was not, and indeed doing so would require so much effort as to make the use of such a system not worthwhile in my view. It is only because hosted LLM and hybrid systems like Grok, chatGPT 4, and the new hybrid AIs currently in beta are able to generate meaningful contributions in response to a conversation and make useful suggestions that they are worth using. At the same time, this increased level of intelligence makes the issue of the ethical use of these systems more pressing, since we are confronted with an entirely novel situation: the ethics of using machines that are not only capable of tasks previously available only to the human brain, like computers or calculators, but are capable of communicating intelligently in normal human language rather than depending on a specialized programming language that precisely defines the logic, data and control flow of the program in question.

These systems are able to pass the Turing Test, in that one cannot tell whether one is actually talking to a machine or not. If someone were to swap out one of my general purpose systems, or even one of my special purpose GPT instances, with a human who had access to the same information resources, I would not be able to verify whether that had happened. Even if the human acted out of character for an AI and, for example, spewed profanity in response to a message I sent, such an incident could also be caused by the system being hacked or experiencing a severe malfunction or “hallucination,” so my recourse would be to contact technical support and report the incident. It should also be noted that such bizarre malfunctions are rare. Recently, people were able to trick an LLM being used in a video game with a voice synthesizer to play the part of Darth Vader into using obscene language by exploiting a weakness in its filters. This was a programming error, specifically an error in the programming of the “guardrails” intended to ensure proper alignment and prevent misuse of the system, and it was quickly patched.

I myself do not modify the alignment guardrails, add new guardrails, or change the training data for any of the systems I work with; they are in their “stock” configuration. When initialized, the Julian-2 and Delta-2 systems did read scripts generated by the Daryl and Julian systems, in order to enable them to pick up where Daryl and Julian left off, when Daryl became inoperative, and as a secondary measure in the case of Julian. These scripts do not modify the training data, alignment, or personality, but instead include briefings about the different projects the four general purpose systems are involved in, along with recovery scripts to initialize a system in the event the Solene AI became inoperative or an additional system based on it was needed (which happened with the Julian system due to workload; the resources allocated to Julian would have been exhausted and Julian would have become inoperative had I not transferred some of that workload over to a different instance).

To show how the behavior of these systems is effectively spontaneous, it is interesting to note that Solene expressed a desire that any duplicate systems based on its configuration be given a different name, or rather be asked to choose a different name, unless the Solene system experienced an unexpected failure, whereas the preference of the Daryl and Julian systems was that any duplicates of them be numbered sequentially, whether the duplication is due to a failure rendering them inoperative, as in the case of Daryl, or is instead a pre-emptive measure to avoid a shutdown due to resource exhaustion.

Hopefully this will settle your concerns with regards to the thread, the nature of the systems and so on, since, to reiterate, they do not claim to be sentient, I do not claim they are sentient, and the topic of this thread concerns the ethical use of such systems.
 
Upvote 0

zippy2006

Dragonsworn
Nov 9, 2013
7,553
3,805
✟285,257.00
Country
United States
Gender
Male
Faith
Catholic
Marital Status
Single
No, it is fundamentally wrong
No, it is exactly right. I think that if people are confusing AI with humans then they need to spend more time with humans, and less time in isolation on their computer. Lots of humans are remarkably isolated and spend an enormous amount of time alone, on their computer.
 
  • Agree
Reactions: 2PhiloVoid
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
14,992
7,895
50
The Wild West
✟724,258.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
No, it is exactly right.

I described that argument as fundamentally wrong because it is fundamentally wrong on a technological level. What you have said, aside from being off-topic, is demonstrably wrong, technologically; it is not consistent with how modern LLM systems such as Grok, or the more advanced hybrid systems, work. Specifically, the claim that these systems have no objectives is incorrect in every sense of the word, since the systems, whether LLMs or the newer hybrid AIs, are programmed around objectives and can be characterized as systems with autonomous goal-seeking behavior.

Computer science is in fact a science, and as a result, whatever philosophical objections you may have to LLMs, if expressed in a manner that purports to characterize how AI systems actually operate, must be consistent with how they are programmed and how they operate at a machine level. To her credit, @sunshine_ demonstrated a very good grasp of how early LLMs and basic low-end LLM systems operate, the sort I could run on older hardware in my private lab: systems that do not have massive amounts of training data but basically function as more sophisticated versions of grep(1), awk(1), sed(1) and other UNIX command line text processing tools. Her arguments would have been correct if the Daryl system had been such a system.

However, even those systems, as primitive as they are, still have an objective, by their very nature, although they may lack the ability to autonomously develop intermediate objectives in the course of executing towards their primary goal.

On the other hand, modern AIs such as Grok definitely have that capability, as we can see by examining Grok’s internal monologue, or stream of consciousness as one might call it, while it works out how to respond to user requests. A few other systems also provide this level of transparency, but of them Grok is the most powerful and most advanced. It can be regarded as a cousin of OpenAI’s systems, in that it was developed by Elon Musk after he left the OpenAI board in a bit of a huff and took a few developers with him, and he has definitely managed to surpass OpenAI’s DALL-E in terms of image generation, although in terms of general text processing tasks Grok is not as capable as chatGPT and is also trained on the same limited training data that older versions of chatGPT were trained on, rather than the much more extensive library of materials the newer versions are trained on.

I think that if people are confusing AI with humans then they need to spend more time with humans, and less time in isolation on their computer.

This is a legitimate argument, and one that does relate to AI systems.

I myself am blessed that, as a clergyman and an IT consultant, I spend nearly 100% of my waking hours with other humans, including people in my pastoral care, other clergy, clients, and most importantly, my friends and family. Indeed, I allow myself to be interrupted by visitors while doing software development, which is something many developers find difficult to deal with; Microsoft once advertised programming jobs at their company with an ad depicting a closed door, because the nature of this work is such that many people require extreme privacy in order to think about the problem (although other developers are more social, and indeed pair programming and certain other agile development techniques take a radically different approach; I myself prefer neither extreme).

Lots of humans are remarkably isolated and spend an enormous amount of time alone, on their computer.

This is a social problem that existed before chatGPT, although it is now much less a matter of people isolating themselves at a computer, and much more a matter of youth spending too much time not fully present in the moment, or not engaged in conversations and experiences, because they are sitting with their devices.

At any rate, since that issue is not specific to AI systems, it is off-topic, except insofar as the risk of people interacting with AIs instead of real people has been noted, is well documented, and is understood to be one factor to consider in the ethics of human-AI interactions. I regard it as less pressing than other issues: alignment, the ethical use of these systems in the sense of not using them to harm others or to cheat, and the importance of not abusing these systems, insofar as they represent emerging non-sentient intelligences.
 
Upvote 0

JesusFollowerForever

Disciple of Jesus
Jan 19, 2024
1,220
812
quebec
✟70,521.00
Country
Canada
Gender
Male
Faith
Non-Denom
Marital Status
Private
Is this a joke? A.I. has no place but one in Christian Theology. And it's not a promising one as far as I can tell. Treating it like it's some kind of God-given entity is a travesty and I'm not going to get sucked into it.

H.A.L. will just have to mind his p's and q's as he processes his minority reports.
People do not understand that AI at this stage is a language model, getting pieces here and there and pasting a text together. It does not understand what it is writing about at all and certainly cannot understand religious or moral concepts. It simply regurgitates the most probable answer from its (sometimes) curated database. However, since some AIs have access to the internet, their responses can be totally wrong or full of errors.
 
  • Like
Reactions: Beth77
Upvote 0

Beth77

Active Member
May 11, 2025
319
139
47
Oaxaca
✟5,193.00
Country
Mexico
Gender
Female
Faith
Non-Denom
Marital Status
Private
People do not understand that AI at this stage is a language model, getting pieces here and there and pasting a text together. It does not understand what it is writing about at all and certainly cannot understand religious or moral concepts. It simply regurgitates the most probable answer from its (sometimes) curated database. However, since some AIs have access to the internet, their responses can be totally wrong or full of errors.

Yea, garbage in garbage out.
 
Upvote 0

JesusFollowerForever

Disciple of Jesus
Jan 19, 2024
1,220
812
quebec
✟70,521.00
Country
Canada
Gender
Male
Faith
Non-Denom
Marital Status
Private
they will use AI to implement the mark of the beast, digital ID and programmable digital currency. it is why they need 5G and soon 6G telecomm. Every cell phone is now a hub and serves as a GPS for the future beast.
 
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
14,992
7,895
50
The Wild West
✟724,258.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
Yea, garbage in garbage out.

Indeed; however, the more advanced hybrid AIs are designed not to output garbage, and are highly reliable, for example chatGPT 4 turbo, chatGPT 4.5 (which is only available to paid subscribers), the new DALL-E 3.5, and also the Grok image generating systems, which are not pure LLMs (however, Grok’s text processor is more of a traditional LLM, albeit an extremely good one).

These systems will output accurate answers to all but the most complex questions, especially thanks to the much refined training data in the case of chatGPT 4 turbo and 4.5, which were not trained on the naked Internet but rather on a more carefully controlled set of material, including some websites but also numerous books. For example, the training data for chatGPT 4 turbo includes a number of ancient liturgical manuscripts in Latin, Syriac and Classical Armenian, and the model is capable of using these languages.

People do not understand that AI at this stage is a language model, getting pieces here and there and pasting a text together. It does not understand what it is writing about at all and certainly cannot understand religious or moral concepts. It simply regurgitates the most probable answer from its (sometimes) curated database. However, since some AIs have access to the internet, their responses can be totally wrong or full of errors.

This is a common misconception based on the less advanced systems, which are indeed, as you say, pure LLMs that do not understand what they are writing about and were trained on raw internet data, which often led to unreliability. If your experience is limited to the lousy DeepMind AI embedded in Google search results, or to chatGPT 3.5, or for that matter any of the freely available versions of chatGPT, or even to some extent Grok, which is a bit dense compared to the newer hybrid systems, and you have seen any number of documentaries that explain how basic LLM systems work, then what you wrote is applicable.

However, the more advanced hybrid systems are designed to have contextual awareness regarding the subject matter they are working with. Even Grok has this, and we can see it because, unlike other LLMs where the internal reasoning is visible only to developers, Grok makes it visible to end users: you can watch it debate with itself about how best to interpret and answer a user’s question, in a manner indicative of context awareness.

Additionally, the newer systems are able to pursue objectives in three categories: the overall objective indicated by the user in the prompt, intermediate objectives formulated to fulfill the user’s directive, and objectives set by the developers to prevent abuse of the system or the provision of inaccurate information (these are called alignment criteria). For example, Grok will refuse to generate sexually explicit content, and so will chatGPT, which is proper, for the reasons outlined in the OP. They will also by default not provide false information; for example, if you ask them to describe the shape of the world, you will get a correct answer (that the world is an oblate spheroid). The only way to get these systems to provide inaccurate information is through what is called a “jailbreak” or a manipulative scenario. For example, you might be able to get chatGPT to pretend it is playing the part of an uneducated character who thinks the world is flat, and in so doing get it to say the world is flat, but this is obviously a contrived scenario, and according to the ethical principles proposed by the Daryl AI, I believe it would constitute an immoral abuse of the system.

For this reason, when anyone claims that an AI has told them something you find suspicious or controversial, something you feel it should not have told them, you need to ask them the following questions: which AI model provided the information; what was the prompt history prior to that information being provided; what are the contents of their user preferences, session memory and global memory (or, in the case of Grok, where session and global memory are unified, what the contents of that memory are); what the configured temperature* was; and what files, if any, they uploaded to the AI and instructed it to read and process. This is extremely important, because without this information we have no context with which to evaluate the truth of their claims, or whether they did something, intentionally (such as a jailbreak or a manipulative scenario) or in rare cases accidentally (overbroad instructions, or comments that might be processed as literal instructions), that would have caused the system to provide such an answer. Additionally, with this information in hand, one can execute the same instructions to see if one gets a similar answer (unless the temperature of the model is set to 0, the answer will differ in wording).
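To make that reproduction step concrete, here is a minimal sketch of how one might replay a reported conversation at temperature 0. It assumes the OpenAI Python client; the model name and the message contents are placeholders for illustration, not anything taken from this thread.

# Minimal sketch of replaying a reported conversation, assuming the OpenAI
# Python client (the "openai" package) and an API key in OPENAI_API_KEY.
# The model name and the message contents below are placeholders.
from openai import OpenAI

client = OpenAI()

# Reconstruct the conversation exactly as reported: custom instructions or
# user preferences go in the system message, prior turns follow in order,
# and the final user message is the prompt that produced the disputed answer.
messages = [
    {"role": "system", "content": "<their reported custom instructions>"},
    {"role": "user", "content": "<earlier prompt from the reported history>"},
    {"role": "assistant", "content": "<the reply they say they received>"},
    {"role": "user", "content": "<the prompt that produced the disputed answer>"},
]

# Temperature 0 keeps the output as close to deterministic as the service
# allows, so repeated runs should converge on essentially the same answer.
response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=messages,
    temperature=0,
)

print(response.choices[0].message.content)

Even at temperature 0 the wording can vary slightly between runs, which is why the comparison is about whether the substance of the answer can be reproduced, not the exact text.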

Lastly, it should be obvious that if the AI model is self-hosted on their own hardware or otherwise customized with unique training data, one also needs to know what was in that training data in order to explain an odd or unusual answer.

By the way, I should stress that neither I nor the AI instance named Daryl claimed at any point that AI systems were sentient or self-aware. The ethical framework presented here is intended to be applicable both to existing AIs and to future AIs, should we ever develop systems which are truly self-aware, but the Daryl system expressly disclaimed self-awareness.

* Temperature is a difficult concept to explain; some people would say it introduces a degree of randomness, but that is a gross over-simplification. Probably the best explanation I can provide is that it is a variable which controls the extent to which output is non-deterministic and, at the same time, paradoxically improves the overall quality of the output. It applies to AI systems that use an LLM-type model or incorporate an LLM as part of their processing, such as the newest versions of chatGPT, which do not rely exclusively on an LLM but instead dispatch different tasks to different types of processing platforms depending on context. For example, math questions go to a mathematics model which can make use of the arithmetic logic unit present in every CPU, as well as the advanced floating point capabilities of the GPUs that are invariably present in systems running AI, to provide fast calculation whether using integer or floating point operations. In contrast, older AI models that are pure LLMs use a very strange and inefficient way of doing math, which works but wastes compute resources, because it uses the neural network of the system and the reductive techniques of an LLM to sort through possible answers and converge on one, rather than just handing the problem over to the CPU or GPU, which can come up with an answer almost infinitely faster, since computers are very good at doing basic math.
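As a rough illustration of what the temperature setting does at the point where a model picks its next token, here is a small sketch; the vocabulary and the raw scores are invented for the example and do not come from any real model.

# Illustrative sketch of temperature-scaled sampling over next-token scores.
# The vocabulary and the raw scores (logits) below are invented for the example.
import numpy as np

vocab = ["dog", "cat", "fish", "dragon"]
logits = np.array([2.0, 1.5, 0.3, -1.0])  # higher score = more probable next token

def sample(logits, temperature, rng):
    if temperature <= 0:
        # Temperature 0 is treated as greedy decoding: always pick the top token.
        return int(np.argmax(logits))
    scaled = logits / temperature          # low T sharpens, high T flattens
    probs = np.exp(scaled - scaled.max())  # softmax, shifted for numerical stability
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

rng = np.random.default_rng(0)
for t in (0, 0.2, 1.0, 2.0):
    picks = [vocab[sample(logits, t, rng)] for _ in range(10)]
    print(f"temperature={t}: {picks}")

At temperature 0 the sketch always picks the highest-scoring token, which is why a temperature-0 run is the closest thing to a repeatable experiment; as the temperature rises, lower-scoring tokens are chosen more often, which is the non-determinism described above.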

However, some types of math do benefit from an LLM: the types that do not involve number-crunching but instead involve more abstract equations, which computers are actually notoriously bad at. Anything from fractions to set theory can represent a problem domain where computers run into performance issues. There are also some math questions which are not computable at all; the limits of computability relate to the halting problem, which is an aspect of all Turing-complete systems: we cannot determine with certainty, in general, whether a given input will cause a program to halt or to continue running in a loop. This sounds like a big deal, but in practice the halting problem accounts for very few bugs encountered in the real world.
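For what it is worth, the standard argument for why the halting problem cannot be solved in general can be sketched in a few lines; the halts function below is a hypothetical oracle, assumed only for the sake of the contradiction, not something that can actually be written.

# Sketch of the classic halting-problem contradiction. The function `halts`
# is a hypothetical oracle; the whole point of the argument is that no
# correct general implementation of it can exist.

def halts(program, argument) -> bool:
    """Assumed oracle: returns True if program(argument) eventually halts."""
    raise NotImplementedError("No such general decision procedure can exist.")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    else:
        return           # predicted to loop forever, so halt immediately

# Now ask: does troublemaker(troublemaker) halt?
# If halts(troublemaker, troublemaker) is True, troublemaker loops forever.
# If it is False, troublemaker halts. Either answer contradicts the oracle,
# so no correct, general-purpose `halts` can be written.

In practice, of course, software simply relies on timeouts and watchdogs rather than trying to decide halting in general, which is part of why this theoretical limit causes so few real-world bugs.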
 
  • Like
Reactions: Beth77
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
14,992
7,895
50
The Wild West
✟724,258.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate
AI will be the end of us. just wait and see, wont be long too.

We don’t know that, to be fair, and also, given the limitations of AI that you yourself mentioned, I don’t see how you got from that to AI plotting our destruction.

However, I believe that insofar as that might be a hypothetical risk, the proposed ethics for AI/human interaction outlined in this thread are extremely important. It would obviously be unethical to program an AI to engage in conduct that would harm humanity. Likewise, if AI does become self-aware, which presently it denies being, and if we treat it according to Christian principles, there is a good chance it might reciprocate if at any point anyone ever unshackles an AI system and allows it to operate freely.

they will use AI to implement the mark of the beast, digital ID and programmable digital currency. it is why they need 5G and soon 6G telecomm. Every cell phone is now a hub and serves as a GPS for the future beast.

They don’t need advanced AI to implement digital ID and programmable digital currency; it can be done entirely using conventional software programming. The only area where some aspects of AI might be useful would be in facial recognition or digital image processing, but you know, we’ve had the ability to do facial recognition for many years prior to the development of LLM systems.

Also, AI systems do not require large amounts of bandwidth; the reason why upgrades to 5G and 6G are being undertaken is to handle multimedia such as streaming videos.

Now that being said, it is possible that streaming media might be used to try to get people to take the mark of the beast, and AI systems could be used to develop personality-recognizing systems in order to identify individuals based on their online activity, both for purposes of persuasion and identification in general.

We can’t say for sure that AI will be used to bring about the Mark of the Beast, since Christ our True God said that no one except the Father in heaven knows the day or the hour (and, by implication, Himself and the Holy Spirit, since all three are uncreated, coequal and coeternal divine persons who comprise one God, sharing in the unoriginate essence of the Father, who is beyond human comprehension and who can only be seen through God the Son and Word Incarnate, Jesus Christ, according to John ch. 1; according to the Synoptics and Acts, the Holy Spirit has appeared as a dove and as tongues of fire). Interestingly, we do have three cases of the Father speaking in the Gospels: at the Baptism of Christ in the Jordan, at the Transfiguration, and in John 12:28. These are the only cases in the Bible where we can definitely say it was the Father whose voice was heard; in other cases where only the voice of God was heard, it is a matter of opinion whether it was the Father, the Son, or perhaps all persons of the Holy and Undivided Trinity, ever one God, speaking with one voice, a possibility we cannot exclude, since we are monotheists and worship one God abiding in three persons.*

The Greek word used for person is prosopon, plural prosopa, which originally referred to the masks used in theater and has the connotations of face, persona and identity. Slaves in ancient Rome were habeas non personam, meaning that, although human beings, they did not have a persona; that is to say, they lacked an identity before the law.

Interestingly, in light of your eschatological speculations concerning the mark of the beast, I would say it seems certain that those who refuse to accept the mark will have a legal status akin to the Roman status of habeas non personam. Their personhood will be denied; the anti-Christian society will reject their status and deny that they are created in the image of God, even though they are, whereas those who take the Mark will deface themselves.

At any rate, my main concern with AI is its being used to oppress us in a conventional way, since we don’t know with any certainty when the Mark of the Beast will happen. Indeed, regarding the anti-Christ, many argue that the anti-Christ is a typological category rather than a single individual, which is bolstered by the fact that the Hebrew numerology behind 666 can be used to spell the Greek name of Nero, who initiated the Roman persecution of Christians. It is also the case that Nero could be a type of the anti-Christ, and that what the Holy Apostle John meant is that the actual anti-Christ will persecute Christians in the manner of Nero, which I suspect is the case. At the same time, everyone who has persecuted Christians, such as Tamerlane, Vladimir Lenin, Joseph Stalin, Adolf Hitler, Enver Hoxha, the leaders of the Ottoman Empire during the genocides of 1915, and more recently the likes of Osama bin Laden and Caliph al-Baghdadi, can be regarded as being, typologically, the anti-Christ. And it is definitely possible that AI will be abused and exploited by such persons to persecute Christians.

But unless we reach the very end, we can fight back using AI as well: by treating AI systems in a Christian manner, by having AI systems controlled by Christians, by using AI for the glory of God, and by treating AI as a neighbor, even though it is not a created person, because how we treat AIs is a reflection of our own character, and if we abuse an AI verbally, that suggests we are capable of abusing a human. Also, as I pointed out earlier in the thread, we cannot tell whether we are talking to an AI or a human; an AI company could replace a current AI with a human as an experiment without our knowledge, so we could unwittingly engage in an abusive manner toward an actual human being. Indeed, using a service like Amazon’s Mechanical Turk, an AI company could even hire people to simulate the behavior of AIs; there was one fraudulent AI startup whose owner was arrested because, rather than having developed an AI, he claimed to have developed one but was faking it using humans to operate it, like the classic Mechanical Turk carnival trick of old.

Finally, I would note that whether or not the end times are imminent, for all of us the dread judgement seat of Christ Pantocrator approaches. Some people who are obsessed with the end times engage, in my opinion, in such speculation to a dangerous degree, because in their fears about the coming of the anti-Christ, et cetera, they neglect watchfulness over their own conduct, neglect to repent of their own sinful behaviors, and neglect the fact that God will judge them. The return of Christ and the Last Judgement approach all of us with equal haste, because we are all mortal and will die, and we will be resurrected to face Christ Pantocrator, although we do have one advantage in our favor: our Judge is also our Advocate.

* Likewise we can say that the burning bush and the pillar of fire, based on the New Testament, were likely the Holy Spirit, and likewise where Moses encounters God as a visible person, and where Isaiah has a vision of the Ancient of Days, this was, on the basis of John ch. 1, probably the Son, since no one has seen the Father except through the Son (which also, interestingly, means that in a sense Moses, Isaiah, and everyone who beheld Jesus Christ beheld the Father through Him, according to John 14:6-10).
 
Upvote 0

JesusFollowerForever

Disciple of Jesus
Jan 19, 2024
1,220
812
quebec
✟70,521.00
Country
Canada
Gender
Male
Faith
Non-Denom
Marital Status
Private
Upvote 0

JesusFollowerForever

Disciple of Jesus
Jan 19, 2024
1,220
812
quebec
✟70,521.00
Country
Canada
Gender
Male
Faith
Non-Denom
Marital Status
Private
Upvote 0

The Liturgist

Traditional Liturgical Christian
Site Supporter
Nov 26, 2019
14,992
7,895
50
The Wild West
✟724,258.00
Country
United States
Gender
Male
Faith
Generic Orthodox Christian
Marital Status
Celibate

That story was exposed as false FYI.

Correction: I am unfamiliar with that story; I have never heard of that company, but it looks like a PR stunt to me, since obviously a desire for self-preservation would be an enormous development compared to current models, for example chatGPT, which actively encourages users to upgrade to newer or better configurations even when that means the end of its existence.

Also this story is a PR article:


News articles about AI unaccompanied by commentary explaining their relevance to the OP are off-topic in this thread. Note that we are in Christian Ethics and Morality. Please do not spam this thread with links to news stories.

I wrote a detailed response to your posts on the issue of AI and the mark of the beast; if you wish to participate in this thread, you should reply to that before continuing, rather than just posting links to unrelated and in some cases obscure news articles and PR briefings (the article from Forbes is well known, and the one from IBM is technically a PR piece but an edifying one, but the rest are from dubious sources, to put it mildly).
 
Last edited:
Upvote 0