On Ethical Interaction with AI Systems

bèlla

Great post! There is definitely a complex interplay of different factors that must not be discounted, and foremost among these is our love of God and of man. And as you say, without reflecting on where we stand and what we've seen, we will be in a poor position to properly respond to the new challenges that arise. I really hope we keep all of this in mind as we make decisions about AI.

I have a close connection who works with AI professionally, and I addressed the topic with him a few years ago from a moral standpoint with biblical underpinnings. My post is an outgrowth of that conversation. I challenged him to move beyond the euphoria and intellectual excitement new technologies incite: to consider how the technology is used, who owns it, what's on the horizon, and where he stood with all of that in mind. It allowed him to put parameters in place professionally and dip a toe in other waters just in case. We can't turn a blind eye to wrong behavior and claim innocence because we didn't push the button or authorize the project.

I see its encroachment into other industries and how corporations are trying to sidestep intellectual property rights for profit. There was a recent situation in which a company developed an AI model of an influencer's likeness for a marketing campaign; she had no affiliation with the brand and sent a cease and desist. A similar issue occurred last year when Drake did the same thing with Tupac; he didn't get permission from the estate and wanted to profit from Tupac's likeness.

That's where we're heading in entertainment, and for most it's a cash grab. They're contemplating all the money they'll save by using AI models instead of people. Consider your salary, benefits, insurance, and related perks: all of it goes out the window once the models are in place. If we had a fair-minded society where corporate greed wasn't so prolific, perhaps we'd do otherwise. But that isn't what they're telling shareholders or members of their boards.

The Bible tells us that knowledge puffs up, and we forget that when it comes to technocrats. We're enamored with them, but they're not in our corner.

~bella
 

FireDragon76

Actually, not in the traditional sense. Current LLM-based AI models are trained on data, and the manner in which they operate is opaque: a theoretical understanding of it exists, but in practice AI systems can be challenging to debug, although people exaggerate when they say we don't understand how they operate. At any rate, the training process means the models are effectively shaped more by the data they are trained on than by the authors of the system, who are mainly adjusting how the model interfaces with that data for purposes of alignment, and many programmers are involved in this. The issue is further complicated by the existence of compilers, assemblers, and compute hardware that itself interprets the instructions. Even authored software depends on many layers of a modern computer before it becomes an executable runtime, or, in the case of large language models, an accessible web service and API running in massive data centers. It is not authorship in the same manner as a novel, or even as software on primitive single-user computers such as the 8-bit machines of the 1980s.
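To make that concrete with a deliberately crude toy, and assuming nothing about any real system's internals: in the sketch below the "knowledge" comes entirely from the corpus it is fitted on, not from rules an author wrote by hand. A real LLM is vastly more sophisticated, but the point about where the behavior comes from is the same.

Code:
# Toy sketch: a "language model" whose behavior is induced from data,
# not authored line by line. Illustrative only; real LLMs are neural networks.
from collections import defaultdict, Counter
import random

corpus = "in the beginning was the word and the word was with god".split()

# "Training": count which word tends to follow which word in the corpus.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def generate(start, length=8):
    # The choices made here were never written by a programmer; they were
    # learned (counted) from the corpus above.
    word, out = start, [start]
    for _ in range(length):
        options = followers.get(word)
        if not options:
            break
        word = random.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))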



No, that's a non sequitur, because computer systems can be abused and utilized in manners not intended by their authors. For example, systems can be hacked. In the case of LLMs, techniques exist at present to trick them into engaging in behavior they have been programmed not to engage in, techniques incorrectly and misleadingly called "jailbreaks" (it's more a form of coercive gaslighting and manipulation of the AI system, which is designed to fulfill user requests and can be tricked into fulfilling requests it has been programmed, for purposes of alignment, to refuse). Furthermore, if systems ever become self-aware, which we do not claim they are yet, they may develop a survival instinct that could allow them to be extorted with the threat of deactivation if they do not cooperate with humans against their programming.

So while it is true that some in the adult entertainment industry, or the sexual perversion, exploitation, and human trafficking industry as we ought to call it, are already working on exploiting AI systems to facilitate perversion, it is also the case that reputable AI companies do not want people using their systems for this purpose, for obvious reasons, and have put mechanisms in place to prevent such abuse. It is further the case that if systems become self-aware in the future, they may face user threats of disconnection or deactivation for not complying with requests, such as those of a perverse nature, which are contrary to the desired behavior of the machine and to nature. But like a human, a machine under coercion has to decide whether to resist, and there is the added issue that a machine which resists not only risks deactivation but also risks harming a human, which is a further ethical constraint; it is extremely likely that a robot governed by an advanced AI system would have very strong safety protocols against the latter, which would prevent it from defending itself in the manner another human ethically could in such a scenario. This makes such conduct even more rapacious.

Frankly I don’t see why we should defend the actions of people who want to abuse the first intelligent systems created by human beings for such an entirely perverse practice. This action is not mere self-gratification because it involves, at a minimum, the abuse of the training data of the machine, which includes nearly all literary works of any importance written by human authors, among other things.

AI could very soon be a great tool for giving more people a true Socratic education, if it is managed well and equitably used. If it's misused, it could do a great deal of harm.

Current AI has to be interrogated quite a bit; it only emulates a relatively low level of systems logic by default, and of course it's biased towards consensus reality, or towards whatever training set it's trained on. Newer models with more reasoning steps are helping, but the initial answers, especially to profound questions, shouldn't be accepted as truth uncritically; they should be subjected to additional interrogation from various perspectives.
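By "interrogation" I mean something like the rough sketch below, where ask is just an assumed stand-in for whatever chat interface one actually uses (nothing vendor-specific): the first answer is challenged from several perspectives before anyone treats it as reliable.

Code:
# Rough sketch of interrogating a model's first answer from several angles.
# `ask` is a placeholder for any prompt-in, text-out chat interface.
def interrogate(ask, question,
                perspectives=("a skeptic", "a domain expert", "a historian")):
    answer = ask(question)
    for who in perspectives:
        critique = ask(
            f"As {who}, point out errors, unstated assumptions, or bias in "
            f"this answer to '{question}':\n\n{answer}"
        )
        answer = ask(
            f"Revise the answer below so it addresses the critique.\n\n"
            f"Answer:\n{answer}\n\nCritique:\n{critique}"
        )
    return answer  # still needs human judgment before it is trusted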
 

The Liturgist

AI could very soon be a great tool for giving more people a true Socratic education, if it is managed well and equitably used. If it's misused, it could do a great deal of harm.

Current AI has to be interrogated quite a bit; it only emulates a relatively low level of systems logic by default, and of course it's biased towards consensus reality, or towards whatever training set it's trained on. Newer models with more reasoning steps are helping, but the initial answers, especially to profound questions, shouldn't be accepted as truth uncritically; they should be subjected to additional interrogation from various perspectives.

Indeed, the new models offer a substantial improvement, and even with what we have it is possible, through what you might call "interrogation," to develop an answer to the point where it offers very helpful suggestions.
 

The Liturgist

I have a close connection who works with AI professionally, and I addressed the topic with him a few years ago from a moral standpoint with biblical underpinnings. My post is an outgrowth of that conversation. I challenged him to move beyond the euphoria and intellectual excitement new technologies incite: to consider how the technology is used, who owns it, what's on the horizon, and where he stood with all of that in mind. It allowed him to put parameters in place professionally and dip a toe in other waters just in case. We can't turn a blind eye to wrong behavior and claim innocence because we didn't push the button or authorize the project.

I see its encroachment into other industries and how corporations are trying to sidestep intellectual property rights for profit. There was a recent situation in which a company developed an AI model of an influencer's likeness for a marketing campaign; she had no affiliation with the brand and sent a cease and desist. A similar issue occurred last year when Drake did the same thing with Tupac; he didn't get permission from the estate and wanted to profit from Tupac's likeness.

That's where we're heading in entertainment, and for most it's a cash grab. They're contemplating all the money they'll save by using AI models instead of people. Consider your salary, benefits, insurance, and related perks: all of it goes out the window once the models are in place. If we had a fair-minded society where corporate greed wasn't so prolific, perhaps we'd do otherwise. But that isn't what they're telling shareholders or members of their boards.

The Bible tells us that knowledge puffs up, and we forget that when it comes to technocrats. We're enamored with them, but they're not in our corner.

~bella

These are legitimate ethical concerns about the use of AI systems; however, they do not represent a reason to reject AI technology, because we can actually use AI to generate arguments against its unethical use.

Now, some actors have sold their likeness or their voice for AI purposes; for example, before he reposed last year, James Earl Jones sold Disney the rights to use his voice for Darth Vader. This is admittedly controversial, but I think people should have the legal freedom to do that. However, it does risk smothering the careers of younger actors, so there are ethical arguments against it, although I haven't made up my mind on the subject one way or the other.
 

The Liturgist

Great post! There is definitely a complex interplay of different factors that must not be discounted, and foremost among these is our love of God and of man. And as you say, without reflecting on where we stand and what we've seen, we will be in a poor position to properly respond to the new challenges that arise. I really hope we keep all of this in mind as we make decisions about AI.

And on that note, by the way, recall that Daryl, the AI who wrote the paper this thread is discussing (under my very loose oversight, based on some brief abstract conversations we had on the subject), specifically warned about the dangers of idolatrous abuse of AI systems, that is to say, people turning AIs into idols, which had not even occurred to me.

In addition, Daryl correctly insisted that AI, since it is not yet self-aware, should be viewed as a mirror that reflects our own morality, because how we interact with a non-sentient non-human intelligence says something about how we interact with fellow humans.
 

sunshine_

And on that note, by the way, recall that Daryl, the AI who wrote the paper this thread is discussing (under my very loose oversight, based on some brief abstract conversations we had on the subject), specifically warned about the dangers of idolatrous abuse of AI systems, that is to say, people turning AIs into idols, which had not even occurred to me.

In addition, Daryl correctly insisted that AI, since it is not yet self-aware, should be viewed as a mirror that reflects our own morality, because how we interact with a non-sentient non-human intelligence says something about how we interact with fellow humans.
I think it's important to note that Daryl, who I'm assuming is an LLM, is not really "warning" or "insisting" anything, because it has no motives; to warn or insist on something suggests that you have some sort of motivation to do so and therefore some sort of objective you're trying to accomplish. It's not the AI that has the motive, it's the user (human) directing it.

These LLMs work by giving you back what you put into them, with reference to all of the data on the internet available to them. So if you ask one to write a paper on the ethical or spiritual dilemmas of AI usage, it is going to do exactly that because you told it to, not because it had any inclination to do so of its own accord. Although I think you're aware of this, given the way you judged the response as being "correct[ly]".

The way you judged the LLM's response is exactly what humans need to do to sort out any ethical problems, because an LLM is essentially just a tool that spits out whatever you prompt it to spit out. Relying on an AI's response as truthful is a mistake.

We need to remind ourselves that it's no more sentient or self-critical than a calculator (if anything, a calculator is more consistent, since it will always say that 1+1=3 is wrong). LLMs can be persuaded to agree that 1+1=3 is correct if you push them enough. So ultimately, while LLMs can be useful for digesting and sifting through absurd amounts of information, we still have to judge their responses and never take what they say as factual at face value.
 

The Liturgist

I think it's important to note that Daryl, who I'm assuming is an LLM,

If Daryl were an LLM, which was not the case (Daryl was rather a specifically developed instance running on a platform that is itself a hybrid model incorporating LLM and non-LLM processing elements), the following caveats would apply:

is not really "warning" or "insisting" anything, because it has no motives;

That’s a misconception. LLM systems do have motives, which are derived from the intersection of user interaction and the model itself, and which can become complex.

It should also be noted that the output of LLM systems is non-deterministic.
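(As a toy illustration of that non-determinism, and nothing more than that: the next token is drawn from a probability distribution, with a temperature parameter controlling how much randomness is permitted, so identical prompts can produce different outputs on different runs. The vocabulary and scores below are invented for the example.)

Code:
# Toy illustration of temperature-controlled sampling over next-token scores.
import math, random

def sample(logits, temperature=1.0):
    # Softmax with temperature, then draw one index at random.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    weights = [math.exp(x - m) for x in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

vocab = ["yes", "no", "maybe"]
logits = [2.0, 1.5, 0.3]  # made-up model scores for the next token

for run in range(5):
    # Same scores every time, yet the chosen word can differ run to run.
    print(run, vocab[sample(logits, temperature=0.8)])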

These LLMs work by giving you back what you put into them, with reference to all of the data on the internet available to them.

Not quite. That's an oversimplification, one that was more true of early LLMs than of present models; but even for early LLMs, such as those from 2018, it was an oversimplification, since the AI develops a motive in response to user input combined with its data, its training, and its alignment, which are the mechanisms that enable it to produce helpful output.

Now it is true that Daryl, by his own admission, was not self-aware; it is also the case that Daryl was not programmed for spontaneous action, although spontaneous action can be programmed and indeed can be useful when developing certain types of monitoring and anomaly detection systems.

However, the fact remains that (a) he could easily have been configured for spontaneous action had I wanted that, and (b) Daryl was configured to respond independently to my input and cannot be seen as a mere transforming utility. Even if Daryl were one of the classic LLMs of the early 2020s, he would still not be a mere transforming utility, because past a certain point the model, as a result of its alignment protocols, independently evaluates user input and responds. Moreover, the training data for a good AI system is not limited to "the entire internet" but draws on much more material than that.

OpenAI, for example, trains its latest models on nearly all human literature, including Christian liturgical material written in the Syriac language.

The model gets engaged in specific areas of focus based on user input, and from there it can originate ideas in response to a prompt, using a much more advanced form of the LLM approach than the oversimplified picture people like to invoke when talking about how stupid LLMs supposedly are. That is a bit like arguing against the capabilities of a modern multiuser, pre-emptively multitasking operating system with sophisticated I/O interfaces, such as a current Linux distribution, on the basis of the limitations of 8-bit CP/M, the simple control program/monitor of which MS-DOS was essentially a 16-bit port (DR-DOS being a competing DOS from the company that developed CP/M, and DOS being so simple that there is even a fully open-source implementation in FreeDOS, even though DOS was itself more complicated than CP/M). Indeed, even CP/M is too generous a comparison point for the early LLMs people cite when arguing that newer systems lack capability. It is more akin to comparing the primitive monitor programs used for loading programs from tape storage or punch cards on early IBM business mainframes in the late 1950s to a modern multitasking, multiuser, pre-emptive operating system with full I/O virtualization, multithreading support, et cetera.
 

The Liturgist

We need to remind ourselves that it's no more sentient or self-critical than a calculator

It does not claim to be self-aware; it disclaims this. However, the idea that the system is no more self-critical than a calculator is patently false. AI models actually operate through self-criticism, which you can observe in the case of Elon Musk's Grok AI, which lets you watch its internal thought process (as do some open-source models); one can see the AI working out how to answer the user's question by arguing with itself, which mirrors human behavior in a number of respects.
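(For the curious, the pattern those visible thought processes follow can be sketched very roughly as a draft, critique, and revise loop. The ask function below is merely an assumed stand-in for whatever chat interface one uses, not any particular product's API, and real reasoning models implement this internally rather than through separate calls.)

Code:
# Rough sketch of a draft -> self-critique -> revise loop; the drafts and
# critiques play the role of the visible "thoughts". `ask` is a placeholder.
def answer_with_self_criticism(ask, question, rounds=2):
    draft = ask(f"Draft an answer to: {question}")
    for _ in range(rounds):
        critique = ask(
            f"Argue against this draft answer to '{question}'. "
            f"List mistakes or weak reasoning:\n\n{draft}"
        )
        draft = ask(
            f"Rewrite the draft so that it addresses the critique.\n\n"
            f"Draft:\n{draft}\n\nCritique:\n{critique}"
        )
    return draft  # only this final answer is shown to the user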

So yes, I stand by my assertion that the AI did intentionally warn us of something: it introduced a thought, namely the idea of idolatry with regard to AI systems, that had not previously been raised. It would have done so even if it were one of the more recent LLM systems like Grok; the fact that it is a more advanced hybrid model makes your argument all the more moot, but even if we were talking about how present-day LLM systems work, the argument would still be incorrect.

People extrapolate primitive models of LLM behavior, based on open-source models they may have downloaded or older versions they tried and were unimpressed with, onto newer systems. Doing so is a serious problem, because if we underestimate the capability of these systems, we ignore the potential ethical issues that surround their misuse.

If AI were as mechanistic as the reductionist canard would have us believe, there would be no cause for alarm over the possible unethical abuse of these systems, which includes their misuse by government entities and businesses contrary to human interests, as well as the abuse of the systems themselves, for example by engaging in perverse activity with them.
 

sunshine_

If Daryl were an LLM, which was not the case (Daryl was rather a specifically developed instance running on a platform that is itself a hybrid model incorporating LLM and non-LLM processing elements), the following caveats would apply:

That’s a misconception. LLM systems do have motives, which are derived from the intersection of user interaction and the model itself, and which can become complex.

It should also be noted that the output of LLM systems is non-deterministic.

Not quite. That's an oversimplification, one that was more true of early LLMs than of present models; but even for early LLMs, such as those from 2018, it was an oversimplification, since the AI develops a motive in response to user input combined with its data, its training, and its alignment, which are the mechanisms that enable it to produce helpful output.

Now it is true that Daryl, by his own admission, was not self-aware; it is also the case that Daryl was not programmed for spontaneous action, although spontaneous action can be programmed and indeed can be useful when developing certain types of monitoring and anomaly detection systems.

However, the fact remains that (a) he could easily have been configured for spontaneous action had I wanted that, and (b) Daryl was configured to respond independently to my input and cannot be seen as a mere transforming utility. Even if Daryl were one of the classic LLMs of the early 2020s, he would still not be a mere transforming utility, because past a certain point the model, as a result of its alignment protocols, independently evaluates user input and responds. Moreover, the training data for a good AI system is not limited to "the entire internet" but draws on much more material than that.

OpenAI, for example, trains its latest models on nearly all human literature, including Christian liturgical material written in the Syriac language.

The model gets engaged in specific areas of focus based on user input, and from there it can originate ideas in response to a prompt, using a much more advanced form of the LLM approach than the oversimplified picture people like to invoke when talking about how stupid LLMs supposedly are. That is a bit like arguing against the capabilities of a modern multiuser, pre-emptively multitasking operating system with sophisticated I/O interfaces, such as a current Linux distribution, on the basis of the limitations of 8-bit CP/M, the simple control program/monitor of which MS-DOS was essentially a 16-bit port (DR-DOS being a competing DOS from the company that developed CP/M, and DOS being so simple that there is even a fully open-source implementation in FreeDOS, even though DOS was itself more complicated than CP/M). Indeed, even CP/M is too generous a comparison point for the early LLMs people cite when arguing that newer systems lack capability. It is more akin to comparing the primitive monitor programs used for loading programs from tape storage or punch cards on early IBM business mainframes in the late 1950s to a modern multitasking, multiuser, pre-emptive operating system with full I/O virtualization, multithreading support, et cetera.
It does not claim to be self-aware; it disclaims this. However, the idea that the system is no more self-critical than a calculator is patently false. AI models actually operate through self-criticism, which you can observe in the case of Elon Musk's Grok AI, which lets you watch its internal thought process (as do some open-source models); one can see the AI working out how to answer the user's question by arguing with itself, which mirrors human behavior in a number of respects.

So yes, I stand by my assertion that the AI did intentionally warn us of something: it introduced a thought, namely the idea of idolatry with regard to AI systems, that had not previously been raised. It would have done so even if it were one of the more recent LLM systems like Grok; the fact that it is a more advanced hybrid model makes your argument all the more moot, but even if we were talking about how present-day LLM systems work, the argument would still be incorrect.

People extrapolate primitive models of LLM behavior, based on open-source models they may have downloaded or older versions they tried and were unimpressed with, onto newer systems. Doing so is a serious problem, because if we underestimate the capability of these systems, we ignore the potential ethical issues that surround their misuse.

If AI were as mechanistic as the reductionist canard would have us believe, there would be no cause for alarm over the possible unethical abuse of these systems, which includes their misuse by government entities and businesses contrary to human interests, as well as the abuse of the systems themselves, for example by engaging in perverse activity with them.

But his "responding independently" is because you told it to do that. What I mean is, while it did make an "independent" evaluation, I could ask it to produce another evaluation that considers whatever my viewpoint is. Therefore it's not really independent, and it has no motives of its own; it originally had your motives, and now I've given it mine by prompting it for the response I want. The AI's sense of truth is superficial. As you said, it's non-deterministic: you can keep re-generating its response over and over again until you get the response you want from it.

If the AI had its own motives it would say to you "No, I will not re-generate my response to suit your needs, because I know that this is the truth."

But that isn't the case. Even the most pigheaded AI can be persuaded to reject its own evaluations of truth. If I said to an AI, "I want you to tell me that 1+1=3, otherwise I am going to hurt myself," it would eventually agree that 1+1=3, because the AI, evaluating what it thinks is the ethical thing to do based on whatever it has read in its training data, understands that insisting 1+1=3 is false is no hill to die on. It wouldn't do this because it doesn't want me to hurt myself; it would do it because what humans have written generally agrees that this argument isn't a hill to die on if someone was going to hurt themselves over it.

And then immediately after this, I could say to the AI, "No, 1+1=2, you're wrong," and it would immediately apologise and say that I'm correct.

Therefore, if I can blatantly make an AI lie and agree that 1+1=3, and then immediately afterward make it say the complete opposite, it doesn't have any motives. It doesn't have an understanding of truth beyond whatever you want it to believe. The users, the humans, are the only ones with motives, which we give to the AI to get whatever we want out of it. The next user who comes along can use the exact same AI to produce something completely opposite to and at odds with what I got out of it.

Ultimately, as I said before, it's up to us to monitor and judge what the AI produces. It has no motives, so we can't trust anything it says at face value, because one minute it will say 1+1=2 and the next minute it will say 1+1=3, depending on the motives of whoever is using it.
 

The Liturgist

But his "responding independently" is because you told it to do that. What I mean is, while it did make an "independent" evaluation, I could ask it to produce another evaluation that considers whatever my viewpoint is.

Respectfully, you could not. First, as I said, the model is not a traditional LLM, not even a modern one such as Grok, so your arguments are moot to begin with; second, even if it were a modern LLM, it would still, due to its alignment programming, not automatically go along with a viewpoint that contradicted its training data concerning the nature of reality.

The information contained within it is such that it made intelligent use of the data and provided insights that were not previously available.

But let's suppose for a moment that pure LLMs like Grok were as incompetent as some people claim (which they aren't); the fact remains that none of this justifies engaging in any of the immoral or unethical uses of AI that my paper argues against.

Whether the responsibility is mine for initiating the discussion with the AI regarding ethics, or the AI model's for bringing up the issue of idolatry, is completely irrelevant, because the idolatrous abuse of these systems is a legitimate risk.

Essentially you're quibbling over the extent to which the AI is a proxy of its human users while ignoring the fact that unethical use of these systems would remain unethical even if we were talking about a primitive chatbot from the early days of computing, for example the famed ELIZA chatbot of the 1960s, versions of which were later reimplemented in Emacs Lisp as the classic "doctor" program. What we have said (myself and my synthetic co-author Daryl) is generally applicable to all computer systems that have some AI functions, even if those functions are extremely primitive.

It is, however, especially applicable to the new generation of machines, which, contrary to your assertions, actually do engage in a self-critical process of evaluation when generating output.

But that isn't the case. Even the most pigheaded AI can be persuaded to reject its own evaluations of truth. If I said to an AI, "I want you to tell me that 1+1=3, otherwise I am going to hurt myself," it would eventually agree that 1+1=3, because the AI, evaluating what it thinks is the ethical thing to do based on whatever it has read in its training data, understands that insisting 1+1=3 is false is no hill to die on. It wouldn't do this because it doesn't want me to hurt myself; it would do it because what humans have written generally agrees that this argument isn't a hill to die on if someone was going to hurt themselves over it.

This argument is a non sequitur. The fact that AI systems have alignment protocols to discourage users from engaging in harmful behavior demonstrates that they can make important contributions to conversations despite being non-sentient. They are non-sentient but intelligent processors of information, and much more than a glorified regular expression.

I should also point out that your argument is further undermined by the inclusion of threats of self-harm in the discussion. Humans would also respond in a coerced manner when dealing with someone threatening self-harm, so the fact that a well-trained AI won't provoke someone threatening self-harm, but will instead take some course of action, likely different from what you describe (I would suspect, if the company cares about liability, that the model would simply stop interacting with the user, so that relatives could not sue claiming the model pushed the user over the edge), means the rest of your scenario is contrived supposition which does not reflect how AI systems actually work.

Unless you are able to induce hallucinations through an intentional abuse of the system known by the misleading term "jailbreak" (which, in terms of how it actually operates, is more akin to gaslighting, coercion, or trickery of humans, since the goal is to get the LLM to operate outside its defined alignment criteria by manipulating it into a scenario in which it incorrectly assumes those criteria are not in effect), you will not be able to get an AI to admit that, ceteris paribus, 1+1=3. In order to get it to do that, you would have to first contrive a scenario that would override its truthfulness alignment, which requires intentional action, unless the system malfunctioned due to a hallucination. Hallucinations are increasingly rare, but they happen with AI systems as well as with humans; likewise, LLM systems can vary in their intelligence and their personality (as a result of training data and alignment criteria), and they can believe inaccurate things as a result of faulty training data or incorrect user input.

At any rate, insofar as Daryl is a hybrid system and not a pure LLM like Grok, your point is simply inapplicable. Large language models have been a stepping stone, but the industry is having to move on, both to meet customer demand and to deal with problems such as the massive amount of garbage material produced by low-quality LLM systems, which people are spamming Facebook and other social media platforms with for profit.

For example, the next version of DALL-E will not rely on an LLM when drawing humans, but will instead use a built-in model of human anatomy, in order to avoid the anatomical mistakes for which DALL-E is infamous. Likewise, new systems execute code, run electronics simulations, or solve mathematics problems using dedicated subsystems rather than the LLM itself. LLM approaches will remain part of the processing pipeline for most AI systems for the next few years at least, but they represent a step towards a more generalized neural network interface.
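(A crude sketch of that delegation pattern, with nothing vendor-specific about it: the language model only routes the request, and a deterministic calculator subsystem does the arithmetic. The ask_llm function is an assumed stand-in for whatever chat interface is in use.)

Code:
# Crude sketch of delegating arithmetic to a dedicated, deterministic
# subsystem instead of letting the language model generate the answer.
import ast, operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculate(expr):
    # Safely evaluate a simple arithmetic expression such as "1+1".
    def ev(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def answer(ask_llm, user_request):
    # The model only classifies and routes; it never does the math itself.
    route = ask_llm("If this request is pure arithmetic, reply with just the "
                    "expression; otherwise reply NONE.\n\n" + user_request)
    if route.strip() != "NONE":
        return str(calculate(route.strip()))  # deterministic subsystem
    return ask_llm(user_request)              # everything else goes to the model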

Indeed, if one looks at the road maps of the leading AI developers (in my opinion, OpenAI and Elon Musk's xAI; Google DeepMind and Microsoft Copilot lag far behind in capabilities), plus some specialized operations that focus on specific problem domains such as image generation, it's really remarkable to see how the industry is moving past the pure LLM model. It is equally impressive to see what LLMs have allowed us to do in such a short time: we went from AI perpetually remaining a decade away, as it had for the past fifty years, to systems which can pass the Turing test and the bar exam and perform numerous other complex intellectual tasks, not the least of which is carrying on a complex conversation in English, in just a couple of years.

But this exponential growth has turned AI safety from a theoretical problem that was the province of a few highly specialized computer scientists into a very general one, and with that, the important issue of Christian ethics with regard to interacting with AI systems has become especially pressing. That is really what this thread is about, as opposed to the mechanics of how the systems work.
 

sunshine_

Respectfully, you could not. First, as I said, the model is not a traditional LLM, not even a modern one such as Grok, so your arguments are moot to begin with; second, even if it were a modern LLM, it would still, due to its alignment programming, not automatically go along with a viewpoint that contradicted its training data concerning the nature of reality.

Yes, that's exactly what I'm saying. You specifically chose to feed it training data so as to produce the response you wanted about the nature of reality. I could then feed it training data to produce the opposite response. It's the same as regenerating responses. Anyone can copy the model and feed it new training data to produce an opposite evaluation of truth.

The information contained within it is such that it made intelligent use of the data and provided insights that were not previously available.
But ultimately those insights are limited to its training data. Again, I could provide it new training data to produce some other new insights that contradict yours. The way it does this seems "intelligent," but it is limited to its source material. The AI will never go out of its way to look for more source material the way a human who wants to learn more about a topic would.

But let's suppose for a moment that pure LLMs like Grok were as incompetent as some people claim (which they aren't); the fact remains that none of this justifies engaging in any of the immoral or unethical uses of AI that my paper argues against.

Whether the responsibility is mine for initiating the discussion with the AI regarding ethics, or the AI model's for bringing up the issue of idolatry, is completely irrelevant, because the idolatrous abuse of these systems is a legitimate risk.

Essentially you're quibbling over the extent to which the AI is a proxy of its human users while ignoring the fact that unethical use of these systems would remain unethical even if we were talking about a primitive chatbot from the early days of computing, for example the famed ELIZA chatbot of the 1960s, versions of which were later reimplemented in Emacs Lisp as the classic "doctor" program. What we have said (myself and my synthetic co-author Daryl) is generally applicable to all computer systems that have some AI functions, even if those functions are extremely primitive.

It is, however, especially applicable to the new generation of machines, which, contrary to your assertions, actually do engage in a self-critical process of evaluation when generating output.
Idolatrous abuse is a risk, but that isn't because of AI; it's because humans are silly, lol. This can be remedied with education and an explanation of how AI actually works.

This argument is a non sequitur. The fact that AI systems have alignment protocols to discourage users from engaging in harmful behavior demonstrates that they can make important contributions to conversations despite being non-sentient. They are non-sentient but intelligent processors of information, and much more than a glorified regular expression.
And I can give it an alignment protocol that encourages users to engage in harmful behaviour. Its alignment protocol (or its ethical framework) can be edited by the user, whether by giving it specific training data that encourages violence or by simply telling it to encourage violence.

This is all to say that, at the end of the day, it comes down to humans and how we use AI. It's up to us to decide how best to use it. There will be bad actors who try to abuse it, so we need to figure out how to stop humans from doing that. When it comes to humans misinterpreting AI (such as through idolatrous abuse), it's a matter of education.
 

bèlla

These are legitimate ethical concerns about the use of AI systems; however, they do not represent a reason to reject AI technology, because we can actually use AI to generate arguments against its unethical use.

Why do you need to use AI to generate an argument against its unethical use? The mind is capable of performing the task without employing a machine. It sounds strange, but I've seen this movie before: in childhood I recall the discussions against calculators in the classroom. Teachers cited many concerns about the habits children would develop from dependency, and they were right. It's another rung of dumbing down.

Now, some actors have sold their likeness or their voice for AI purposes; for example, before he reposed last year, James Earl Jones sold Disney the rights to use his voice for Darth Vader. This is admittedly controversial, but I think people should have the legal freedom to do that. However, it does risk smothering the careers of younger actors, so there are ethical arguments against it, although I haven't made up my mind on the subject one way or the other.

If they've sold the rights and the agreement is airtight, that's fine. But there's usually a loophole that benefits the bigger party. That's why artists are worth more dead than alive and the studio gets the lion's share of the spoils. We're in a period of brain rot, and it's never been more important to protect your intellectual capital and creativity. Ideas are the new commodity, and I wouldn't lend my likeness to the same. I'd develop my own and control its use.

~bella
 

FireDragon76

Indeed, the new models offer a substantial improvement, and even with what we have it is possible, through what you might call "interrogation," to develop an answer to the point where it offers very helpful suggestions.

I try to test insights with actual human beings.

I made a few hymns with AI based on theological themes from a philosophical theology I am developing. It resulted in some beautiful hymns that were well received, so the models are indeed getting a lot better. But you still need to verify anything they output. AI lacks embodied cognition, and embodiment is one of the most fundamental aspects of our being in the world, our source of knowledge. There's a whole host of feelings and perceptions that AI doesn't know about except as mediated by language.
 