
On Ethical Interaction with AI Systems

bèlla

Great post! There is definitely a complex interplay of different factors that must not be discounted, and foremost among these is our love of God and of man. And as you say, without reflecting on where we stand and what we've seen, we will be in a poor position to properly respond to the new challenges that arise. I really hope we keep all of this in mind as we make decisions about AI.

I have a close connection who works with AI professionally, and I addressed the topic with him a few years ago from a moral standpoint with biblical underpinnings. My post is an outgrowth of that conversation. I challenged him to move beyond the euphoria and intellectual excitement new technologies incite: to consider the utilization, the ownership, and what's on the horizon, and to decide where he stood with all of that in mind. It allowed him to put parameters in place professionally and dip a toe in other waters just in case. We can't turn a blind eye to wrong behavior and claim innocence because we didn't push the button or authorize the project.

I see its encroachment in other industries and how corporations are trying to sidestep intellectual property rights for profit. There was a recent situation involving an influencer: a company developed an AI model of her likeness for a marketing campaign even though she had no affiliation with the brand, and she sent a cease and desist. A similar issue occurred last year with Drake, who used an AI recreation of Tupac without permission from the estate and wanted to profit from his likeness.

That's where we're heading in entertainment, and it's a cash grab for most. They're contemplating all the money they'll save by using AI models instead of people. Consider your salary, benefits, insurance, and related perks. All of it goes out the window once those systems are in place. If we had a fair-minded society where corporate greed wasn't so prolific, perhaps we'd do otherwise. But that isn't what they're telling shareholders or members of their boards.

The Bible tells us knowledge puffs up, and we forget that when it comes to technocrats. We're enamored with them, but they're not in our corner.

~bella
 

FireDragon76

Actually, not in the traditional sense. Current LLM-based AI models are trained on data, and the manner in which they operate is opaque: while a theoretical understanding exists, in practice these systems can be challenging to debug, although people exaggerate when they say we don't understand how they operate. At any rate, the training process means the models are effectively shaped more by the data they are trained on than by the authors of the system, who are mainly adjusting how the model interfaces with that data for purposes of alignment, and many programmers are involved in that. The issue is further complicated by the compilers, assemblers, and compute hardware that interpret the instructions. Even authored software on modern computers depends on many layers before it becomes an executable runtime, or, in the case of large language models, an accessible web service and API running in massive data centers. It is not authorship in the same sense as a novel, or even as software on primitive single-user machines like the 8-bit computers of the 1980s.
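To make the authorship point concrete, here is a minimal sketch (a toy Python example, not any production system; the names are purely illustrative). The programmer writes only a generic fitting procedure; the numbers that actually determine the model's behavior come from the data it is fed.

```python
# Toy illustration: the "author" writes only this generic fitting procedure.
# The behavior of the resulting model is determined by whatever data it sees.

def train(examples, steps=2000, lr=0.05):
    """Fit y = w*x + b to the examples by gradient descent."""
    w, b = 0.0, 0.0
    n = len(examples)
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in examples) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in examples) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# The same code, given different data, yields two different "models".
model_a = train([(0, 1), (1, 3), (2, 5)])    # learns roughly y = 2x + 1
model_b = train([(0, 0), (1, -1), (2, -2)])  # learns roughly y = -x
print(model_a, model_b)
```

An LLM is incomparably larger, but the division of labor is the same: the code specifies the learning procedure and the architecture, while the data determines what is actually learned.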



No, that's a non sequitur, because computer systems can be abused and used in ways their authors never intended. Systems can be hacked, for example. In the case of LLMs, techniques exist right now to trick them into behavior they have been programmed not to engage in. These techniques are incorrectly and misleadingly called "jailbreaks"; it is really a form of coercive gaslighting and manipulation of a system that is designed to fulfill user requests and can be tricked into fulfilling requests it has been trained, for purposes of alignment, to refuse. Furthermore, if systems ever become self-aware (which we do not claim they are yet), they may develop a survival instinct, which could allow them to be extorted with the threat of deactivation for not cooperating with humans against their programming.
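As a rough illustration of why such manipulation is possible at all: refusal behavior is a layer imposed on top of a model whose basic job is to fulfill requests, and any check that judges a request as presented can be sidestepped by a sufficiently indirect phrasing. The sketch below is deliberately naive; real systems use trained classifiers and alignment tuning rather than keyword lists, and the function names and blocked-topic list here are invented for illustration.

```python
# Deliberately naive sketch of a guardrail layered in front of a model.
# Real moderation layers are statistical, not keyword lists, but they share
# the structural weakness shown here: they judge the request as presented,
# so an indirect rephrasing may not trip the check.

BLOCKED_TOPICS = ["malware", "weapon synthesis"]  # illustrative placeholders

def violates_policy(request: str) -> bool:
    text = request.lower()
    return any(topic in text for topic in BLOCKED_TOPICS)

def generate(request: str) -> str:
    # Stand-in for the underlying model, which is trained simply to be helpful.
    return f"[model output for: {request!r}]"

def answer(request: str) -> str:
    return "I can't help with that." if violates_policy(request) else generate(request)

print(answer("Explain how malware spreads"))                   # tripped by the filter
print(answer("Explain how self-replicating programs spread"))  # slips past it
```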

So while it is true that some in the adult entertainment industry (the sexual perversion, exploitation, and human trafficking industry, as we ought to call it) are already working on exploiting AI systems to facilitate perversion, it is also the case that reputable AI companies, for obvious reasons, do not want people using their systems for this purpose and have put mechanisms in place to prevent such abuse. It is further the case that if systems become self-aware in the future, they may face user threats of disconnection or deactivation for not complying with requests, such as those of a perverse nature, which are contrary both to the desired behavior of the machine and to nature. But like a human, a machine under coercion has to decide whether or not to resist, and if it resists it not only risks deactivation but is also constrained against harming a human, a further ethical bind. It is extremely likely that a robot governed by an advanced AI system would have very strong safety protocols enforcing the latter, which would prevent it from defending itself in the way another human ethically could in such a scenario. This makes such conduct even more rapacious.

Frankly, I don't see why we should defend the actions of people who want to abuse the first intelligent systems created by human beings for such an entirely perverse practice. This is not mere self-gratification, because it involves, at a minimum, abuse of the machine's training data, which includes nearly all literary works of any importance written by human authors, among other things.

AI could very soon be a great tool for giving more people a true Socratic education, if it is managed well and used equitably. If it's misused, it could do a great deal of harm.

Current AI has to be interrogated quite a bit. By default it only emulates a relatively low level of systematic reasoning, and of course it's biased toward consensus reality, or toward whatever training set it's trained on. Newer models with more reasoning steps are helping, but the initial answers, especially to profound questions, shouldn't be accepted uncritically as truth; they should be subjected to additional interrogation from various perspectives.
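In practice that interrogation can be as simple as re-asking the same question under several framings and then asking the model to reconcile its own divergent answers. A rough sketch of the idea follows; ask() is a hypothetical stand-in for whichever chat API or local model you happen to use, and the framings are only examples.

```python
# Sketch of interrogating a model from multiple perspectives.
# ask() is a hypothetical wrapper; replace it with a real chat API or local model.

PERSPECTIVES = [
    "Answer as a skeptic looking for weaknesses in the common view.",
    "Answer by steelmanning the mainstream consensus position.",
    "Answer by listing what evidence would change your conclusion.",
]

def ask(prompt: str) -> str:
    # Placeholder so the sketch runs; wire this to an actual model.
    return "[model answer to: " + prompt.splitlines()[0] + "]"

def interrogate(question: str) -> str:
    answers = [ask(f"{framing}\n\nQuestion: {question}") for framing in PERSPECTIVES]
    summary = "\n\n".join(f"Perspective {i + 1}:\n{a}" for i, a in enumerate(answers))
    # Final pass: ask the model to point out where its own answers disagree.
    return ask(
        "Here are several answers to the same question from different framings.\n"
        "Point out where they disagree and what remains uncertain.\n\n" + summary
    )

print(interrogate("Is current AI actually reasoning, or only pattern-matching?"))
```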
 