Should AI be aligned to the Christian worldview?

timewerx

the village i--o--t--
Aug 31, 2012
15,274
5,903
✟299,820.00
Faith
Christian Seeker
Marital Status
Single
Even that is not morally neutral.

God also doesn't trust anyone, not even their hearts, unless their actions prove them to be trustworthy.

And finally, God surveils everything and everyone; it comes as a perk of omniscience.

God's only "alignment" is to believe in the One He has sent.

And because nobody believes in the One He has sent, technically God's "alignment" doesn't exist. God doesn't have an alignment either.
 
Upvote 0

Stephen3141

Well-Known Member
Mar 14, 2023
470
136
68
Southwest
✟39,648.00
Country
United States
Faith
Catholic
Marital Status
Private
What worldview should AI align itself with? This conversation will happen with or without us. The picture is taken from Dr Alan Thompson's article on AI alignment. It highlights the importance of thinking about the starting points AI will have when interacting with, or making decisions for, humans.

View attachment 331345


Sources:
My Twitter post on alignment
Dr Alan Thompson's article on alignment
The short answer is "Yes!" I wrote a manuscript called "Logic for Christians", in which I address the problem of building morality/ethics into automated programs (among other topics).

BUT, the makers of "AI" tools have no wish to integrate morality/ethics into their tools. They are looking to make money, not be righteous.

So, morality/ethics will not (I predict) be built into "AI" tools.
 
  • Agree
Reactions: Hvizsgyak
Upvote 0

Stephen3141

Well-Known Member
Mar 14, 2023
470
136
68
Southwest
✟39,648.00
Country
United States
Faith
Catholic
Marital Status
Private
What worldview should AI align itself with? This conversation will happen with or without us. The picture is taken from Dr Alan Thompson's article on AI alignment. It highlights the importance of thinking about the starting points AI will have when interacting with, or making decisions for, humans.

View attachment 331345


Sources:
My Twitter post on alignment
Dr Alan Thompson's article on alignment
Your approach of listing proposed moral values is mistaken. The technology can't handle this.

AI algorithms fall into two categories:
logical
sublogical

The logical algorithms deal with rules that human beings can understand, such as:
If an action can harm a human being ==> don't carry out the action

In science fiction, movie makers and writers (such as Asimov) assume that a machine can be taught what concepts such as "harming a human being" mean. In reality, it is almost impossible to capture this definition in computer programming. How many ways of harming a human being must be spelled out in computer code? 10,000? 6,000,000? Logical rules such as this are what the "logical" algorithm approach uses. And the big software companies have not invested in trying to define what this sort of guideline means.
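A rule of this kind is trivial to write down in code; the hard part, as the paragraph above notes, is the predicate itself. A minimal sketch (the function and action names here are invented for illustration, not taken from any real system):

```python
# A "logical" safety rule as code. The if/then rule itself is easy;
# the open problem is implementing can_harm_human(), which would need
# to enumerate every way an action could harm a person.
def can_harm_human(action: str) -> bool:
    # Hypothetical stub: a tiny blocklist standing in for a definition
    # that would realistically need thousands or millions of cases.
    blocked = {"administer_overdose", "disable_smoke_alarm"}
    return action in blocked

def execute(action: str) -> str:
    # The Asimov-style rule: if an action can harm a human, refuse it.
    if can_harm_human(action):
        return f"REFUSED: {action}"
    return f"EXECUTED: {action}"

print(execute("fetch_coffee"))         # EXECUTED: fetch_coffee
print(execute("disable_smoke_alarm"))  # REFUSED: disable_smoke_alarm
```

The stub makes the point concrete: the rule only covers harms someone thought to list in advance, which is exactly the objection raised above.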

All the current AI approaches are "sublogical". The machine learning and neural net approaches are sublogical. That is, they do not conform to identifiable logical rules. This is why their conclusions are often NOT EXPLAINABLE to the person using the software. Machine learning approaches take huge numbers of "situations" that are manually "truthed" as examples of certain characteristics. So information on "dogs" is often pictures of dogs. And pictures of other things are also given to the computer program along with the dog pictures, but are marked "FALSE", with the dog pictures marked "TRUE". (This is a bit of a simplification, but it makes the right point.)
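The TRUE/FALSE labelling described above is supervised learning. A toy sketch with a perceptron, the simplest trainable classifier (the "features" and data are invented numbers, not real image data):

```python
# Minimal supervised "dog / not-dog" classifier: a perceptron trained on
# hand-labelled examples, mirroring the TRUE/FALSE marking described above.
def train(examples, epochs=20, lr=0.1):
    w = [0.0] * len(examples[0][0])  # one weight per feature
    b = 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred  # nonzero only when the guess is wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Invented features (fur_length, ear_floppiness); 1 = dog (TRUE), 0 = not (FALSE).
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
w, b = train(data)
print(predict(w, b, [0.85, 0.75]))  # 1 (dog-like features)
```

Note what the trained model contains: a handful of numbers, not a rule anyone can read off. That is the "sublogical", not-explainable character described above.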

There are many problems with machine learning. One is that the program learns any bias that the human who picks the data has. Another is that simple characteristics can be learned, but much of life requires choices about events with MANY characteristics. And trying to train a neural net on data that has 50 different characteristics is enormously difficult. Human beings learn how to automatically make choices in different situations, based on different lists of relevant characteristics. Machines can't do this.

Then there is morality-ethics. You could train a neural net to recognize dogs in pictures, even when a man is beating a dog with a stick. But the problem is not training it to recognize a dog; it is training it to make moral-ethical decisions about whether the action of the man is right or wrong. The hard sciences don't even have the variables to express moral-ethical right or wrong. Morality-ethics is another, higher layer of reasoning that must be imposed over lower and simpler reasoning about what objects are.

How do you train a neural net to recognize abstract concepts, such as ownership? Can you train it on pictures of a toothpick owned by me and a toothpick owned by someone else, to get it to recognize that they have different owners? And yet ownership is a core concept in a fair rule of law, and so in the definition of justice. The ability to reason about abstract concepts, and to apply this reasoning to the physical world, is VERY important. And the current simplistic machine learning algorithms cannot do this.

So many modern Americans have NO IDEA how AI algorithms work, or what types of algorithms there are. Or how the machine learning algorithms could be trained by criminal gangs to "recognize" good as evil, or evil as good.

Christians have got to stop being naive.

Building morality-ethics into AI is a VERY difficult problem. And the big software developers are not willing to spend billions of dollars to try to do this. So the current "AI" software is horrendously deficient in contrast to the reasoning of a righteous man.
 
Upvote 0

FireDragon76

Well-Known Member
Site Supporter
Apr 30, 2013
30,651
18,542
Orlando, Florida
✟1,261,033.00
Country
United States
Faith
United Ch. of Christ
Politics
US-Democrat
Currently AI research is focused on large computer programs that make inferences and generalizations by loading various mathematical weights into them. That isn't "sub-logical"; it's just different from deduction.
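Those "mathematical weights" can be pictured as a single artificial neuron: inputs are multiplied by learned numbers, summed, and squashed, with no explicit if/then rules anywhere in the computation. A toy sketch (the weights here are made up, standing in for billions of learned parameters):

```python
import math

# One neuron: inference is multiply-by-weight, sum, squash. There is no
# human-readable rule in this computation, only arithmetic on weights.
def neuron(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid activation, output in (0, 1)

# Made-up weights; a real model has billions of these, set by training.
out = neuron([0.5, 0.2], weights=[1.5, -2.0], bias=0.1)
print(round(out, 3))  # ≈ 0.611
```

Whether one calls this "sublogical" or simply non-deductive, the practical point both posts agree on stands: the behavior lives in the numbers, not in inspectable rules.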
 
Upvote 0

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
39,276
20,269
US
✟1,475,579.00
Faith
Christian
Marital Status
Married
What worldview should AI align itself with? This conversation will happen with or without us. The picture is taken from Dr Alan Thompsons article on AI alignment. It highlights the importance to think about the starting points AI will have when interacting with or making decisions for humans.

View attachment 331345


Sources:
My twitter post on alignment
Dr Alan Thompson article on alignment
He included Asimov's Zeroth Law. Good on 'im.

But he got the 10th Jewish commandment wrong. It's not "Do not be jealous" but "Do not be envious." Bad on 'im.
 
  • Like
Reactions: Hvizsgyak
Upvote 0

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
39,276
20,269
US
✟1,475,579.00
Faith
Christian
Marital Status
Married
The problem with AI being given a Christian worldview is that it would read the Bible and judge all humans sinners worthy of death ... AIs have the capacity to learn beyond their original programming, after all.

The overall flaw of an AI is that the basis of its programming is mathematical, so it could not understand grace.
See, you haven't read Asimov's "Caves of Steel."
 
Upvote 0

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
39,276
20,269
US
✟1,475,579.00
Faith
Christian
Marital Status
Married
Have you ever read Asimov's "The Last Question"? That's pretty much the summary of that short story.

Also, since yesterday was Towel Day, in memory of Douglas Adams: the answer to life, the universe, and everything is 42.
I just re-read that a couple of days ago.
 
Upvote 0

FireDragon76

Well-Known Member
Site Supporter
Apr 30, 2013
30,651
18,542
Orlando, Florida
✟1,261,033.00
Country
United States
Faith
United Ch. of Christ
Politics
US-Democrat
I should start rereading Asimov again.

During the pandemic we listened to several of his books, in particular his series of robot short stories.

I am more of a Bradbury fan, but I do appreciate Isaac Asimov as well, and grew up with his robot stories. I read Arthur C. Clarke's Childhood's End for the first time during the pandemic and was blown away; it is easily one of the best books I have ever read, and perhaps one of the best pieces of English literature ever.

I tried reading some more recent sci-fi like Octavia Butler but couldn't stomach her book enough to finish it. My S.O. enjoyed reading The Three Body Problem by the Chinese author Liu Cixin. He's similar to Clarke, from what I gather.
 
  • Like
Reactions: Hvizsgyak
Upvote 0
This site stays free and accessible to all because of donations from people like you.
Consider making a one-time or monthly donation. We appreciate your support!
- Dan Doughty and Team Christian Forums

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
39,276
20,269
US
✟1,475,579.00
Faith
Christian
Marital Status
Married
I had a quick question and decided to ask ChatGPT rather than dig through Google. I asked ChatGPT who was the first woman appointed Secretary of the Air Force. I knew it was in the early 90s because I was still in the Air Force in the 90s, but I couldn't remember her name.

ChatGPT got it wrong, identifying a woman appointed by Barack Obama. Very simple question...Googling the same question brought up the correct answer immediately. At this point, when it comes to facts, ChatGPT is only good IMO for helping to jog my mind for information I already know. If I didn't already know the answer, I wouldn't trust ChatGPT.
 
  • Informative
Reactions: Hvizsgyak
Upvote 0

Tropical Wilds

Little Lebowski Urban Achiever
Oct 2, 2009
4,790
3,135
New England
✟195,052.00
Country
United States
Faith
Non-Denom
Marital Status
Married
Politics
US-Others
Upvote 0

FireDragon76

Well-Known Member
Site Supporter
Apr 30, 2013
30,651
18,542
Orlando, Florida
✟1,261,033.00
Country
United States
Faith
United Ch. of Christ
Politics
US-Democrat
I had a quick question and decided to ask ChatGPT rather than dig through Google. I asked ChatGPT who was the first woman appointed Secretary of the Air Force. I knew it was in the early 90s because I was still in the Air Force in the 90s, but I couldn't remember her name.

ChatGPT got it wrong, identifying a woman appointed by Barack Obama. Very simple question...Googling the same question brought up the correct answer immediately. At this point, when it comes to facts, ChatGPT is only good IMO for helping to jog my mind for information I already know. If I didn't already know the answer, I wouldn't trust ChatGPT.

It's wrong more often than it is right if you ask it about anything that requires expert knowledge. This phenomenon has even been given a name, "AI hallucination". It just makes stuff up, basically, Baron Munchausen style. GPT-3 doesn't deal in logic (it's notoriously bad at math); it deals in inference and makes assumptions about things based on generalizations.

A few months ago I asked it about a chess game I got off Steam, Chess Ultra, a multiplatform chess game that will run on consoles or a PC. It has great graphics, a fun computer chess opponent, and OK multiplayer. I asked ChatGPT what chess engine it used. It claimed it used the Stockfish engine, an open-source engine that is the most powerful chess entity in the world right now, and one that I am familiar with. That answer didn't sit well with me, because I know Stockfish, and the style of play was very different. I did more research and found out that wasn't the case: it probably uses an engine called Fritz, a chess engine from a small German company that is quietly licensed to PC and video game developers because it has an aggressive, fun, human-like style of play, whereas Stockfish is dry, positional, and counter-intuitive.

I've also had ChatGPT be wrong about a whole host of other details as well, but that one stuck out the most to me.

So, don't trust ChatGPT with anything important. At the very least, you have to back up its research with your own. It can be good if you can't find answers anywhere else, though. Sometimes it's helped point me in the right direction, even if it isn't completely accurate.

At least, that applies to GPT-3, the one most familiar to the general public. GPT-4 I have heard better things about, but it's behind a $20/month paywall.
 
Last edited:
  • Like
Reactions: Hvizsgyak
Upvote 0

Petros2015

Well-Known Member
Jun 23, 2016
5,096
4,327
52
undisclosed Bunker
✟289,840.00
Country
United States
Faith
Eastern Orthodox
Marital Status
Married
I've also had ChatGPT be wrong about a whole host of other details as well, but that one stuck out the most to me.

I haven't messed with it much in the last couple of months, but asking it the same question twice seemed to be one good check on whether it was hallucinating or not: if it's not sticking to its story, it's very likely a story.
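The ask-twice check can be automated: sample the model several times and flag answers that don't agree. A sketch, assuming a hypothetical `ask(question)` function that wraps whatever chat API is in use (the demo "model" below is canned data, not a real API call):

```python
from collections import Counter

# Self-consistency check: ask the same question several times and flag
# the answer if the replies disagree. `ask` is a hypothetical stand-in
# for a real chat-API call.
def consistency_check(ask, question, n=3):
    answers = [ask(question) for _ in range(n)]
    best, freq = Counter(answers).most_common(1)[0]
    # Unanimous replies are more trustworthy; split replies suggest
    # the model is improvising rather than recalling.
    return best, freq == n

# Demo with a canned "model" that sticks to one story but wavers on another.
replies = {"capital of France?": ["Paris", "Paris", "Paris"],
           "engine in Chess Ultra?": ["Stockfish", "Fritz", "Houdini"]}
state = {k: iter(v) for k, v in replies.items()}
fake_ask = lambda q: next(state[q])

print(consistency_check(fake_ask, "capital of France?"))      # ('Paris', True)
print(consistency_check(fake_ask, "engine in Chess Ultra?"))  # (..., False)
```

Consistency is only a heuristic, of course: a model can be confidently and repeatably wrong, so agreement raises trust but doesn't establish truth.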
 
Upvote 0

Hvizsgyak

Well-Known Member
Jan 28, 2021
586
253
60
Spring Hill
✟94,467.00
Country
United States
Faith
Byzantine Catholic
Marital Status
Married
From what I understand, GPT-4 was allowed to study the whole internet, so it has all the information from every source on every religion, along with every attack on those religious beliefs and then some. Now, to code morality into these AI programs: looking at a lot of those commandments from different religions, there are many that overlap. I would say those commandments would be the first to go into the program. Anything outside of the common ones should be looked at by a group of 144,000 individuals from all the religions and voted on, to decide whether the commandment should be programmed into the AI.

Of course, this is something the programmers should have done before they let the AI have free access to the internet. But then again, most programmers are like a lot of scientists (look at the computer gaming industry): morals need not apply if you want to make money and a name for yourself.
 
Upvote 0

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
39,276
20,269
US
✟1,475,579.00
Faith
Christian
Marital Status
Married
From what I understand, GPT-4 was allowed to study the whole internet, so it has all the information from every source on every religion, along with every attack on those religious beliefs and then some.
This is the reason why previous AI attempts have invariably turned "evil" very soon after having Internet access.

Now, to code morality into these AI programs: looking at a lot of those commandments from different religions, there are many that overlap. I would say those commandments would be the first to go into the program. Anything outside of the common ones should be looked at by a group of 144,000 individuals from all the religions and voted on, to decide whether the commandment should be programmed into the AI.

Of course, this is something the programmers should have done before they let the AI have free access to the internet. But then again, most programmers are like a lot of scientists (look at the computer gaming industry): morals need not apply if you want to make money and a name for yourself.
Well, what the programmers have done with ChatGPT is to program their own morality into it. It has holes, though.

For instance, you can get ChatGPT to find or create a disrespectful joke about men, but it will refuse to tell such a joke about women...because it's been specifically programmed to respect women, but not specifically programmed to respect men. It doesn't know "respect everyone."
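The asymmetry described reads like a hand-written guardrail with a hole in it: one group got an explicit rule, the other simply never did. A toy sketch of how per-case rules miss the general principle (the rule names are invented for illustration):

```python
# A crude content filter with an asymmetric rule set, illustrating how
# hand-written guardrails leave holes: one topic is protected by an
# explicit rule, another simply never got one.
PROTECTED_TOPICS = {"disrespectful joke about women"}  # no rule for men

def allow(request: str) -> bool:
    # A general principle ("respect everyone") would cover both cases;
    # an enumerated blocklist only covers what someone thought to list.
    return request not in PROTECTED_TOPICS

print(allow("disrespectful joke about women"))  # False (blocked)
print(allow("disrespectful joke about men"))    # True  (slips through)
```

Real chat models are steered by training rather than literal blocklists, but the failure mode is the same: enumerated cases instead of a general principle.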
 
  • Like
Reactions: Hvizsgyak
Upvote 0

QvQ

Member
Aug 18, 2019
1,671
729
AZ
✟101,884.00
Country
United States
Faith
Christian
Marital Status
Private
Are we misunderstanding AI?
AI should be termed UI, for Useful Idiot.
It is not reliable. Consider its sources. Consider it considering its sources.
Go look up a law, say, marriage laws. The AI may use encyclopedias or Wikipedia, so it may spit out laws from the Roman era, the Muslim world, America in 1850, or America in 1950.
AI cannot "create" data and AI cannot reason. It can't add known A + B = unknown C.
AI doesn't have parameters unless those are set. AI cannot create data; it can only rearrange data based on parameters set by programmers.
Even then it can't analyze or reason. It just spits.
The programmers are trying to use it in the legal profession, but AI "lies", which is computer-programmer language meaning it returns a false or untrue answer based on "just spit."
Where it is dangerous is that it can fool people.
I notice its use on this forum. I truly do not want to argue religion with AI or Wiki, but the more secular amongst us do seem to be great believers in canned theology.
 
Last edited:
  • Like
Reactions: Hvizsgyak
Upvote 0

Mark Quayle

Monergist; and by reputation, Reformed Calvinist
Site Supporter
May 28, 2018
13,173
5,686
68
Pennsylvania
✟791,441.00
Country
United States
Faith
Reformed
Marital Status
Widowed
From what I understand, GPT-4 was allowed to study the whole internet, so it has all the information from every source on every religion, along with every attack on those religious beliefs and then some. Now, to code morality into these AI programs: looking at a lot of those commandments from different religions, there are many that overlap. I would say those commandments would be the first to go into the program. Anything outside of the common ones should be looked at by a group of 144,000 individuals from all the religions and voted on, to decide whether the commandment should be programmed into the AI.

Of course, this is something the programmers should have done before they let the AI have free access to the internet. But then again, most programmers are like a lot of scientists (look at the computer gaming industry): morals need not apply if you want to make money and a name for yourself.

This is the reason why previous AI attempts have invariably turned "evil" very soon after having Internet access.


Well, what the programmers have done with ChatGPT is to program their own morality into it. It has holes, though.

For instance, you can get ChatGPT to find or create a disrespectful joke about men, but it will refuse to tell such a joke about women...because it's been specifically programmed to respect women, but not specifically programmed to respect men. It doesn't know "respect everyone."

Are we misunderstanding AI?
AI should be termed UI, for Useful Idiot.
It is not reliable. Consider its sources. Consider it considering its sources.
Go look up a law, say, marriage laws. The AI may use encyclopedias or Wikipedia, so it may spit out laws from the Roman era, the Muslim world, America in 1850, or America in 1950.
AI cannot "create" data and AI cannot reason. It can't add known A + B = unknown C.
AI doesn't have parameters unless those are set. AI cannot create data; it can only rearrange data based on parameters set by programmers.
Even then it can't analyze or reason. It just spits.
The programmers are trying to use it in the legal profession, but AI "lies", which is computer-programmer language meaning it returns a false or untrue answer based on "just spit."
Where it is dangerous is that it can fool people.
I notice its use on this forum. I truly do not want to argue religion with AI or Wiki, but the more secular amongst us do seem to be great believers in canned theology.
It's hard enough for a person who knows what he's looking for to find anything online that teaches, for example, the Sovereignty of God over absolutely everything. AI would not even understand the term as it applies to a reasonable (logical) definition of God, because almost everything it reads online is based on the human POV of self-determination.
 
Upvote 0

QvQ

Member
Aug 18, 2019
1,671
729
AZ
✟101,884.00
Country
United States
Faith
Christian
Marital Status
Private
AI cannot learn, any more than an encyclopedia can learn.
AI cannot analyze; therefore it cannot understand.
AI cannot reason. AI cannot add known A + B = unknown C.
AI cannot create a logical sequence and conclude "therefore."
AI takes data, encyclopedia data, perhaps rearranges it according to a programmer's code, and spits it out.
You are absolutely correct when you say that the AI can only use the data it has been fed. Therefore, if it is only fed self-determination, it will only return self-determination.
It is a pause-and-consider moment any time a person is asked, "Is there free will?" A person will "think about it." AI doesn't think. AI simply searches the data banks and spits out whatever is in the soup.
 
Upvote 0

QvQ

Member
Aug 18, 2019
1,671
729
AZ
✟101,884.00
Country
United States
Faith
Christian
Marital Status
Private
I've been doing that regularly to ChatGPT with far less challenging questions. It doesn't even get correct answers to pop culture questions for which there are popular Wikis. Lousy Internet scraping.

And while getting the answer wrong, it still provides an extensive explanation of its wrong answer.

Then, when challenged, it turns to gaslighting. I've gotten ChatGPT to claim sources that I know don't even exist.
The problem of AI giving a wrong answer is called lying.

What you outlined (a wrong answer from lousy internet scraping, then an extensive explanation of AI's wrong answer, and claimed sources that don't exist) is well documented in the attempts to use AI in law and medicine. This flaw is inherent and pervasive in AI.

Also, AI companies have done huge, superfast, unauthorized downloads of online data library sites. That is called thieving.
 
  • Like
Reactions: Hvizsgyak
Upvote 0