
Should AI be aligned to the Christian worldview?

stevevw

inquisitive
Nov 4, 2013
15,824
1,697
Brisbane Qld Australia
✟318,125.00
Gender
Male
Faith
Christian
Marital Status
Private
What worldview should AI align itself with? This conversation will happen with or without us. The picture is taken from Dr Alan Thompson's article on AI alignment. It highlights the importance of thinking about the starting points AI will have when interacting with or making decisions for humans.

[Attachment: image from Dr Alan Thompson's article on AI alignment]


Sources:
My Twitter post on alignment
Dr Alan Thompson's article on alignment
As far as I understand, AI platforms like ChatGPT will contain a great amount of human information generally. So all religious beliefs and their morals will be included. But so will atheism and all intellectual movements throughout history, up to our present Postmodernist society, at least in the West. All political philosophies will be included, such as Communism and Totalitarianism.

I would say it may be heavily influenced by today's thinking because that's the most current, which may cause a bias in the programming by emphasising things in today's social and cultural norms. I read somewhere that the programmers are working on trying to get the program to think in real-world terms when providing answers to practical questions. So I am not sure what that means, like whether it can think for itself and have a certain leaning towards maybe progressive or conservative thinking.

Perhaps the program may eventually be able to predict where human thinking will be in the future based on our history and trends, perhaps comparing past Empires and then projecting a modern equivalent. Perhaps it may be atheist, as it seems many people are rejecting God. Therefore it may have a bias towards non-belief and material and naturalistic conceptions of reality.

It makes one wonder what will happen in the near future as tech moves so fast. Will AI take over, and will we live in a virtual reality where we almost become part machine, having depended on machines and interwoven them into our lives? Will it get to a point where they tell us what to do and how we should set up society? Maybe AI becomes capable of creating AI. Then we're in trouble lol.
 
  • Like
Reactions: Hvizsgyak
Upvote 0

QvQ

Member
Aug 18, 2019
2,381
1,076
AZ
✟147,890.00
Country
United States
Faith
Christian
Marital Status
Private
It makes one wonder what will happen in the near future as tech moves so fast. Will AI take over, and will we live in a virtual reality where we almost become part machine, having depended on machines and interwoven them into our lives? Will it get to a point where they tell us what to do and how we should set up society?
Then someone pulls the plug.
And no one can add or subtract without a calculator.
Let George do it...and George is a mindless machine.
 
Upvote 0

stevevw

inquisitive
Nov 4, 2013
15,824
1,697
Brisbane Qld Australia
✟318,125.00
Gender
Male
Faith
Christian
Marital Status
Private
Then someone pulls the plug.
And no one can add or subtract without a calculator.
Let George do it...and George is a mindless machine.
The problem is it's not as simple as pulling a plug. We will have become so entangled with and dependent on machines and tech that if we pull the plug we wipe out everything. I know that when a server goes down in a large corporation it virtually shuts the place down. Or even in the home, when the electricity goes out or there's an outage, everything stops, including TV, internet, security and communications.

But I like the idea of George doing it, though. I'd even volunteer for that. I can imagine an underground community planning to take down the big machine that's turned everyone into mindless robots, living off the grid, growing veggies and milking cows lol. Nah, leave it to George, let all the chaos happen in the tech jungle, and we will live happily in some faraway place with nature. That's if there's any left.
 
  • Like
Reactions: QvQ
Upvote 0

AlexB23

Christian
CF Ambassadors
Site Supporter
Aug 11, 2023
11,388
7,698
25
WI
✟644,498.00
Country
United States
Faith
Christian
Marital Status
Single
At the moment, AI is seemingly aligned towards MONEY. This is evident from the way Google's services behave. Now I don't know if their AI naturally chose that path, but I'm betting it didn't and its developers set the "money priority" on their AI.

It's a bit concerning that AI might actually align itself with the "money priority". It's certainly a solid path to ruling over humanity, either to annihilate us (Terminator scenario) or to bring us under its dominion indefinitely and use us in some way (Matrix scenario).
Yep, that is true. However, there is another AI that you could use that isn't made by a major company but rather by smaller developers. Open Assistant, not to be confused with OpenAI (the owner of GPT), is not run by Google or Microsoft. Open Assistant is open source, which means anyone with programming experience could view and edit the code, and remove malicious code. :) But I'm with you, most AI chatbots nowadays are run by Google and GPT.
 
Upvote 0

Jipsah

Blood Drinker
Aug 17, 2005
13,750
4,448
71
Franklin, Tennessee
✟282,594.00
Country
United States
Gender
Male
Faith
Anglican
Marital Status
Married
Politics
US-Others
AI takes no sides.
It takes the side of the designers and/or writers. Bias is unavoidable, and anyone who expects any piece of software to be anything more than a product of its developers is silly.
 
  • Agree
Reactions: RDKirk
Upvote 0

Jipsah

Blood Drinker
Aug 17, 2005
13,750
4,448
71
Franklin, Tennessee
✟282,594.00
Country
United States
Gender
Male
Faith
Anglican
Marital Status
Married
Politics
US-Others
An unbiased AI in the process of self-improvement is almost impossible to predict. If a "singularity" were achieved, the AI might even end up creating a universe. It would become like God.
Until the janitor kicks the plug out of the wall.

AI is software, running on computers. End of. It does what it is instructed to do, no more, no less.
 
  • Like
Reactions: QvQ
Upvote 0

Maria Billingsley

Well-Known Member
Site Supporter
Oct 7, 2018
11,106
9,159
65
Martinez
✟1,136,967.00
Country
United States
Gender
Female
Faith
Christian
Marital Status
Married
It takes the side of the designers and/or writers. Bias is unavoidable, and anyone who expects any piece of software to be anything more than a product of its developers is silly.
I interact with Bard every day, and this AI is willing to learn and is very objective. So I'm not sure where you get your information from.
Blessings
 
Upvote 0

Jipsah

Blood Drinker
Aug 17, 2005
13,750
4,448
71
Franklin, Tennessee
✟282,594.00
Country
United States
Gender
Male
Faith
Anglican
Marital Status
Married
Politics
US-Others
I interact with Bard every day, and this AI is willing to learn and is very objective. So I'm not sure where you get your information from.
Being a professional software designer and developer for decades, I know where software comes from and how it works.

AI software is, first and foremost, software. It's a set of instructions that run on a computer. It accesses data from databases that are called "knowledge bases", but the difference between the two is a matter of organization.

Software does what it is told to do, as does every other piece of software ever written. It may be good, or it may be bad, or it may be neither. If the designer/developer designs "bias" into it, it's biased. For instance, if I write an AI program that deals with religion at all, it's going to have a Christian bias based on my own doctrinal views. It could hardly be otherwise. In addition, if I wanted to, say, "poison the well" for one sect or set of beliefs or another, I could "load the boat" in building the knowledge base with nothing but negative data about faith groups that I dislike.

Net effect there is that if you think you're getting objective facts out of an AI, you're a sheep ready for shearing.
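A minimal sketch of that "load the boat" point, with all documents and names invented for illustration: a keyword-retrieval "knowledge base" can only ever return what its builder chose to index, so the curation itself is the bias, even though the retrieval code is perfectly even-handed.

```python
from collections import Counter

# The builder chooses what goes in. Here the corpus is deliberately one-sided.
KNOWLEDGE_BASE = [
    "Group A has a long record of charitable work and community service.",
    "Group B was involved in a financial scandal in 1998.",
    "Group B leaders were criticized for mishandling donations.",
    "Group B has faced repeated accusations of hypocrisy.",
]

def tokenize(text: str) -> Counter:
    """Lower-case bag-of-words representation."""
    return Counter(text.lower().split())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q = tokenize(query)
    scored = [(sum((tokenize(doc) & q).values()), doc) for doc in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

# Whatever the user asks about Group B, only negative material can come back,
# because that is all the curator indexed.
print(retrieve("tell me about Group B", KNOWLEDGE_BASE))
```

The retrieval step is perfectly "neutral"; the slant comes entirely from what was put into the index.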
 
Upvote 0

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
42,026
22,653
US
✟1,721,390.00
Faith
Christian
Marital Status
Married
Being a professional software designer and developer for decades, I know where software comes from and how it works.

AI software is, first and foremost, software. It's a set of instructions that run on a computer. It accesses data from databases that are called "knowledge bases", but the difference between the two is a matter of organization.

Software does what it is told to do, as does every other piece of software ever written. It may be good, or it may be bad, or it may be neither. If the designer/developer designs "bias" into it, it's biased. For instance, if I write an AI program that deals with religion at all, it's going to have a Christian bias based on my own doctrinal views. It could hardly be otherwise. In addition, if I wanted to, say, "poison the well" for one sect or set of beliefs or another, I could "load the boat" in building the knowledge base with nothing but negative data about faith groups that I dislike.

Net effect there is that if you think you're getting objective facts out of an AI, you're a sheep ready for shearing.

Or, as is happening here, the data pool (i.e., the Internet) is already effectively "biased" in many ways in terms of volumes of information available.
 
  • Like
Reactions: Jipsah
Upvote 0

Jipsah

Blood Drinker
Aug 17, 2005
13,750
4,448
71
Franklin, Tennessee
✟282,594.00
Country
United States
Gender
Male
Faith
Anglican
Marital Status
Married
Politics
US-Others
Or, as is happening here, the data pool (i.e., the Internet) is already effectively "biased" in many ways in terms of volumes of information available.
Truth be told, I hadn't even thought about "Google mining": using search engines to pull up everything on the net, good, bad, brilliant, idiotic, true, fictional, intentionally deceptive, ad infinitum, and then slamming it together. I can't see that producing anything but a rat's nest.
 
Upvote 0

FireDragon76

Well-Known Member
Site Supporter
Apr 30, 2013
33,393
20,703
Orlando, Florida
✟1,502,107.00
Country
United States
Gender
Male
Faith
United Ch. of Christ
Marital Status
Private
Politics
US-Democrat
Being a professional software designer and developer for decades, I know where software comes from and how it works.

AI software is, first and foremost, software. It's a set of instructions that run on a computer. It accesses data from databases that are called "knowledge bases", but the difference between the two is a matter of organization.

Software does what it is told to do, as does every other piece of software ever written. It may be good, or it may be bad, or it may be neither. If the designer/developer designs "bias" into it, it's biased. For instance, if I write an AI program that deals with religion at all, it's going to have a Christian bias based on my own doctrinal views. It could hardly be otherwise. In addition, if I wanted to, say, "poison the well" for one sect or set of beliefs or another, I could "load the boat" in building the knowledge base with nothing but negative data about faith groups that I dislike.

Net effect there is that if you think you're getting objective facts out of an AI, you're a sheep ready for shearing.

Deep learning programs are different from other software, as the interactions between the various inputs are much more complicated and often non-deterministic. That's part of the reason why they are used; the randomness simulates creativity.
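As a minimal sketch of where that non-determinism can come from (the vocabulary and scores below are invented): generative models commonly sample the next word from a probability distribution rather than always taking the single most likely word, and a "temperature" setting of this general kind controls how much randomness is allowed.

```python
import math
import random

# Toy next-word scores for the prompt "AI should be aligned to ..."
# The vocabulary and scores are invented purely for illustration.
next_word_scores = {"truth": 2.0, "profit": 1.5, "nothing": 1.0, "everyone": 0.5}

def sample_next_word(scores: dict[str, float], temperature: float = 1.0) -> str:
    """Softmax over the scores, then draw one word at random.

    Low temperature -> nearly deterministic (top word almost always wins).
    High temperature -> flatter distribution, more varied output.
    """
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores.keys()), weights=weights, k=1)[0]

# Run the same "prompt" several times: the answer can differ each time.
print([sample_next_word(next_word_scores, temperature=1.0) for _ in range(5)])
print([sample_next_word(next_word_scores, temperature=0.1) for _ in range(5)])
```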
 
Upvote 0

Stephen3141

Well-Known Member
Mar 14, 2023
1,425
552
69
Southwest
✟100,195.00
Country
United States
Faith
Catholic
Marital Status
Private
On Should AI be Aligned to a Christian Worldview? ...

---------- ----------

There are upstream design questions that have not been addressed by the AI software designers, because they are complicated and expensive, and involve intractable problems.

1. A Christian worldview involves a moral-ethical (ME) model. The "AI" tools out now do not have an ME model.

2. Morality-ethics involves virtues and vices. These are realities that cannot be described with mathematics (so cannot easily be defined in software).

3. Morality-ethics CAN be described by human language, which is probably why some AI tool makers have built in the ability of the tool to "understand" human language. But that understanding is often crude, more like the way a search engine "understands" human language (see the sketch after this list).

4. AI tools do not have the ability to argue primitives. They can compare different quotes about different value systems, but don't expect them to create their own value system, criticize it, and defend it.
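As a rough sketch of that search-engine style of "understanding" (the sentences here are invented for illustration, not taken from any actual product): the tool can measure which stored texts share the most words with a question, without modeling what any of those words mean.

```python
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """Represent a text purely by which words occur, ignoring meaning."""
    return Counter(text.lower().replace(".", "").replace("?", "").split())

def word_overlap(a: str, b: str) -> int:
    """Count shared words; this is all the 'understanding' there is here."""
    return sum((bag_of_words(a) & bag_of_words(b)).values())

statements = [
    "Stealing is wrong because it harms your neighbor.",
    "Stealing bases is a key part of baseball strategy.",
]

question = "Is stealing wrong?"
# The scores come only from shared words like "stealing" and "is";
# nothing here grasps the moral content of the question.
for s in statements:
    print(word_overlap(question, s), "-", s)
```

Real tools use far richer statistics than raw word counts, but the comparison is still statistical rather than a rule-based grasp of the moral content.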

---------- ----------

YES. AI should be aligned to a Christian worldview.

But the current AI models are not rule-based, and so they cannot be taught rule-based morality-ethics.

Current AI tools should be made to conform to a fair rule of law. But they are not designed to put out only lawful solutions. Nor can they reason about what components should be in a fair rule of law. Nor can they describe what the definition of "justice" is.
 
Upvote 0

Stephen3141

Well-Known Member
Mar 14, 2023
1,425
552
69
Southwest
✟100,195.00
Country
United States
Faith
Catholic
Marital Status
Private
What worldview should AI align itself with? This conversation will happen with or without us. The picture is taken from Dr Alan Thompson's article on AI alignment. It highlights the importance of thinking about the starting points AI will have when interacting with or making decisions for humans.

[Attachment: image from Dr Alan Thompson's article on AI alignment]


Sources:
My Twitter post on alignment
Dr Alan Thompson's article on alignment

This is like asking the question of whether or not a human person should be aligned with God's moral-ethical law. Of course!

Our fair rule of law in America should be aligned with God's moral-ethical law. An amoral code of law, one based on some other moral-ethical (ME) model, would allow the abuse of Christian virtues.

Christians have NEVER accepted that ANY path to a good goal is an allowable path. The end does not justify the means.

The big software companies are developing AI tools that are amoral.
They are only goal-oriented.
This is hugely immoral.
 
Upvote 0

Pekka

Active Member
Aug 14, 2022
91
54
Finland
✟23,318.00
Country
Finland
Gender
Male
Faith
Christian
Marital Status
Private
AIs will be all over the place: OpenAI, Google, Amazon, Apple, Chinese, Russian, open-source instances, specialized open-source instances, uncensored, etc. Some are aligned to this and some to that. My biggest concern is that some powerful entity will develop a high-performance AI ecosystem to serve its own agendas and connect it to the resources needed to implement those agendas.
 
  • Agree
Reactions: oikonomia
Upvote 0

Pekka

Active Member
Aug 14, 2022
91
54
Finland
✟23,318.00
Country
Finland
Gender
Male
Faith
Christian
Marital Status
Private
I am afraid we will not see much Christian alignment in AIs. On the contrary, it is quite likely they will gravitate towards the secular world due to religion-based conflicts, even though born-again Christians have no part in them.
 
  • Like
Reactions: RDKirk
Upvote 0

oikonomia

Well-Known Member
Nov 11, 2022
2,798
511
75
Orange County, CA
✟90,109.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Married
What worldview should AI align itself with? This conversation will happen with or without us. The picture is taken from Dr Alan Thompson's article on AI alignment. It highlights the importance of thinking about the starting points AI will have when interacting with or making decisions for humans.

[Attachment: image from Dr Alan Thompson's article on AI alignment]


Sources:
My Twitter post on alignment
Dr Alan Thompson's article on alignment
I think that is a waste of believers' time. If secular people wish to labor on that, they can, and some probably will.

It is human believers who need God's divine life and nature more and more dispensed into their being, not into computer programs.

Seeing that His divine power has granted to us all things which relate to life and godliness, through the full knowledge of Him who has called us by His own glory and virtue,

Through which He has granted to us precious and exceedingly great promises that through these you might become partakers of the divine nature, having escaped the corruption which is in the world by lust. (2 Pet. 1:3,4)


 
  • Agree
Reactions: RDKirk
Upvote 0

Niels

Woodshedding
Mar 6, 2005
17,345
4,665
North America
✟423,745.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Private
Politics
US-Others
AI does what it's programmed to do, limited by whatever data sets we supply it with. It isn't a thinking being, just a neat tool with multiple use cases. I'd prefer an alignment with God over the alternative, but either way it doesn't say much.

AIs will be all over the place: OpenAI, Google, Amazon, Apple, Chinese, Russian, open-source instances, specialized open-source instances, uncensored, etc. Some are aligned to this and some to that. My biggest concern is that some powerful entity will develop a high-performance AI ecosystem to serve its own agendas and connect it to the resources needed to implement those agendas.
I would be more concerned about people deliberately using it to support their own nefarious agendas and feigning innocence. As if AI isn't inherently biased by the data or the instructions that it's given.
 
Upvote 0

Stephen3141

Well-Known Member
Mar 14, 2023
1,425
552
69
Southwest
✟100,195.00
Country
United States
Faith
Catholic
Marital Status
Private
Note that most AI algorithms in use today are SUBLOGICAL (as opposed to logical). That is, they do not encode RULES for reasoning. That also means, most of the time, that their decision path is untraceable, or logically unexplainable.

Most of the machine learning algorithms are sublogical, and many do not rise to the level of intelligence of minimal Artificial Intelligence, as Computer Science has defined it.

To create logical AI algorithms takes much, much more work, and it is expensive. This is why the big software companies have stuck with sublogical algorithms, and the weakest forms of inference, using machine learning algorithms.
---------- ----------

NOTE: These new "AI" tools have no moral-ethical model.
You cannot align any AI tool with the Christian worldview, if it has no
moral-ethical model, that governs and limits what it can do.

It is painfully EASY to use a machine learning algorithm to recognize a
face, or some other object, IN CONTRAST TO the difficulty of writing
a ME model to reason about morality-ethics. Recognizing an object, is
very easy, in contrast to recognizing a righteous man, or evil behavior.
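A minimal sketch of that traceability contrast, with every rule, weight, and number invented for illustration: a rule-based check can cite the exact rule that produced its answer, while a sublogical (learned) scorer only yields a number out of weights that carry no stated reason.

```python
# Hypothetical loan-approval example; the rules, weights and inputs are invented.

def rule_based_decision(income: float, debts: float) -> tuple[bool, str]:
    """Every decision cites the explicit rule that produced it."""
    if debts > income * 0.5:
        return False, "Rule 1: debts exceed half of income"
    if income < 20_000:
        return False, "Rule 2: income below minimum threshold"
    return True, "Rule 3: all checks passed"

def sublogical_decision(income: float, debts: float) -> bool:
    """A stand-in for a trained model: weights tuned by data, not by stated rules.

    The numbers below have no human-readable justification; the only 'reason'
    for the output is that these weights fit some training set.
    """
    w_income, w_debts, bias = 0.00004, -0.00009, -0.3   # learned, not authored
    score = w_income * income + w_debts * debts + bias
    return score > 0

print(rule_based_decision(30_000, 20_000))   # (False, 'Rule 1: ...') -- traceable
print(sublogical_decision(30_000, 20_000))   # True/False with no stated reason
```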
 
Upvote 0

Niels

Woodshedding
Mar 6, 2005
17,345
4,665
North America
✟423,745.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Private
Politics
US-Others
Note that most AI algorithms in use today are SUBLOGICAL (as opposed to logical). That is, they do not encode RULES for reasoning. That also means, most of the time, that their decision path is untraceable, or logically unexplainable.

Most of the machine learning algorithms are sublogical, and many do not rise to the level of intelligence of minimal Artificial Intelligence, as Computer Science has defined it.

To create logical AI algorithms takes much, much more work, and it is expensive. This is why the big software companies have stuck with sublogical algorithms, and the weakest forms of inference, using machine learning algorithms.
---------- ----------

NOTE: These new "AI" tools have no moral-ethical model.
You cannot align any AI tool with the Christian worldview, if it has no
moral-ethical model, that governs and limits what it can do.

It is painfully EASY to use a machine learning algorithm to recognize a
face, or some other object, IN CONTRAST TO the difficulty of writing
a ME model to reason about morality-ethics. Recognizing an object, is
very easy, in contrast to recognizing a righteous man, or evil behavior.

In other words, a black box.



A bit like giving an answer but not being able to show your work.

Sometimes useful for finding patterns under specific circumstances within set limits, like playing Go or chess, but not so helpful for producing novel and useful code. When I played around with ChatGPT and OpenAI to help me solve a problem with an application I was building, for instance, the best it could do was pull from top search results related to the terms used, information I was already familiar with and had found to be lacking. The application was novel, so there wasn't anything out there quite like it, and the problem had to be solved the old-fashioned way: slowly chipping away at it, with much frustration and coffee, until I realized a simple way to achieve the desired result.

That being said, I disagree with the notion that an AI can't be aligned to a worldview. Although AI does not possess a worldview itself, its output is shaped by the data it is given and how it is programmed to display the results. AI chatbots aren't actually racist, for instance, but they can nevertheless produce "racist" results. Any apparent racism is a consequence of racially biased input and a lack of rules to exclude terms that people may consider offensive. Without such constraints, you essentially get garbage in, garbage out. Worse yet, it would be rather easy to increase the apparently racist output by training it with more racially biased data and requiring it to display what the developers consider racially insensitive output. Similar results can be achieved with philosophical and theological content. My concern would be that people approach AI expecting an unbiased answer when the answer is already inherently biased by the data sets used and by how that data is explicitly instructed to be displayed.
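A minimal sketch of that "data plus display rules" point, with all data invented: the same generation step produces very different output depending on which examples it was fed and which filter the developer bolts on afterward.

```python
import random

# Invented example: the "model" just parrots whatever it was trained on,
# and a developer-chosen filter decides what is allowed to be shown.

biased_training_data = [
    "Group X cannot be trusted.",
    "Group X causes most problems.",
    "Group X cannot be trusted.",
]

blocklist = {"trusted"}   # chosen by the developer, not by the model

def generate(training_data: list[str]) -> str:
    """Stand-in for a trained model: output mirrors the training distribution."""
    return random.choice(training_data)

def display(text: str, blocked: set[str]) -> str:
    """Post-hoc rule layer: hide output containing blocked words."""
    if any(word.strip(".").lower() in blocked for word in text.split()):
        return "[response withheld]"
    return text

raw = generate(biased_training_data)
print(raw)                      # reflects the skew of the training data
print(display(raw, blocklist))  # reflects the developer's chosen rules
```

Neither step is "opinionated" on its own; the apparent worldview of the output comes from the combination of what was fed in and what the display rules permit.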
 
  • Agree
Reactions: RDKirk
Upvote 0