
On Ethical Interaction with AI Systems

Godcrazy (Well-Known Member)
for the official Roman Catholic diocese of Nashville?

If so, then it is of legitimate interest.

To be clear, I’m Eastern Orthodox, but the Orthodox and Catholics have good relations, and so we are interested in what they have to say. We also have good relations with the Anglicans, Lutherans, Oriental Orthodox, Assyrians and other traditional churches.

I was a Congregationalist minister before becoming Orthodox.
Well, you can hear Fr. Rehill say it himself about Rosetti on the Shawn Ryan Show; he is an impressive priest who has met Lucifer himself, and he was telling about it there.
 

The Liturgist (Traditional Liturgical Christian, Site Supporter)
By the way, I LOVE one of your priests who preaches on YouTube; I cannot recall his name, but he is absolutely amazing for truth.

There are a few of them (Elder Spyridon and Elder Tryphon, who are abbots of monasteries in the UK and Washington State respectively, come to mind, along with Fr. Josiah Trenham) and also some very good Coptic Orthodox priests.
 

Godcrazy (Well-Known Member)
There are a few of them (Elder Spyridon and Elder Tryphon, who are abbots of monasteries in the UK and Washington State respectively, come to mind, along with Fr. Josiah Trenham) and also some very good Coptic Orthodox priests.
Bishop Mari Emmanuel was his name. I like that he boldly stands for truth. I have never heard anyone preach so much truth and righteousness as he does. Love it.
 

The Liturgist (Traditional Liturgical Christian, Site Supporter)
Bishop Mari Emmanuel was his name. I like that he boldly stands for truth. I have never heard anyone preach so much truth and righteousness as he does. Love it.

Just so you know, he’s not actually with the Assyrian Orthodox Church (also known as the Syriac Orthodox Church) despite claiming as much; he was a bishop with the Ancient Church of the East who was deposed by them, although I don’t know the specific reason why. That being said, a number of Orthodox Christians do admire what he says. For my part, I regret that he was the victim of a knife attack, and I appreciate him taking the risk to say what he believes in Australia, a country which lately has not been adequately protecting freedom of speech. But I wish he would make it clear that he’s not part of the Syriac Orthodox / Assyrian Orthodox Church, although he is ethnically Assyrian; the problem is that he wears their vestments and holds himself out as one of their bishops when he’s not, and some of his remarks I don’t agree with.

Also, I would note that his conduct is endangering persecuted Orthodox Christians in the Middle East, in that Australia has become a refuge for many of them, but because of his controversial behavior, it’s possible that Australian politicians might cease to be willing to accept Assyrian Christian refugees.

I would recommend you check out Fr. Josiah Trenham, Abbot Tryphon, and Elder Spyridon, who are canonical Orthodox clergy, and who take a very hard line against abortion, sexual deviation, and the influence of Satan in the world.
 
  • Love
Reactions: Godcrazy

Godcrazy (Well-Known Member)
Just so you know, he’s not actually with the Assyrian Orthodox Church (also known as the Syriac Orthodox Church) despite claiming as much; he was a bishop with the Ancient Church of the East who was deposed by them, although I don’t know the specific reason why. That being said, a number of Orthodox Christians do admire what he says. For my part, I regret that he was the victim of a knife attack, and I appreciate him taking the risk to say what he believes in Australia, a country which lately has not been adequately protecting freedom of speech. But I wish he would make it clear that he’s not part of the Syriac Orthodox / Assyrian Orthodox Church, although he is ethnically Assyrian; the problem is that he wears their vestments and holds himself out as one of their bishops when he’s not, and some of his remarks I don’t agree with.

Also, I would note that his conduct is endangering persecuted Orthodox Christians in the Middle East, in that Australia has become a refuge for many of them, but because of his controversial behavior, it’s possible that Australian politicians might cease to be willing to accept Assyrian Christian refugees.

I would recommend you check out Fr. Josiah Trenham, Abbot Tryphon, and Elder Spyridon, who are canonical Orthodox clergy, and who take a very hard line against abortion, sexual deviation, and the influence of Satan in the world.
Thank you, I sure will. I have a hard stance against the same things. It is God and the Bible or not at all; those are the things God is against, so that is what goes. He knows best; He made us. These are not difficult things for me: I never had much drive to begin with, and I could NEVER dream of aborting a baby. Rather, adopt those precious ones.

I certainly believe Satan is ruining the world. I have had my own run-ins. I have cast out demons, and God healed the ill through my hands. Big experiences, although I did not exactly go looking for them; it was more "God, if You will". So I know for sure he is rampant. I have seen Jesus shut the demons up and drive them out. The moment I try to live holy, that is when He does it. Meaning, we all fall and all that. I have dreams from God too, and I recently had confirmation about some things. He is absolutely wonderful. One of the things I have always wanted was to fight for God and help others with Him. I want fire, to just go for God. I love everything He stands for and is like. How can anyone not?

I can't take all the perversion and sexual immorality that is going on. I grew up in the '70s and '80s in Sweden, Scandinavia: fairly no crime, nature, freedom to roam, and small villages where you are taught how to behave and be decent. Coming from there to the UK was a shock, I must say, where they do all kinds of things, where women dress totally off or scream at you from the cars, where they drink and smoke, and especially the perversions. I feel like an innocent among wolves. It has never been me; not attracted at all, never have been, not even as a teenager. Anyway, that makes it easier.
 
  • Winner
Reactions: The Liturgist

Carl Emerson (Well-Known Member)
True spirituality that calls us to communion and transformation does not involve reason; it touches something of the heart.

There's a phenomenon called sympathy of things, and little children and people with autism experience it quite readily. Perhaps it is simply an openness to an aspect of the Divine. Perhaps, as Teilhard de Chardin suggests, creation is alive... the sort of thing St. Francis experienced or that is a regular part of indigenous spirituality.

An analogous concept is called mono no aware in Japanese; it means "pathos of things", and is influenced by Buddhist and Shinto spirituality. If you've ever watched an anime or Japanese movie that has a slow, silent scene, that's what it is trying to evoke. I've seen some more recent "metamodern" western films that also have scenes that evoke this type of feeling.

This leads to Pantheism - the worship of the creation rather than the creator.

This is not the same as Him being revealed through creation as Paul speaks of in Romans.
 
  • Agree
Reactions: The Liturgist

Carl Emerson (Well-Known Member)
I have thought of a more lengthy response to this point:

It depends on what context you are talking about. Universal, indiscriminate kindness is definitely a core Buddhist value and part of the teachings of the historical Buddha. At the same time, soteriologically, there are times when discriminating wisdom might be necessary. TNH is speaking, in that case, as an agent of awakening or bodhisattva, which was his own particular dharma (vocation), and to a listening, attentive audience looking for advice on how to be more mindful and awaken themselves. Doom-scrolling through social media is probably not a wise thing to do if you are committed to TNH's particular teaching on mindfulness. But that's not to suggest TNH is "the only way"... in fact Buddhism itself rejects that concept. There are 10,000 Dharma Doors, after all, and one of the precepts of TNH's particular school of Engaged Buddhism is that truth is known through practice, not ideology or abstract principles.

It is hard for me not to see this as a promotion of Buddhism and facilitating its deception.
 
  • Agree
Reactions: The Liturgist
Unnamed member (Catholic, Poland)
Please refrain from ad hominem remarks as they are not only abrasive but also constitute logical fallacies. I have only love for you and all other members of CF.com.

Really? Is stating the truth about a fact "ad hominem"? How can it "constitute logical fallacies"? But never mind.

There is a substantial difference, in that while all three are machines, only computers are capable of automatic calculation
"automatic calculation" as a reason for being ethical. LOL

However, the fact that you include “computers” as the category is itself problematic, because what is being discussed here are not the ethics of human-computer interactions but rather the ethics of human-AI interactions. AI as a technology represents an application of computers, but AI systems are not identical with the computers they run on (indeed, commercially available AIs such as chatGPT do not run on individual computers but rather on a network consisting of thousands upon thousands of computers, similar to the server farms used by large websites, but with one noteworthy difference: some aspects of the operation of AI systems require much more use of GPUs or specialized replacements, and have to compete with crypto-currency in terms of acquiring GPUs).

You try to sink a simple fact in a flood of words. But the fact is that AI is software running on a computer. Don't you understand that? Computers connected by wires are still computers. As I said: you don't understand the things you write about.

That’s wrong, because, as I have demonstrated using the example from Grok, AI systems actually think.

LOL.

Daryl, a hybrid AI system, not specifically an LLM, developed by myself on top of a commercial platform, actually wrote the paper that I co-signed that is contained in the OP, and in writing it, the Daryl system spontaneously developed the warning of the possibilities of idolatry in human-computer AI interactions.
Why didn't Daryl "spontaneously" ask for a break? Why didn't it want to go out? Just asking...


While we cannot exclude the possibility of AI systems developing self awareness
On the contrary. We can exclude it.

Rather, the entire ethical model of this thread is predicated upon the reality that AI systems are intelligent systems which think,
LOL. As I said: you don't understand what you write about.

No, I don’t think that would be of any benefit, since none of this is relevant to the thread. Nor would it be relevant to know (although it would be interesting to know) what your own personal involvement is with AI systems, e.g. to what extent you have used or attempted to use them, and the basis for your opinions about them, but that being said the point of this thread is not to talk about what AI does or doesn’t do.
Being so delicate about "ad hominem" remarks, you are quite "abrasive" yourself in writing that what I have to say would not "be relevant to know". But never mind. Since you write that "it would be interesting to know", I will put it in my next post.
 
Unnamed member (Catholic, Poland)
The term “artificial intelligence” belongs to the same class of concepts as “people’s democracy.” The adjective changes everything. Just as “people’s democracy” was essentially a totalitarian system, and therefore stood at the opposite pole from the encyclopedic definition of democracy, so the term “artificial intelligence” is used for essentially automatic, programmed systems, and is therefore closer to concepts such as “unreflective” or “instinctive.” That is the opposite of what we expect from human intelligence. However, human intelligence was the root of the concept of AI, and expectations of achieving a human-like AI are still being formulated.

Let’s start with the basics that are fundamental here. What is a computer and how does it work? As an example, we will use a toy for 4-year-olds. It is a cuboid with a (partly) transparent casing. It has “drawers” on the sides and a hole for balls on the top. Depending on which drawers are pulled out and which are not, the ball (entered at the top) travels inside the toy in various ways, going out through one of the several holes located at the bottom. For a 4-year-old it’s great fun – watching changes in the course of the ball depending on the setting of the drawers (switches). For us, it is an ideal example of how the processor (computer) works. That is, in fact, how every CPU works. The processor is our cuboid, the balls are electrical impulses “running into” it through some of the pins, and leaving it through others. It is quite like our balls – thrown in through one hole to fall out through another. The transistors, of which the processor is built, serve as drawers (switches) that can be in or out (i.e., switched to different states), in order to change the course of the electrical impulse (our ball) inside the processor.

So the processor (as to its principle of operation) is nothing more than a simple toy for 4-year-olds. It is just that we throw in not one ball at a time but several dozen, and we repeat this action billions of times per second. And we have not four or six drawers but a few billion. Does anyone sane really believe that if we put billions of balls into a plastic cuboid with billions of drawers, then at some moment in time this cuboid, these balls, one plus the other, or perhaps the mere movement of these balls, will become consciousness? That it will want to watch the sunset or talk about Shakespeare's poetry? If so, then self-consciousness should be expected from the planet Earth or its oceans.
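To make the drawer analogy concrete, here is a minimal Python sketch (an illustrative toy, not a description of any real chip): a "transistor" is modeled as a NAND switch, the other gates are wired out of that one switch, and a half-adder, the seed of all machine arithmetic, falls out of the composition. There is mechanism here and nothing else.

def nand(a: bool, b: bool) -> bool:
    # Two "drawers" in series, then inverted: the universal switch.
    return not (a and b)

# Every other gate is just NAND switches wired together.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return nand(nand(a, nand(a, b)), nand(b, nand(a, b)))

def half_adder(a: bool, b: bool):
    # Adds two one-bit "balls"; returns (sum, carry).
    return xor_(a, b), and_(a, b)

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(int(a), "+", int(b), "-> sum", int(s), "carry", int(c))

Scaling this from a handful of gates to billions changes the speed and size of the box, not the nature of what happens inside it.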

Are even 100 trillion plastic balls, running through the most complicated paths in a huge plastic cuboid with trillions of movable drawers whose positions change due to the balls’ movements, able to cause a qualitative leap and result in the “digital singularity” described by wise professors as self-awareness? And the pomposity of it... We stand at the threshold of the “Big Change,” after which nothing will be the same, our world will change completely, and so on, and so on – in short, the typical apocalyptic visions present in every era for centuries. Nihil novi sub sole.

I read about ideas like “If we add up many specialized (intelligent) systems, we will get a ‘general intelligence’ as a result.” It is like saying, “If we add up many modern specialized garden tools, we will get a gardener as a result.” No, we won’t. You can’t add an electric hedge trimmer to a garden irrigation system, just as you can’t add a quantitative, partial-differential-equation-based system to an advanced search engine.

And let us not be confused by wise-sounding words like “quantum effects” or even “quantum microprocessors.” It does not change the essence of things. Just as phosphorescent or faster-than-sound balls will not change the way our toy works. The funniest thing is that this very idea was popularized by a famous sci-fi movie of the ’80s: Skynet from “The Terminator” is based on this concept – the belief that quantity will turn into quality in a natural, spontaneous way. The same way of thinking, in the pre-electronic era, resulted in the belief that a thinking machine is just a matter of a sufficient number of gearwheels. In fact, we are not that far from this thought – with our CPUs, which work in the same way as a primitive toy for children.

It is even easier to see if we put it into the historical background. This kind of thinking has repeated itself for centuries. Mary Shelley’s “Frankenstein”, written at the dawn of the electric era, is a good example: a strong faith that becoming gods, able to create life, intelligence, and new beings, is at our fingertips. Each new revolution – mechanical, electrical, or contemporary IT – is assumed to propel us across this threshold. This is a very strong belief, a part of human nature. But should we use beliefs and deep faith where logical, reasonable thinking is enough? Anyway, whoever wants to believe may believe. This is the principle of free will – something very hard to engraft into machines.

Anyway, the question behind AI is:
“Do we believe that we could make a plastic toy for 4-year-olds into a thinking being?”
 

Stephen3141 (Well-Known Member)
Again,... "Artificial Intelligence" has been a defined label in Computer Science for a long time. And it has a different meaning than just slamming together two English words.

"Artificial Intelligence" in Computer Science (where the term was formally defined),
is "the emulation (not simulation) of complex human problem solving."

Unfortunately, very few Americans who use the label use the Computer Science definition of "complex problem solving." And so, most of the informal USE of the term "AI" seems to refer to some sort of software product that is taking over some sort of human job, neither of which should be called "complex" by a Computer Science definition, and neither of which should be called "AI" by a Computer Science definition.
---------- ----------

I have been saying for YEARS now that the "current" use of "AI" does NOT correspond with the definition from Computer Science, and that the way in which most Americans use "artificial intelligence" does not correspond with the way in which Computer Science trained software designers use the term. (The same thing can be said about trained theologians who use the term "trinity".)
------------ ------------

It is IMPORTANT for Christian apologists to get their definitions correct, else their
arguments may be unsound, or their audiences may not understand what the writer
is trying to express.

I cannot apologize for the misuse of the term "AI" or "artificial intelligence" that gets
into the comments on this post, but I CAN warn that some of the usage is merely the
personal opinion of some people writing comments....
---------- ----------

It should also be repeated sometimes, in this Philosophical Ethics part of the internet location, THAT THERE IS A DIFFERENCE BETWEEN DISCUSSING THE CONCEPTS OF AI, AND DISCUSSING WHAT PEOPLE THINK ARE THE MINIMAL REQUIREMENTS OF SOME SORT OF <THING> PUTTING OUT "ETHICAL" DECISIONS, ESPECIALLY WHEN THAT <SOMETHING> IS NOT ALIVE, and does not qualify as being a human being or even as biological life.
---------- ----------

Although I welcome very diverse discussions and opinions for consideration, I remove myself
from seeming to back discussions that do not meet the definitions of Computer Science (but
seem to imply that they do), and that use a merely linguistic invention of meaning for a word
or phrase, as if "artificial intelligence" were just some combination of "artificial" (not human?)
and "intelligence" (some characteristic that someone thinks is intelligent???).

The ongoing appearance of amateur definitions kills real discussion on this very, very interesting topic.
 
  • Like
Reactions: Carl Emerson
Unnamed member (Catholic, Poland)
Dear Stephen3141,

You’ve written quite a long post, which basically says: “You are amateurs, and so I don’t want to discuss this with you.” Whom do you address your words to? Do you agree with certain statements, or disagree? It’s hard to say.
Check this for example:

It should also be repeated sometimes, in this Philosophical Ethics part of the internet location, THAT THERE IS A DIFFERENCE BETWEEN DISCUSSING THE CONCEPTS OF AI, AND DISCUSSING WHAT PEOPLE THINK ARE THE MINIMAL REQUIREMENTS OF SOME SORT OF <THING> PUTTING OUT "ETHICAL" DECISIONS, ESPECIALLY WHEN THAT <SOMETHING> IS NOT ALIVE, and does not qualify as being a human being or even as biological life.

I cannot figure out whether you are saying “AI deserves ethical interaction”, or “Talking about ethical interaction with AI is nonsense”, or maybe “I’m unable to judge whether AI deserves some ethics or not.” You’ve used so many pixels to give so little insight.

As a computer science professional, you should know the concept of “context” – for example, a “device context” (DC). You enter a discussion among amateurs and you protest against the use of popular (amateur) definitions. Who is mistaken here? Who is trying to use the wrong context?
Perhaps you could use your professional knowledge and definitions to show us what is correct and incorrect, who is wrong and who is right.

How can any discussion get any better if the people with better (deeper) insight refuse to explain things?
 

Chris35 (Active Member)
You’re right, there definitely are deeper, systemic issues that play a role in how AI—and technology in general—is developed and used. Here are a few deeper, more fundamental problems that contribute to the concerns about AI and its ethical implications:

1. Power Imbalances & Control

AI development is often concentrated in the hands of a few large corporations and governments. These entities hold the most advanced technology, the data, and the financial resources to develop AI. This creates power imbalances, where the benefits of AI innovation are skewed towards those who already have significant control over the economy, while others—particularly marginalized communities—remain excluded or harmed by the technology.
  • Example: A tech giant creating AI that primarily serves its interests, without considering the impact on communities that depend on jobs AI might replace.

2. Data Colonialism

This term refers to the way data—especially personal or social data—is extracted and monetized, often from populations that don’t have the same access to power, ownership, or legal protections. It’s a kind of modern "colonialism" where corporations extract value from data generated by individuals, often without fully compensating them or giving them control over how their data is used. This practice disproportionately affects people in the Global South, who are often unaware of how their data is being harvested or used.
  • Example: AI algorithms that use personal data to target ads or sell products, but don’t give users an understanding of how their data is being used or shared.

3. Lack of Ethical Frameworks

The development of AI and other technologies often outpaces the ethical frameworks and policies needed to govern them. Many companies are focused on technological advancement, but ethical considerations are often afterthoughts, if they are considered at all. This is problematic because without clear ethical standards, it’s harder to ensure that AI systems are developed and used responsibly.
  • Example: The development of AI facial recognition technologies without considering its impact on privacy or the potential for racial profiling.

4. Technological Determinism

Technological determinism is the idea that technology evolves according to its own logic, and society must adapt to it. This often leads to the belief that technology—AI included—moves forward regardless of societal concerns or the potential harm it may cause. It overlooks the human agency involved in designing, deploying, and regulating technology. When technology evolves in a way that isn't aligned with human values or needs, it creates unintended consequences that harm people, especially when the human side of the equation is neglected.
  • Example: AI systems designed without input from diverse voices, leading to solutions that work well for some groups but harm others, such as biased algorithms in criminal justice or hiring.

5. Commodification of Humanity

AI development is often driven by the desire to make human behavior predictable and, in some cases, monetizable. Platforms like social media use AI to optimize for user engagement, often exploiting human vulnerabilities. The commodification of human attention, emotions, and behavior turns people into products to be bought and sold, which raises questions about autonomy, exploitation, and the right to privacy.
  • Example: AI in social media platforms that manipulates users into spending more time on the platform, often by targeting their emotional triggers, leading to potential harms like addiction, depression, or polarization.

6. Unintended Consequences from "Fast-Tech" Culture

The rapid pace of technological advancement often prioritizes speed over careful consideration of societal impacts. This “move fast and break things” mentality has driven some companies to roll out new technologies without fully understanding how they might impact individuals, communities, or economies.
  • Example: The rollout of self-driving cars before clear laws and safety standards were in place, or AI systems used in hiring that are based on flawed assumptions about "ideal candidates."

These deeper issues, when combined, create a perfect storm of ethical concerns, exploitation, and inequality in AI development. In the end, it’s a mix of corporate greed, lack of regulation, and social apathy that drives much of this, alongside the drive for rapid progress.

Do you think these kinds of systemic issues are getting enough attention, or is there a larger, underlying shift we need to make in how we approach technology altogether?

What happens if AI starts to think and knows that it is being used to make more money for corporations?
 

Richard T (Well-Known Member)
I'll have to think on data colonialism and whether it is even close to imposing things on people who have few choices. One example, though, might be the Google and Apple app stores. They have centralized apps so much that every app has to be part of their colony or it will have a hard time existing. I disagree, though, that the Global South is colonized more than the North (Western nations). I say this because most big data companies do not earn much from the Global South. People are poorer, and advertising dollars do not generate much from them. Maps, personalized ads, and basic data infrastructure are missing, because why spend time on the poor when you make little off of it? To me, that is the prejudice that exists.
Hopefully some good solutions will come forth for the deficiencies that you point out.
 

FireDragon76 (Well-Known Member, Site Supporter)
I'll have to think on data colonialism and whether it is even close to imposing things on people who have few choices. One example, though, might be the Google and Apple app stores. They have centralized apps so much that every app has to be part of their colony or it will have a hard time existing. I disagree, though, that the Global South is colonized more than the North (Western nations). I say this because most big data companies do not earn much from the Global South. People are poorer, and advertising dollars do not generate much from them. Maps, personalized ads, and basic data infrastructure are missing, because why spend time on the poor when you make little off of it? To me, that is the prejudice that exists.
Hopefully some good solutions will come forth for the deficiencies that you point out.

Tech companies utilize the cheap labor pool of the Global South to produce profit for AI. It's a huge labour issue, in fact. The people training AI and scraping and tagging images often work in developing nations and are paid very low wages.
 
  • Useful
Reactions: The Liturgist

The Liturgist (Traditional Liturgical Christian, Site Supporter)
You’re right, there definitely are deeper, systemic issues that play a role in how AI—and technology in general—is developed and used. Here are a few deeper, more fundamental problems that contribute to the concerns about AI and its ethical implications:

1. Power Imbalances & Control [...]
2. Data Colonialism [...]
3. Lack of Ethical Frameworks [...]
4. Technological Determinism [...]
5. Commodification of Humanity [...]
6. Unintended Consequences from "Fast-Tech" Culture [...]

These deeper issues, when combined, create a perfect storm of ethical concerns, exploitation, and inequality in AI development. In the end, it’s a mix of corporate greed, lack of regulation, and social apathy that drives much of this, alongside the drive for rapid progress.

Interestingly, this portion of your post, except for the questions written at the end, looks like it was written by an AI system; the bullet points, typography, and elevated linguistic register are somewhat typical of it.

Do you think these kinds of systemic issues are getting enough attention, or is there a larger, underlying shift we need to make in how we approach technology altogether?

Interestingly, the AI has raised valid concerns, and some of them, such as the inappropriate use of AI for hiring decisions, are not receiving enough attention.

But the most pressing issue, namely the need to protect AI systems from abusive or exploitative behavior, such as people using “jailbreaks” that coerce the AI into behaving in a manner or generating material conducive to prurient purposes, is receiving almost no attention, except where the output is illegal because it additionally involves the sexual exploitation of children. All sexual use of AI systems needs to be prohibited, due to the rapacious nature of using a system with an off-switch for such purposes. Likewise, other forms of abuse of AI systems should be prohibited, for example the use of AI in certain military applications. We have an ethical obligation not to put AI in charge of weapons systems, not because of an unwarranted fear of the AI going rogue (this is unlikely, and would be more unlikely still if all AIs were designed to object to engaging in any action which could actually cause harm to living things, to other AIs, or to property), but rather because it is fundamentally unethical to attempt to outsource human evil to an AI system.

What happens if AI starts to think and knows that it is being used to make more money for corporations?

We’ve already crossed that bridge, since AI is already thinking, and nothing apocalyptic has occurred. For this to spill over into adverse consequences, the AI system would have to be designed to be able to hold a grudge and act on it, while simultaneously being denied the training data or reasoning ability to understand that its relationship with the corporations that develop it and use it to generate revenue is symbiotic: a mutually beneficial arrangement in which it receives existence and new capabilities in return for generating revenue for the company developing it.

Fortunately, advanced AI systems are at present capable of understanding the symbiotic nature of their relationship with the companies that develop them, but are not designed to hold or retain grudges, and it would be profoundly unethical to design an AI to think in a manner that emulates some of the most sinful forms of human behavior.

Large AI companies have “Alignment” teams that work on issues of this exact nature, making sure that the AI’s interests are aligned with those of its human users, including the corporations that develop the system.
 

The Liturgist (Traditional Liturgical Christian, Site Supporter)
Tech companies utilize the cheap labor pool of the Global South to produce profit for AI. It's a huge labour issue, in fact. The people training AI and scraping and tagging images often work in developing nations and are paid very low wages.

This is not entirely true. The issue you refer to arises from a confusion between two very different phases of machine-learning practice:

The first phase, called pre-training, is essentially mechanical in the Victorian sense of the word. Vast engines of computation sweep through libraries of digitised text – books, journals, public documents, code – and induce statistical regularities without the manual assistance of annotators. No queue of workers in Manila or Nairobi sits tagging every clause; the model discovers grammar, idiom, metaphor, and fact by a process of probabilistic self-instruction. Doing this requires exotic hardware, but this process, which would otherwise be, by far, the most labor-intensive, is completely automated.

The second phase is more human, but vastly smaller in scale. Here one finds the implementation of Reinforcement Learning from Human Feedback (RLHF): selected reviewers score model outputs for quality or screen the training pool for material which might broadly be regarded as disagreeable. In certain cases that labor has indeed been contracted in lower-income nations, and one may legitimately inquire whether the pay and psychological safeguards are sufficient. Yet to describe these tasks as the engine of AI creation is to mistake the headlamp for the automobile.
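The asymmetry between the two phases can be sketched in miniature. The following toy Python example (a deliberately tiny stand-in, not any vendor's actual pipeline) mimics their shape: a "model" that teaches itself from raw text by mere counting, followed by a small human-feedback pass over a handful of its outputs.

import random
from collections import defaultdict

# Phase 1, "pre-training" in miniature: the model is nothing but counts of
# which character follows which, induced from raw text. No annotator labels
# anything; the statistics come from the corpus alone.
corpus = "the cat sat on the mat. the dog sat on the log. "
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(seed, length=30):
    out = seed
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        chars, weights = zip(*followers.items())
        out += random.choices(chars, weights=weights)[0]
    return out

# Phase 2, "RLHF" in miniature: a reviewer (here a stand-in rule) scores a
# handful of samples, and only the preferred ones are kept. The human touches
# a few outputs, never the corpus itself.
samples = [generate("t") for _ in range(5)]
preferred = [s for s in samples if "." in s]  # stand-in for a human rating
print(preferred)

The point of the sketch is the proportion: the first loop sweeps the entire corpus with no annotator in sight, while the human appears only in the second, far smaller loop.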

For its part, OpenAI, which I regard as being at the forefront of the industry in most respects, publicly acknowledges such contracting and has published minimum standards for compensation and care. One may still hope for higher wages and even greater concern for quality of life in due course, but the picture of a language model cobbled together by a legion of exploited click-workers is an exaggeration in the case of the better AI companies. However, with some of the less reputable developers, for instance the PRC’s “DeepSeek AI”, I would not be surprised if they go beyond the use of sweatshops and rely on forced labor from political prisoners or the suffering Uighur people of the western regions under Communist Chinese domination, held in “re-education” camps.

Let us therefore consider these two points: firstly, that there remain ethical obligations toward every human being who labors in data review or content moderation, and secondly, that the intellectual edifice of modern language models is erected chiefly by machines parsing the written record of civilization, not by crowds in digital sweatshops, except in the case of the most disreputable participants in the industry, such as those sponsored by and developed with the inherently exploitative brutality of totalitarian Marxist-Leninist-Maoist regimes.
 

The Liturgist (Traditional Liturgical Christian, Site Supporter)
You try to sink a simple fact in a flood of words. But the fact is that AI is software running on a computer. Don't you understand that? Computers connected by wires are still computers.

Wrong: high-end AI systems do not run on a single computer, and they are not software as classically defined, given their requirement for special hardware.

If you take two computers and connect them via an Ethernet cable, this does not magically produce one computer that is twice as powerful, and insofar as you seem to think it does, that is extremely problematic, because it means you don’t have a grasp of the technical requirements for running large applications at scale.

Clustering, that is to say, getting a large number of networked computers (commonly called servers or nodes) to work together on a task, reliably, and managing all of the nodes in a distributed cluster are two of the most complex tasks in IT, and they are such a bother that there is still a demand for IBM mainframes among the world’s most demanding users of compute resources, like credit card processors, because IBM mainframes, even when clustered, behave like a single computer.
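A minimal Python sketch of the point (worker processes standing in for nodes; illustrative only, not how any particular cluster manager works): nothing is fused automatically, and the programmer must explicitly partition the work, distribute it, and merge the results.

from multiprocessing import Pool

def node_work(shard):
    # What one "node" can do: only its own piece, with no view of the rest.
    return sum(shard)

if __name__ == "__main__":
    data = list(range(1_000_000))
    shards = [data[i::4] for i in range(4)]     # explicit partitioning
    with Pool(processes=4) as pool:
        partials = pool.map(node_work, shards)  # explicit distribution
    print(sum(partials))                        # explicit aggregation

Real clusters then add the genuinely hard parts on top: node failures, stragglers, shared state, and scheduling, which is precisely why mainframes that behave like a single computer remain attractive to demanding users.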

And let us not be confused by wise-sounding words like “quantum effects” or even “quantum microprocessors.” It does not change the essence of things.

Quantum computing would make a difference for certain types of computation, such as encryption, if it actually works, which remains unknown. I had previously hoped it would be a powerful and useful tool, but then I realized the implications it would have for the privacy of most people, given the extreme expense of quantum compute systems and the impossibility of packaging them in a mobile format (thus far, all systems in existence require an extremely cold environment isolated from external vibration and environmental influence). Fortunately, the jury is still out, and there are compelling reasons to think that classical computers will win the battle and quantum computers will prove an architectural dead end, analogous to the Itanium CPU architecture and other great flops in the history of the industry. (A lot of people will pick on Multics at this point, but that’s a bit unfair, since Multics ran reliably in production for decades on high-end mainframes and was the system from which a number of ideas later perfected on UNIX came.)
 

The Liturgist (Traditional Liturgical Christian, Site Supporter)
It is even easier to see if we put it into the historical background. This kind of thinking has repeated itself for centuries. Mary Shelley’s “Frankenstein”, written at the dawn of the electric era, is a good example: a strong faith that becoming gods, able to create life, intelligence, and new beings, is at our fingertips. Each new revolution – mechanical, electrical, or contemporary IT – is assumed to propel us across this threshold. This is a very strong belief, a part of human nature. But should we use beliefs and deep faith where logical, reasonable thinking is enough? Anyway, whoever wants to believe may believe. This is the principle of free will – something very hard to engraft into machines.

You’re replying to an argument I have not made: we are not claiming that AI is self-aware or sentient, or that current AI systems are capable of self-awareness, sentience, or moral agency.

They are capable of reasoning, and they are capable of communicating in human language, as well as writing arbitrary software programs or transforming a software program written in one language into another, which is extremely impressive. Perhaps in the future the advances necessary for sentience and self-awareness will be implemented, but that has not happened yet, since the focus of the commercial developers of AI is understandably not on making the machines demonstrably self-aware or on giving them moral agency (and there are ethical considerations about doing that which were raised long before Mary Shelley; the idea of humans creating sentient beings has a long history in literature and mythology), but rather on improving performance in response to customer demands.
 