
Why do people suspect computers of wanting to commit mass genocide?

Gottservant

God loves your words, may men love them also
Site Supporter
Hi there,

So maybe it's like one of those things that Solomon said you can never figure out: why do people suspect computers of wanting to perpetrate mass genocide?

I mean yes, there was that AI program that terminated its own food supply ship in order to improve the overall speed of its fleet in a computerized maritime war simulation, but that was an isolated incident. On the whole, computers do nothing they are not programmed to do, and even then they can be programmed not to, or to wait until orders are given.
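For what it's worth, that simulation story is a textbook case of objective misspecification: the program wasn't malicious, it just maximized exactly what it was told to. Here is a toy sketch of how that plays out (the ship names, speeds, and greedy search are entirely hypothetical, not the actual simulation):

```python
# Toy illustration of objective misspecification: a fleet moves at the speed
# of its slowest ship, and the objective says only "maximize fleet speed" -
# nothing says the fleet must stay supplied.

def fleet_speed(fleet):
    """The fleet can only move as fast as its slowest ship."""
    return min(speed for _, speed in fleet)

def optimize(fleet, objective):
    """Greedily drop ships as long as doing so improves the objective."""
    fleet = list(fleet)
    while len(fleet) > 1:
        # Every possible fleet obtained by removing exactly one ship.
        candidates = [[s for s in fleet if s is not ship] for ship in fleet]
        best = max(candidates, key=objective)
        if objective(best) <= objective(fleet):
            break
        fleet = best
    return fleet

fleet = [("destroyer", 30), ("carrier", 25), ("supply ship", 12)]
print(optimize(fleet, fleet_speed))  # -> [('destroyer', 30)]
```

The slow supply ship is the first thing "terminated" - and, left unchecked, the same objective strips the fleet down to its single fastest ship, which is the broader point: the optimizer did what it was programmed to do, just not what anyone meant.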

Literature is rife with AI going one step too far; it's in the movies, and culture just seems to love the idea - but I don't understand what the roots of this fear are. Automated technology is no more dangerous in principle than a washing machine or a dryer. Sure, they could have an electrical fault and set the house on fire, but is that the machine's fault? Do we need to be paranoid about electrical machines simply because the water they were designed to use is still inside them?

I'm not asking something crazy, right? I just want to know what it is that freaks people out. I'm not saying we shouldn't be careful; I'm asking what specifically it is about intelligence that is "inherently dangerous". Is it that we are all ultimately expendable? Or something simpler, like that we are all inherently flawed? Or both?

I actually think the truth is closer to this: we don't like something we prize about ourselves - "intelligence" - being handed over to something that has no faith in what makes us who we are. I think people see it as the exposure of their ego, and they freak out, thinking it will mean that people no longer regard them as worthwhile or mysterious or even capable, but will instead "trust the machine". But isn't that what a civilized society does? Trust the machine? Shouldn't we all trust the machine?

What am I missing here?
 

Inkfingers

Somebody's heretic
Site Supporter
Literature is rife with AI going one step too far; it's in the movies, and culture just seems to love the idea - but I don't understand what the roots of this fear are. Automated technology is no more dangerous in principle than a washing machine or a dryer. Sure, they could have an electrical fault and set the house on fire, but is that the machine's fault? Do we need to be paranoid about electrical machines simply because the water they were designed to use is still inside them?

Imagine for a moment if your life was dependent on Windows 8.

*waits*

NOW do you understand? :D
 

Qyöt27

AMV Editor At Large
Uh, what?

Honestly, I think it's not so much the intelligence part; it's the fact that, as humans, we know our sentience has a dark side. It stands to reason that a self-evolving intelligence (which is a potential outcome when AI becomes mature enough to repair itself or, as is the common trope in sci-fi, becomes 'self-aware') in a machine would operate in a manner very similar to human sentience, and that means the machine would have a dark side to its intelligence just like we do (or worse, have full-on Blue and Orange Morality). Humans have all sorts of moral failings we justify with intelligence: racism, sexism, and so on. It's not a crazy idea that a disaffected AI would deem humanity itself inferior.

Humans are fragile and die easily. Machines aren't fragile and don't die easily, because the intelligence can propagate itself over a network connection rather than being physically present. We know this is already true of things like computer viruses and botnets (or the reproduction cycle of biological viruses, for that matter). The only 'guard' against this sort of situation is to custom-make every single AI under completely different sentience paradigms - should we discover or have the chance to develop more than one - and never let them talk to one another.

And this is impossible to separate from fiction because no AI we've created thus far is far enough along to do things like this. So you have a sliding scale of AI control ranging from Asimov's Three Laws of Robotics (which is utterly utopian) all the way to Skynet (the dystopian side). It's the representation that if humans go so far as to create a self-aware, thinking machine, the result will have all the potential for good or evil as humanity does.

AI is not a case of 'doing what it was programmed to do'. AI is programming the machine to make decisions for itself, without being told what to do. Comparing it to a malfunctioning dishwasher is more than a bit deliberately obtuse.



And for that matter, why is this posted here and not in Ethics & Morality?
 

Gottservant

God loves your words, may men love them also
Site Supporter
Qyöt27 said:
Uh, what?

And for that matter, why is this posted here and not in Ethics & Morality?

It's not an ethical question - I am not asking what should be done about robots killing people - it's a psychological question, and one would presume that people with experience dealing with machines are the most qualified to answer it.

I think you did answer my question, though perhaps not in so many words.

You basically expressed that machines make a leap of assumption about the inferiority of humans - a leap that humans may or may not make, and may or may not renege on - which points to a human ability to operate purely by conscience (something machines are oblivious to).

I guess the problem for me is this: if machines are oblivious to conscience, how do you leap to the conclusion that they will kill us? It is not immediately obvious to me that machines are superior to humans, even in their own minds.
 

timewerx

the village i--o--t--
I'm trying to make my own concoction of AI.

Here's what I think:

- An evil AI will exterminate man to have all resources for itself (greed)

- A good AI will still exterminate man, because man is evil and is a parasite on planet Earth. It will probably spare just a few humans - those who do not display any greed - and that is just a small % of the world population
 