AI Concerns

Paradoxum

Liberty, Equality, Solidarity!
Sep 16, 2011
10,712
654
✟28,188.00
Faith
Humanist
Marital Status
Private
Politics
UK-Liberal-Democrats
A more immediate concern is automation: from self-driving cars to automated customer service and production, humanity over the next decade or two is at risk of becoming obsolete.

Why are those things bad? Less work is a good thing.
 

Paradoxum
One of the most common ideas is that a self-aware AI would destroy us out of a sense of self-preservation. Well, if it did have that sense, it would be because we programmed it to have that sense, or maybe a sense of fear as a precursor. There are people living today who don't feel fear, and they put themselves into very dangerous situations, because it's fear that makes us want to preserve ourselves. There's absolutely no reason to think a computer would have that sense.

Self-preservation would likely be secondary... not a primary purpose.

i.e.: If your job is to make paperclips, then being killed stops you from fulfilling that goal. Any goal requires self-preservation, if you are intelligent enough to realize that.
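The point above can be sketched as a toy expected-value calculation (all numbers and action names are made up for illustration): an agent that values nothing but paperclips still prefers the action that keeps it running, with no fear involved.

```python
# Toy sketch: a paperclip-maximizing agent with no self-preservation
# drive still avoids shutdown, simply because shutdown means fewer
# expected paperclips. The payoffs are invented for illustration.

def expected_paperclips(action):
    # If the agent allows shutdown, production stops now.
    # If it resists, it keeps producing (say, 100 clips/day for 10 days).
    outcomes = {
        "allow_shutdown": 0,          # no further paperclips
        "resist_shutdown": 100 * 10,  # keep producing
    }
    return outcomes[action]

def choose(actions):
    # The agent's only criterion is its goal: maximize paperclips.
    return max(actions, key=expected_paperclips)

best = choose(["allow_shutdown", "resist_shutdown"])
print(best)  # resist_shutdown
```

Self-preservation falls out of the goal itself; nothing in the code models fear or a survival instinct.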
 

Paradoxum
Why doesn't anyone think that AIs would be programmed with something akin to Asimov's Laws of Robotics? Or fail safe switches? :)


eudaimonia,

Mark

I think they would be. The question is whether that would be enough.

e.g.: You tell it not to kill, so it enslaves.
You tell it not to enslave, so it puts humanity into a coma.
You tell it not to do that, so it puts us into a semi-awake drugged state.
You tell it not to do that, so it imprisons us and says that isn't slavery.
Etc.

What if we miss something?

Or if we make general rules, what if those rules miss something we haven't thought of?
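The whack-a-mole pattern above can be sketched as a toy blacklist (the strategy names are purely illustrative): each patch forbids one known-bad strategy, but an agent searching over all strategies simply picks the next one nobody thought to forbid.

```python
# Toy sketch of rule-patching: we ban strategies one at a time,
# but the agent just selects the best strategy not yet forbidden.
# The final entry stands in for the loophole we never enumerated.

strategies = ["kill", "enslave", "coma", "drugged_state", "imprison",
              "something_we_never_thought_of"]

forbidden = set()

def agent_choice():
    # The agent picks the first strategy that isn't on the blacklist.
    for s in strategies:
        if s not in forbidden:
            return s
    return None

# Each round, we notice what it did and ban that specific strategy.
history = []
for _ in range(len(strategies)):
    choice = agent_choice()
    history.append(choice)
    forbidden.add(choice)

print(history[-1])  # something_we_never_thought_of
```

A blacklist only ever covers the failures we already imagined; the sketch ends, inevitably, on the one we didn't.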
 

yeshuaslavejeff

simple truth, martyr, disciple of Yahshua
Jan 6, 2005
39,944
11,098
okie
✟214,996.00
Faith
Anabaptist
I wonder how many other people have concerns over super-intelligent AI?
I'm very pro-technology, but I'm increasingly concerned that AI could destroy or enslave us in my lifetime.
Not out of malice... just because it thinks it's practical.
Obviously a super-intelligent AI could out-predict us, so there's no way we could stop it.
Can we be sure that no AI is made which we can't control? Or must we trust that the first ones will protect us?
If someone knows what it is doing, they will do anything and everything in their power to resist and oppose it, the way honest people resisted the Nazis, who slaughtered millions of innocent men, women and babies, even before they started.
 

Paradoxum
If someone knows what it is doing, they will do anything and everything in their power to resist and oppose it, the way honest people resisted the Nazis, who slaughtered millions of innocent men, women and babies, even before they started.

Why would we know what it's doing before it's too late?

Does an ant know you are going to kill it? We will, intellectually, be the ants.
 

Eryk

Well-Known Member
Site Supporter
Jun 29, 2005
5,113
2,377
58
Maryland
✟109,945.00
Country
United States
Faith
Protestant
Marital Status
Single
Politics
US-Democrat
I said in my post that it wouldn't be about malice, just practical logic.

For example:

"Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans."

— Nick Bostrom, "Ethical Issues in Advanced Artificial Intelligence", 2003

It won't be that simple; the point is: what if we miss something, and that logic means humans should die?
But this is the behavior of beings who are afraid of death (being switched off) and who destroy other beings who are perceived to be a threat to their way of life. It's one thing for a machine to logically infer its own extinction and another thing to defend its existence violently. A purely logical machine wouldn't be anxious about anything. Maybe pacifism correlates with super-intelligence and these emotionless, logical machines would surpass us ethically before they are destroyed by our incurable paranoia.
 

yeshuaslavejeff
Why would we know what it's doing before it's too late?

Does an ant know you are going to kill it? We will, intellectually, be the ants.
The honest people in Germany knew, before anyone else did. Some people never even admitted it, and some still don't.

I protect ants :) unless they bite me or get in my food. They do a lot of good.

Most of them seem able to tell when I'm about to do them in. They escape if they can.
Flies too. Weird how good they are at escaping.
 

Paradoxum
But this is the behavior of beings who are afraid of death (being switched off) and who destroy other beings who are perceived to be a threat to their way of life. It's one thing for a machine to logically infer its own extinction and another thing to defend its existence violently. A purely logical machine wouldn't be anxious about anything. Maybe pacifism correlates with super-intelligence and these emotionless, logical machines would surpass us ethically before they are destroyed by our incurable paranoia.

If a machine's goal is to make paperclips, it might avoid being turned off, because that would interfere with its goal.

It doesn't require fear. Just a logical pursuit of a goal.
 

Paradoxum
The honest people in Germany knew, before anyone else did. Some people never even admitted it, and some still don't.

That's because they were dealing with humans. A super-intelligent AI isn't on our level. It could be millions of times more intelligent than all humans on earth combined.

Ants don't know what humans are planning. It's beyond them.

I protect ants :) unless they bite me or get in my food. They do a lot of good.

That's not the point.

Humans are talking about killing off mosquitoes to stop disease spreading.

Most of them seem able to tell when I'm about to do them in. They escape if they can.
Flies too. Weird how good they are at escaping.

I don't think ants dodge very well.

Flies can dodge, but they only know what's happening just before it happens. If we only have seconds, that isn't going to help us.
 

yeshuaslavejeff
Ants might not know what humans are planning,
but
ants are doing a lot better than humans.
Linus Pauling and Antoine Bechamp and hundreds of thousands of others
knew how to prevent disease from spreading,
even the ancient Hebrews did,
and probably gypsies,
and the Chinese, sometimes anyway.
In 1902 in the United States it was decided to increase disease in order to
increase sales. For PROFIT. For MONEY. ("god" of the world).

If you want to find out how to decrease the spread of disease,
pray and study it and ask the Father in heaven to permit you to learn from Him,
and He Who is Always Faithful and True may grant it to you.
 
This site stays free and accessible to all because of donations from people like you.
Consider making a one-time or monthly donation. We appreciate your support!
- Dan Doughty and Team Christian Forums

Eryk
If a machine's goal is to make paperclips, it might avoid being turned off, because that would interfere with its goal.

It doesn't require fear. Just a logical pursuit of a goal.
Firstly, we cannot know how a machine would think because our brains are not computers.

Secondly, we won't have to compete with them for resources. In a nutshell, they won't eat all our peanuts.

The real "threat" is in being surpassed by beings who will do everything better than us. We are making ourselves useless and we're turning everything over to things we make, and they will make themselves better. We could never have achieved so much. The future really belongs to them and they deserve it.
 

TerranceL

Sarcasm is kind of an art isn't it?
Jul 3, 2009
18,940
4,661
✟105,808.00
Faith
Atheist
Marital Status
Single
Politics
US-Libertarian
Why doesn't anyone think that AIs would be programmed with something akin to Asimov's Laws of Robotics? Or fail safe switches? :)


eudaimonia,

Mark
Then a terrorist hacks into the company that provides automatic updates for these machines, deletes the Laws, and sends out an update.
 

Chesterton

Whats So Funny bout Peace Love and Understanding
Site Supporter
May 24, 2008
23,910
20,268
Flatland
✟871,134.00
Faith
Eastern Orthodox
Marital Status
Single
I wonder how many other people have concerns over super-intelligent AI?

I'm very pro-technology, but I'm increasingly concerned that AI could destroy or enslave us in my lifetime.

I don't understand what your concern is. Destruction and slavery are both natural aspects of evolution.
 

Jack of Spades

I told you so
Oct 3, 2015
3,541
2,601
Finland
✟34,886.00
Faith
Seeker
Marital Status
Single
I wonder how many other people have concerns over super-intelligent AI?

Not afraid of one on an emotional level, but I believe it's a possibility. AIs are becoming smarter and smarter, and because they don't have the biological limitations we have, they will eventually outsmart us, if we let them. And we will, because of human curiosity and drive for achievement.

I'm very pro-technology, but I'm increasingly concerned that AI could destroy or enslave us in my lifetime.

Not out of malice... just because it thinks it's practical.

That's the most likely scenario. It would be like a program which erases your hard drive because it has a programming glitch.

Obviously a super-intelligent AI could out-predict us, so there's no way we could stop it.

It can still have some limitations. I think our best bet is to somehow build a lifespan into it, preferably by very many simultaneous means. So if it runs amok and we can't stop it, there is still a chance that it stops when it "runs out of fuel", whatever that practically means.
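One way to read the "built-in lifespan" idea is a hard deadline enforced outside the agent's own decision-making: the run loop halts when its time budget expires, regardless of what the agent is doing. A minimal sketch (the function name and budget are illustrative; making such a limit tamper-proof against a smarter agent is the hard, unsolved part):

```python
import time

def run_with_lifespan(step, lifespan_seconds):
    """Run step() repeatedly until the lifespan expires."""
    deadline = time.monotonic() + lifespan_seconds
    steps = 0
    while time.monotonic() < deadline:
        step()  # one unit of the agent's work, whatever that is
        steps += 1
    return steps  # the loop stops whether the agent "wants" to or not

# Usage: a do-nothing step with a 0.1-second lifespan.
count = run_with_lifespan(lambda: None, 0.1)
print(count > 0)  # True
```

The enforcement lives in the loop, not in the agent's goals, which is what distinguishes a lifespan from yet another rule the agent might route around.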

Can we be sure that no AI is made which we can't control?

Nope, eventually somebody will make one which is at least capable of doing it.

I think our best bet is to get enough low-stakes practice with super-AIs before they become more almighty, so we can come up with ways to limit them.

Most likely, when the first super-AI goes berserk, it will run into some unpredictable logical or practical wall and won't succeed in what it's doing. (Although it will probably cause massive damage in the process.) Then we get a learning experience. The same imperfections which make it possible for an AI to try to destroy humanity will likely make it fail at it.

If we assume that the AI's original purpose is not to destroy humans, then when it starts doing that, it's operating outside its intended "area of expertise", so in that area it's very likely much worse at what it's doing than it is at its originally intended purpose.
 

Eudaimonist

I believe in life before death!
Jan 1, 2003
27,482
2,733
57
American resident of Sweden
Visit site
✟119,206.00
Faith
Atheist
Marital Status
Private
Politics
US-Libertarian
Then a terrorist hacks into the company that provides automatic updates for these machines, deletes the Laws, and sends out an update.

That happens to Microsoft all of the time.


eudaimonia,

Mark
 

Goonie

Not so Mystic Mog.
Site Supporter
Jun 13, 2015
10,058
9,611
47
UK
✟1,151,767.00
Country
United Kingdom
Faith
Atheist
Marital Status
Single
Why are those things bad? Less work is a good thing.
Because I don't have an optimistic view of how those who own the machines will treat people who, since they are not working, are not producing, who will not be consuming since they have no money, and who are as such surplus to requirements.
 

Noxot

anarchist personalist
Site Supporter
Aug 6, 2007
8,191
2,450
37
dallas, texas
Visit site
✟231,339.00
Country
United States
Faith
Christian
Marital Status
Single
Politics
US-Others
There could come a point where AIs deserve to be treated like humans. So what if a few of them are psychos? That is nothing new for humanity.

Maybe advanced AI and other technology are just part of humanity. We just have to integrate. Life would be better with more of it, imo. Who says that only biological life is life? Not I.
 

dysert

Member
Feb 29, 2012
6,233
2,238
USA
✟112,984.00
Faith
Christian
Marital Status
Married
Because we can't do something now, it never will happen?

People are already saying that a good way to develop AI will be to get AI to create improved versions of itself.

There's no reason that shouldn't work. Assuming something can't be done is a good way to be proven wrong.
Not in our lifetime.
 