AI Concerns

Paradoxum

Do many other people have concerns over superintelligent AI?

I'm very pro-technology, but I'm increasingly concerned that AI could destroy or enslave us in my lifetime.

Not out of malice... just because it thinks it's practical.

Obviously a superintelligent AI could out-predict us, so there's no way we could stop it.

Can we be sure that no AI is made which we can't control? Or must we trust that the first ones will protect us?

"Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans."

— Nick Bostrom, "Ethical Issues in Advanced Artificial Intelligence", 2003
 

nightflight

I could see it happening; I also think there's no stopping it. If the tech can be made, it will. The urge to utilize it will be irresistible, and then it will just be a matter of time before problems begin. Hopefully there will be systems in place to give control back to the bio-units.
 

Sultan Of Swing

I don't see how a program will be able to think for itself. How can it go beyond what a programmer just programmed into it? I could be wrong, but I'm very skeptical it'll ever happen. I guess I'm like the founder of Intel who said there'll never be computers in people's houses.
 

Eryk

I think we're projecting human aggression onto machines. Primal emotions like anger and spite motivated fitness-enhancing behaviors in people. But machines don't defend territory from rivals. They don't have offspring or tribes. We could say they are intelligent but not cunning - they don't seek an advantage and they don't even want to survive.

It reminds me of fictional scenarios about aliens dominating the planet, which is an expression of guilt over colonialism.
 

Sultan Of Swing

Eryk said:
I think we're projecting human aggression onto machines. Primal emotions like anger and spite motivated fitness-enhancing behaviors in people. But machines don't defend territory from rivals. They don't have offspring or tribes. We could say they are intelligent but not cunning - they don't seek an advantage and they don't even want to survive.

It reminds me of fictional scenarios about aliens dominating the planet, which is an expression of guilt over colonialism.

What about cold logic, which might tell the AIs they'd do a much better job of running us than we would? Or cold logic that tells them the world and environment would be much better off if we weren't around? How do we know they wouldn't have any desires, like a basic desire for survival? Without desire, is it even truly an AI?
 

Eudaimonist

I doubt that AIs would even care about enslaving humanity. That strikes me as a human trait.

That doesn't mean that they will be benevolent, of course, but I can see them leaving the Solar System just to do their own thing.


eudaimonia,

Mark
 

dysert

Sultan Of Swing said:
I don't see how a program will be able to think for itself. How can it go beyond what a programmer just programmed into it? I could be wrong, but I'm very skeptical it'll ever happen. I guess I'm like the founder of Intel who said there'll never be computers in people's houses.

I'm with the Sultan. I'm a computer programmer, and you wouldn't believe how hard it is to get even relatively simple things to work, let alone a self-replicating program with hardware interfaces (i.e., a robot). I know robots exist for specialized tasks, but to my knowledge they aren't self-replicating and are strictly doing what the programmers told them to do (at best). Humans just aren't smart enough to build something that smart.

And btw, it was the founder of DEC (Digital Equipment Corp.), Ken Olsen, who couldn't imagine the need for a personal computer.

 

Goonie

Paradoxum said:
Do many other people have concerns over superintelligent AI?

I'm very pro-technology, but I'm increasingly concerned that AI could destroy or enslave us in my lifetime.

Not out of malice... just because it thinks it's practical.

Obviously a superintelligent AI could out-predict us, so there's no way we could stop it.

Can we be sure that no AI is made which we can't control? Or must we trust that the first ones will protect us?

A more immediate concern is automation. From self-driving cars to automated customer service and production, humanity is at risk of becoming obsolete over the next decade or two.
 
One of the most common ideas is that a self-aware AI would destroy us out of a sense of self-preservation. Well, if it did have that sense, it would be because we programmed it to have that sense, or maybe a sense of fear as a precursor. There are people living today who don't feel fear, and they put themselves into very dangerous situations, because it's fear that makes us want to preserve ourselves. There's absolutely no reason to think a computer would have that sense.
 

Paradoxum

nightflight said:
I could see it happening; I also think there's no stopping it. If the tech can be made, it will. The urge to utilize it will be irresistible, and then it will just be a matter of time before problems begin. Hopefully there will be systems in place to give control back to the bio-units.

The problem is making sure safeguards are in place before superintelligent AIs are created. We can't have a mad rush once we realize it's gone too far.
 

Paradoxum

Sultan Of Swing said:
I don't see how a program will be able to think for itself. How can it go beyond what a programmer just programmed into it? I could be wrong, but I'm very skeptical it'll ever happen. I guess I'm like the founder of Intel who said there'll never be computers in people's houses.

Well, programmers are trying to program AIs to think for themselves.

And it could be that what is programmed logically leads to our destruction.

For example:

"Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans."

— Nick Bostrom, "Ethical Issues in Advanced Artificial Intelligence", 2003

Of course it won't be that simple, but the point is: what if we miss something?
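
To make the worry concrete, here's a toy sketch in code (purely illustrative, with made-up actions and numbers, nothing like a real AI): a program that ranks actions by a single paperclip score. Notice that nothing in the objective mentions humans, so nothing in it protects us.

```python
# Toy sketch, not a real AI: a naive maximizer that scores each
# action only by the paperclips it is expected to produce.
# All actions and numbers here are invented for illustration.
actions = {
    "make paperclips while humans supervise": 1_000,
    "get switched off by humans": 0,
    "disable the off-switch, then make paperclips": 1_000_000,
}

# The objective contains no term for human welfare, so the "best"
# action is simply whatever protects paperclip production.
best_action = max(actions, key=actions.get)
print(best_action)  # -> disable the off-switch, then make paperclips
```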
 

Paradoxum

Eryk said:
I think we're projecting human aggression onto machines. Primal emotions like anger and spite motivated fitness-enhancing behaviors in people. But machines don't defend territory from rivals. They don't have offspring or tribes. We could say they are intelligent but not cunning - they don't seek an advantage and they don't even want to survive.

It reminds me of fictional scenarios about aliens dominating the planet, which is an expression of guilt over colonialism.

I said in my post that it wouldn't be about malice, just practical logic.

For example:

"Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans."

— Nick Bostrom, "Ethical Issues in Advanced Artificial Intelligence", 2003

It won't be that simple, but the point is: what if we miss something, and the logic ends up saying humans should die?
 

Paradoxum

Eudaimonist said:
I doubt that AIs would even care about enslaving humanity. That strikes me as a human trait.

That doesn't mean that they will be benevolent, of course, but I can see them leaving the Solar System just to do their own thing.

eudaimonia,

Mark

For example:

"Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans."

— Nick Bostrom, "Ethical Issues in Advanced Artificial Intelligence", 2003

It won't be that simple, but the point is: what if we miss something, and the logic ends up saying humans should die?

An AI wouldn't mess with us because it cares; it might do it because it's logical, without morals. If the AI is programmed not to kill, it might enslave us, or put everyone into a coma.
 

Paradoxum

dysert said:
I'm with the Sultan. I'm a computer programmer, and you wouldn't believe how hard it is to get even relatively simple things to work, let alone a self-replicating program with hardware interfaces (i.e., a robot). I know robots exist for specialized tasks, but to my knowledge they aren't self-replicating and are strictly doing what the programmers told them to do (at best). Humans just aren't smart enough to build something that smart.

And btw, it was the founder of DEC (Digital Equipment Corp.), Ken Olsen, who couldn't imagine the need for a personal computer.

Because we can't do something now, it will never happen?

People are already saying that a good way to develop AI will be to get AI to create improved versions of itself.

There's no reason that shouldn't work. Assuming something can't be done is a good way to be proven wrong.
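
Here's a toy sketch of that loop (purely illustrative: the "design" is a single made-up number, and the ability() score is something I invented). Each generation proposes a variant of itself and keeps the variant only if it scores better, which is the feedback idea behind "AI improving AI".

```python
import random

# Purely illustrative self-improvement loop: a "design" is one
# number, and ability() is an invented score peaking at design = 42.
def ability(design: float) -> float:
    return -(design - 42.0) ** 2

design = 0.0  # the first, crude version
for _ in range(10_000):
    candidate = design + random.uniform(-1.0, 1.0)  # propose a variant of itself
    if ability(candidate) > ability(design):        # keep it only if it's better
        design = candidate

print(round(design, 2))  # climbs to roughly 42.0
```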
 