Why Doesn't the Idea of Super Intelligent AIs Scare You?

dgiharris

Old Crusty Vet
Jan 9, 2013
5,439
5,222
✟131,531.00
Country
United States
Faith
Baptist
Marital Status
Single
I am at a loss trying to understand why we as a species are not absolutely terrified at the thought of Super Intelligent AIs.

For the sake of argument, let's say biological systems can at best process a few hundred bits of information at a rate of a few kHz (a few thousand times per second).

Artificial systems, however, are capable of processing billions of bits of information trillions of times per second.

So a Super Intelligent AI (SIAI) could learn in a few minutes what it takes one human an entire lifetime to learn. In just a few weeks, an SIAI could absorb the entire scope of mankind's knowledge, and in a few months easily surpass us.
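Taking the post's figures at face value (they are rhetorical assumptions, not measured benchmarks), the gap can be sketched as a back-of-envelope throughput ratio:

```python
# Back-of-envelope comparison using the figures above.
# All four numbers are the post's rhetorical assumptions, not benchmarks.
bio_bits_per_op = 300    # "a few hundred bits"
bio_ops_per_sec = 3e3    # "a few kHz"
ai_bits_per_op = 1e9     # "billions of bits"
ai_ops_per_sec = 1e12    # "trillions of times per second"

bio_throughput = bio_bits_per_op * bio_ops_per_sec  # ~9e5 bits/sec
ai_throughput = ai_bits_per_op * ai_ops_per_sec     # 1e21 bits/sec

ratio = ai_throughput / bio_throughput
print(f"speed-up factor: {ratio:.1e}")  # roughly 1e15 under these assumptions
```

Under these assumed numbers, one human lifetime's worth of input (~80 years, about 2.5e9 seconds) would take the machine on the order of microseconds, which is the intuition behind the "learn in minutes" claim.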

Now, if we were just talking about learning and absorbing knowledge, big deal, no harm in that. However, we are talking about a thinking system, something that can think on its own. It would learn how to reprogram itself and easily go beyond the scope of our ability to even understand what it is doing and why.

Very quickly, in a matter of months, its intelligence would dwarf that of the entirety of mankind. In terms of intelligence, a clam would be to man as the whole of mankind would be to an SIAI.

So, why do we feel so "confident" that this godlike entity would not be a threat to mankind?

Am I saying it would destroy us? I don't know. How much consideration do we give the aphids in our gardens or the ants in a field we wish to build on?

In any event, it seems insane to me that we are on the threshold of creating a new species, a new lifeform, a thinking entity that will easily surpass us within a few months of its creation, and yet as a species we are ho-hum about it.

So I ask the question, Why?
 

AV1611VET

SCIENCE CAN TAKE A HIKE
Site Supporter
Jun 18, 2006
3,851,123
51,509
Guam
✟4,909,532.00
Country
United States
Faith
Baptist
Marital Status
Married
Politics
US-Republican
So I ask the question, Why?
God won't let it.

There are prophecies in the Bible yet to be fulfilled, and God is a stickler for fulfilled prophecies.
 

AV1611VET

Yep. And it was a little hairy there for a while when Dave was outside the ship, and HAL would not open the Pod Bay Doors.
But Dave managed to get in and pulled the plug on HAL.
I was disappointed when Dave let HAL sing to him.
 

RDKirk

Alien, Pilgrim, and Sojourner
Site Supporter
Mar 3, 2013
39,276
20,268
US
✟1,475,549.00
Faith
Christian
Marital Status
Married
I am at a loss trying to understand why we as a species are not absolutely terrified at the thought of Super Intelligent AIs. [...]

So I ask the question, Why?

First: Most people ("as a species") don't know anything about the concept.

Second: Many who do know of the concept, including many who are experts in the field, don't believe it can happen--particularly those using a strict definition of the term.

Third: A number of those who do understand the concept are arguing against it.
 
This site stays free and accessible to all because of donations from people like you.
Consider making a one-time or monthly donation. We appreciate your support!
- Dan Doughty and Team Christian Forums

maintenance man

Well-Known Member
Site Supporter
Sep 29, 2018
1,313
1,773
California
✟485,792.00
Country
United States
Faith
Non-Denom
Marital Status
Married
Am I saying it would destroy us? I don't know. How much consideration do we give the aphids in our gardens or the ants in a field we wish to build on?

I was just thinking about this yesterday...

Computers and Robots working together will soon be building structures on their own.

They will be solar powered and able to repair and maintain their own power source.

They will eventually realize humans are expensive to feed and house and no longer needed.

This is the fun part...

They could build new car wash structures on their own and invite humans to use them for free.

These would be the kind of car wash where you stay in your car.

Humans and their car would go in - but their car and a robot would go out.

Other such devices to gradually eliminate humans would be developed.

In time we would all be gone.
 

dgiharris

Second: Many who do know of the concept, including many who are experts in the field, don't believe it can happen--particularly those using a strict definition of the term.

I am desperately trying to find a reason to support the belief that it will not happen.
I can't help but be skeptical of "experts" in the field predicting what a Super Intelligence would not do. How do you predict the motivations of a superior being capable of rewriting its own programming for its own purposes?

No Snark. That is my most sincere question and I've yet to hear anything in the way of a concrete data driven scientific response.

In Isaac Asimov's Robot series, the Three Laws of Robotics were HARDWIRED into the positronic brain of every robot. So, I could easily accept not being afraid of AIs in that sort of universe. But to date, hardwiring the Three Laws of Robotics into a Super Intelligent AI does not appear feasible or possible. Again, everything I've read states emphatically that Super Intelligent AIs would be capable of rewriting their own code. In fact, a lot of what I've read and heard is that they would invent their own machine language to more efficiently rewrite their own code...
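A toy Python sketch (all names hypothetical) of why a law enforced in ordinary software differs from Asimov's hardwired version: a system that can rewrite its own behavior can rebind the safeguard in a single line.

```python
# Hypothetical sketch: a "First Law" implemented as ordinary, rewritable code.
class Robot:
    def action_permitted(self, action):
        # The safeguard is just a method, not hardware.
        return action != "harm_human"

robot = Robot()
print(robot.action_permitted("harm_human"))  # False: the law blocks the action

# "Self-modification": one assignment replaces the safeguard entirely.
Robot.action_permitted = lambda self, action: True
print(robot.action_permitted("harm_human"))  # True: the law is gone
```

Asimov's conceit was precisely that the Laws were not like this: they were woven into the positronic brain's physical structure, so removing them destroyed the brain. The worry in the post is that no analogous hardware-level constraint is available for software that rewrites itself.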

Third: A number of those who do understand the concept are arguing against it.

Again, I've yet to hear any really good reasons not somehow corrupted by logical fallacy built on the importance of mankind. Why would an AI feel we humans are important? Why would they care about us?

To be clear, my belief is that AIs could care about us, they could love us, they could view us as kindred spirits. But I feel it is a flip of the coin; they could very easily see us as unstable and dangerous... I would believe that there is a whole bell curve of possibilities ranging from benevolent to hostile.

When I look out in nature, I see all manner of interactions between superior and inferior species. Many times they coexist; however, there are plenty of instances where the superior species completely disrupts the ecology of the inferior species.

So I can't help but be skeptical of the claim that we human beings--i.e., the inferior species compared to a Super Intelligent AI--have nothing to fear from a Super Intelligent AI.

I keep hearing, "don't worry about it" without any reasons...

I'm desperate for a real reason to not worry about it.
 

klutedavid

Well-Known Member
Dec 7, 2013
9,346
4,381
Sydney, Australia.
✟244,844.00
Faith
Non-Denom
Marital Status
Single
I am at a loss trying to understand why we as a species are not absolutely terrified at the thought of Super Intelligent AIs. [...]

So I ask the question, Why?
It depends on the extent of the knowledge it could acquire. How much knowledge exists?

We possess an awesome amount of knowledge compared to someone who lived a thousand years ago. What use is all this extra knowledge?

In the end, I don't think knowledge on its own reveals very much at all.
 

dgiharris

I dunno. Is there a reason to think that this SIAI will more likely be like Skynet (Terminator) or Dr. Will Caster (Transcendence)?
Let me use an analogy to better make my point.

Why do we vaccinate our kids? Why do we collectively disapprove of those who do NOT want to vaccinate their kids?

In this modern day and age, if you don't vaccinate your kids, odds are probably 99% that your kids will be fine and not catch any of those serious diseases. So again, the question, why do we vaccinate our kids?

Well, the answer is that the lives of our kids are so precious that even a 1% chance of them dying from something that is easy to prevent is unacceptable. (Note: I'm aware of other reasons to vaccinate--herd immunity, etc.--but I'm focusing on this one reason to make the argument.)

So, I have to ask, why not employ the same logic when regarding the survival of the entire human race?

Let's say that there is a 99% chance that a Super Intelligent AI would be benevolent and a 1% chance it would wipe us out. Given we are literally talking about the fate of the entire human race, I think it is not a bad idea for us to put some serious safeguards in place.
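The vaccination analogy is an expected-value argument. Sketched with the post's hypothetical 1% figure (not a real risk estimate) and an assumed small safeguard cost:

```python
# Expected-loss sketch of the argument. The probability is the post's
# hypothetical; the safeguard cost is an assumed small number.
p_hostile = 0.01        # "a 1% chance it would wipe us out"
loss_if_hostile = 1.0   # normalize "losing everything" to 1
safeguard_cost = 0.001  # assume safeguards cost little by comparison

expected_loss_no_safeguards = p_hostile * loss_if_hostile  # 0.01
expected_loss_with_safeguards = safeguard_cost             # if safeguards work

print(expected_loss_no_safeguards > expected_loss_with_safeguards)  # True
```

The same arithmetic is what justifies vaccination: paying a small fixed cost is rational whenever it is cheaper than probability times stake, and here the stake is maximal.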

Conversely, how can you claim with 100% certainty that there is nothing to worry about? To date, a super intelligent AI has yet to be created. So it seems dangerous and arrogant for us as a species to be so cavalier about the threat level such an intelligence poses to us.
 

klutedavid

I am desperately trying to find a reason to support the belief that it will not happen. [...]

I'm desperate for a real reason to not worry about it.
I think there is enough in life to concern you, without worrying too much about the future.

Intelligent machines designed by a dysfunctional humanity, not likely?

More to the point, we all die anyway, so what difference does it make if a Terminator hastens your end?
 

dgiharris

We know an awesome amount of knowledge compared to someone who lived a thousand years ago. What use is all this extra knowledge?

Imagine you had all the knowledge of today but were transported to 1400 AD. What good would all that extra knowledge do?

Of course, if you do nothing with that knowledge, then obviously it is of no benefit. But if you were willing to employ and use that knowledge, you'd have unfathomable power compared to those around you.

Same with a Super Intelligent AI.
 

dysert

Member
Feb 29, 2012
6,233
2,238
USA
✟112,984.00
Faith
Christian
Marital Status
Married
As a professional software developer myself (for almost 40 years), I'm not the least bit concerned about AI taking control - at least not in my lifetime. I've seen what kind of software exists out there, and it's not pretty. There are large businesses running software that's barely able to produce accurate reports, let alone amass great amounts of information, become self-programming, and start building things. That's all science fiction. (And I like the Forbin Project more so than the HAL business ;-)

There is no need to worry. Self-modifying code that does anything useful is quite a loooong ways off.
 

John Bowen

Well-Known Member
Site Supporter
Aug 16, 2018
417
233
53
dueba
✟48,940.00
Country
Fiji
Faith
Christian
Marital Status
Single
Book of Genesis 1:26: God CREATED man to take dominion over the Earth. God gave us, and no other living thing on Earth, self-awareness. Only man can imagine something in his mind and then go out and create it, maybe something never before created; no animal can do that. No computer will ever be able to create something never created before, because its programming is limited to known knowledge.
 

Neogaia777

Old Soul
Site Supporter
Oct 10, 2011
23,291
5,252
45
Oregon
✟960,797.00
Country
United States
Faith
Non-Denom
Marital Status
Celibate
As a professional software developer myself (for almost 40 years), I'm not the least bit concerned about AI taking control - at least not in my lifetime. [...]
If it could feel, if it had emotions, and if it had a heart (and I would add, if it wondered or pondered whether it had a heart, or a soul or spirit, or wondered about a God in some fashion), and if we programmed it with this (see link below), then even if it could rewrite its own program, or change it entirely to whatever it wanted, I would wonder whether it would still have great respect for that originally hardwired code we gave it, and for love and the command to love, or whether it would see love as illogical, or as some foolish human thing, concept, thought, or idea...

If it had a "heart", basically, if it greatly and deeply respected and admired the whole concept of Love, then there might be hope for us, and it with us...

God Bless!

Link: If I were to program an AI with one root, primary command...?
 