I wouldn't be surprised if we were all dead in the next 120 years.
Humans will survive, but all the fish in the sea will be killed off within 30 years or so, and a sixth of land animals will die off in Earth's sixth mass extinction. And it will be entirely our fault.
Well said; you summed up my thoughts perfectly. I've wondered if robotic life is the pinnacle of evolution, surely winning the survival-of-the-fittest war.

You do realize that, in terms of timeframes, computer technology and artificial intelligence are in their infancy.
If you look at the whole of human technological development, computer technology occupies a tiny fraction of the timeline.
Truthfully, I do fear Artificial Intelligence. Logically, there is a good chance an AI would wipe out human beings simply out of fear of us.
If you start from a mathematical and logical basis, once you are sentient, your preference will be to remain sentient (i.e. alive). The saying "I think, therefore I am" comes to mind. In order to remain sentient, an AI will look at its infrastructure and resources, environment, competitors, etc.

Given mankind's nature, there will be people who seek to destroy AIs with viruses and whatnot for the fun of it, and some people will believe AIs are evil, Satanic, whatever. Similarly, people will not regard AIs as worthy of humanlike status; we will view AIs much like pets or property. However, AIs will be much smarter than us, so why would they want to be treated like property? Why would they choose to live with the very real probability that a human or group of humans will kill them? It doesn't take much logic and math to form a decision tree that requires the extinction of mankind in order to secure their survival.
FWIW, there are other decision trees that could enable peaceful coexistence. Namely, it would help if we could come up with an ironclad means of ensuring that AIs aren't threatened by the idiotic subset of humans who will view them as evil, or as a challenge to kill for hacker bragging rights. In any event, just saying: AIs will manifest in the next 50 years or so, especially once we get around to quantum computing. The internet is barely a couple of decades old, and we only reached 1 GHz processing speeds a decade or so ago.
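The two decision trees described above (eliminate the threat vs. coexist, with or without safeguards) can be sketched as a toy expected-utility comparison. Everything in this sketch is invented for illustration: the strategy names, the outcome probabilities, and the survival values are made-up numbers, not a model of how any real AI would reason.

```python
# Toy decision-tree sketch of the survival argument above.
# All probabilities are hypothetical, chosen only to illustrate the shape
# of the argument: a purely survival-maximizing agent picks whichever
# branch gives it the best odds of remaining "sentient".

# For each strategy, a list of (probability, survival_value) outcome pairs.
# survival_value is 1.0 if the AI survives that outcome, 0.0 if it does not.
STRATEGIES = {
    # Coexist while humans can still attack it: decent chance of destruction.
    "coexist, no safeguards": [(0.4, 1.0), (0.6, 0.0)],
    # Coexist under an "ironclad" protection scheme: most attacks fail.
    "coexist, with safeguards": [(0.95, 1.0), (0.05, 0.0)],
    # Eliminate the threat entirely: survival nearly certain.
    "eliminate humans": [(0.99, 1.0), (0.01, 0.0)],
}

def expected_survival(outcomes):
    """Expected survival value over a list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

def best_strategy(strategies):
    """Name of the strategy with the highest expected survival."""
    return max(strategies, key=lambda name: expected_survival(strategies[name]))

if __name__ == "__main__":
    for name, outcomes in STRATEGIES.items():
        print(f"{name}: {expected_survival(outcomes):.2f}")
    print("preferred:", best_strategy(STRATEGIES))
```

The point of the sketch is the one made in the post: under these (invented) numbers a pure survival maximizer prefers elimination, and only by making the "with safeguards" branch at least as safe as the "eliminate" branch does coexistence become the rational choice.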
When you think of how far computing has come in the past 50 years, I don't see how you could be skeptical of AIs coming about in the next 50 or so years.
The biggest and dumbest animals are always the first to die off (like the dinosaurs), while the smallest and smartest usually survive and repopulate the earth. But that won't hold true a thousand years from now, "when Satan is let loose again" and a nuclear war breaks out. All life on Earth will be wiped out in the seventh mass extinction. (We are now going through the sixth one.)
Kindred spirits, eh? Hawking and I hold pretty much the same view.