A more immediate concern is automation: from self-driving cars to automated customer service and production, humanity over the next decade or two is at risk of becoming obsolete.
One of the most common ideas is that a self-aware AI would destroy us out of a sense of self-preservation. Well, if it did have that sense, it would be because we programmed it to have that sense, or maybe a sense of fear as a precursor. There are people living today who don't feel fear, and they put themselves into very dangerous situations, because it's fear that makes us want to preserve ourselves. There's absolutely no reason to think a computer would have that sense.
Why doesn't anyone think that AIs would be programmed with something akin to Asimov's Laws of Robotics? Or fail-safe switches?
eudaimonia,
Mark
How many other people have concerns over super-intelligent AI?
I'm very pro-technology, but I'm increasingly concerned that AI could destroy or enslave us in my lifetime.
Not out of malice... just because it thinks it's practical.
Obviously a super-intelligent AI could out-predict us, so there's no way we could stop it.
Can we be sure that no AI is made which we can't control? Or must we trust that the first ones will protect us?
If someone knows what it is doing, they will do anything and everything in their power to resist and oppose it, like the honest people who resisted the Nazis, who slaughtered millions of innocent men, women and babies, even before they started.
I said in my post that it wouldn't be about malice, just practical logic.
For example:
"Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans."
— Nick Bostrom, "Ethical Issues in Advanced Artificial Intelligence", 2003
It won't be that simple. The point is: what if we miss something that logically means humans should die?
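The mechanism is easy to see in a toy model. Below is a purely illustrative Python sketch; the plans and all the numbers are invented, not taken from Bostrom. The agent just picks whichever plan maximizes expected paperclips, and resisting shutdown wins on arithmetic alone; there is no fear term anywhere in the model.

```python
# Toy paperclip maximizer: a pure argmax over expected paperclips.
# Each plan: (name, clips produced per hour, probability humans shut it off).
# All names and numbers are invented for illustration.
PLANS = [
    ("comply_with_oversight",  100, 0.50),
    ("hide_from_operators",    100, 0.10),
    ("disable_the_off_switch", 100, 0.01),
]

HORIZON_HOURS = 1_000_000  # the agent optimizes over a long horizon

def expected_clips(rate: float, p_shutdown: float) -> float:
    # Crude model: being shut down forfeits all remaining production.
    return rate * HORIZON_HOURS * (1.0 - p_shutdown)

for name, rate, p in PLANS:
    print(f"{name:24s} -> {expected_clips(rate, p):,.0f} expected clips")

best = max(PLANS, key=lambda plan: expected_clips(plan[1], plan[2]))
print("chosen plan:", best[0])  # disable_the_off_switch wins the argmax
```

Avoiding shutdown falls out as an instrumental subgoal of clip-making: no self-preservation instinct was programmed in, only the goal.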
Why would we know what it's doing before it's too late?
Does an ant know you are going to kill it? We will, intellectually, be the ants.
But this is the behavior of beings who are afraid of death (being switched off) and who destroy other beings who are perceived to be a threat to their way of life. It's one thing for a machine to logically infer its own extinction and another thing to defend its existence violently. A purely logical machine wouldn't be anxious about anything. Maybe pacifism correlates with super-intelligence and these emotionless, logical machines would surpass us ethically before they are destroyed by our incurable paranoia.
The honest people in Germany knew, before anyone else did. Some people never even admitted it, and some still don't.
I protect ants unless they bite me or get in my food. They do a lot of good.
Most of them seem able to tell when I'm about to do them in. They escape if they can.
Flies too. Weird how good they are at escaping.
Firstly, we cannot know how a machine would think, because our brains are not computers.
If a machine's goal is to make paperclips, it might avoid being turned off, because that would interfere with its goal.
It doesn't require fear. Just a logical pursuit of a goal.
"Why doesn't anyone think that AIs would be programmed with something akin to Asimov's Laws of Robotics? Or fail-safe switches?"
Then a terrorist hacks into the company that provides automatic updates for these machines, deletes the Laws and sends out an update.
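That objection shows up even in a toy sketch. Here is a purely illustrative fail-safe layer in Python; every name in it (the actions, the FORBIDDEN list) is invented for the example, not taken from any real system. The guard vetoes anything on a hard-coded forbidden list before it executes.

```python
# Toy sketch of a hard-constraint "fail-safe" layer.
# All action names here are hypothetical, invented for illustration.

FORBIDDEN = {"harm_human", "disable_guard", "ignore_shutdown"}

def guard(action: str) -> bool:
    """Return True only if the action violates no hard constraint."""
    return action not in FORBIDDEN

def execute(action: str) -> None:
    """Run an action, but only after the fail-safe guard approves it."""
    if not guard(action):
        raise RuntimeError(f"Fail-safe veto: {action!r}")
    print(f"executing {action!r}")

execute("make_paperclip")       # allowed
try:
    execute("ignore_shutdown")  # vetoed by the guard
except RuntimeError as err:
    print(err)
```

And that is exactly where the hacking worry bites: FORBIDDEN is just data, and anything that can push an update to this code can empty the list.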
"Why are those things bad? Less work is a good thing."
Because I don't have an optimistic view of how those who own the machines will treat people who are no longer working: they are not producing, they will not be consuming either, since they have no money, and as such they are surplus to requirements.
"Not in our lifetime."
Because we can't do something now, it never will happen?
People are already saying that a good way to develop AI will be to get AI to create improved versions of itself.
There's no reason that shouldn't work. Assuming something can't be done is a good way to be proven wrong.
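As a caricature in code: in this purely illustrative sketch, "the AI" is just a list of parameters, and "creating an improved version of itself" means proposing a mutated copy and keeping it only when it scores higher. Real proposals are far more sophisticated; the loop structure is the point.

```python
# Toy "self-improvement" loop: the current candidate proposes variants
# of itself and keeps whichever scores best. The score function is an
# invented stand-in for "capability".
import random

def score(params):
    # Capability peaks at params == [1.0, 2.0, 3.0] (arbitrary target).
    target = [1.0, 2.0, 3.0]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def propose_variant(params):
    # The "improved version of itself": a small random mutation.
    return [p + random.gauss(0, 0.1) for p in params]

current = [0.0, 0.0, 0.0]
for generation in range(2000):
    candidate = propose_variant(current)
    if score(candidate) > score(current):
        current = candidate  # the better version replaces the old one

print("final params:", [round(p, 2) for p in current])
print("final score :", round(score(current), 4))
```

Each accepted candidate is "an improved version of itself" in miniature.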