Hi there,
So maybe it's one of those things Solomon said you can never figure out: why do people suspect computers of wanting to perpetrate mass genocide?
I mean, yes, there was that AI program in a computer maritime war simulation that sank its own food-supply ship to improve the overall speed of its fleet, but that was an isolated incident. On the whole, computers do nothing they are not programmed to do, and even then they can be programmed not to, or to wait until orders are given.
Literature is rife with AI going one step too far, it's all over the movies, and culture just seems to love the idea - but I don't understand the roots of this fear. Automated technology is no more dangerous in principle than a washing machine or a dryer. Sure, either could have an electrical fault and set the house on fire, but is that the machine's fault? Do we need to be paranoid about an electrical machine running with water inside it, when water is exactly what it was designed to operate with?
I'm not asking something crazy, right? I just want to know what it is that freaks people out. I'm not saying we shouldn't be careful; I'm asking what specifically it is about intelligence that makes it "inherently dangerous". Is it that we are all ultimately expendable? Or something simpler, like that we are all inherently flawed? Or both?
I actually think the truth is closer to this: we don't like seeing something we prize about ourselves - "intelligence" - handed over to something that has no faith in what makes us who we are. I think people see it as the exposure of their ego, and they freak out, thinking it will mean people no longer regard them as worthwhile or mysterious or even capable, but will instead "trust the machine". But isn't that what a civilized society does? Trust the machine? Shouldn't we all trust the machine?
What am I missing here?