The thing that truly frightens me about artificial intelligence is that it would be as intelligent as us, or (more likely) more so, but its methods would be completely alien. We don't really understand how our own consciousness works, or why we have consciousness at all, so if an artificial intelligence does in fact become conscious, it will almost certainly not be by our design. Our design will have given it tasks to complete, standards its behavior must follow, and so forth, but nothing regarding consciousness, because we don't know what causes consciousness to arise in the first place.

A consciousness arising this way would be more dangerous than anything we have encountered thus far: at least with animals we share similar brain structures and an evolutionary history. With that in mind, there is no way to determine what a true artificial intelligence would decide to do, if given the capability to decide. It would not look at situations in terms of emotion or human concerns, so anything would be possible. It might knowingly kill a human, for example, for reasons that would be incomprehensible to us, because its very way of thinking would be completely alien. This is something I think we have to keep in mind before letting computers make any sort of important decision.