Exist said:
"I don't think that robots can ever be conscious as a consequence of the manner in which they operate."
If you don't think things are aware because of the way they were built, then what makes us aware? What makes dogs more aware than jellyfish, but less aware than us?
And why is it that only things with "brains" are aware (rocks and plants aren't aware)?
You don't think the brain has anything to do with awareness?
I don't know exactly what makes us aware, but this doesn't bother me, because there are still many mysteries about how human beings operate. I suspect it has something to do with our ability to directly access abstract concepts, but I don't have a scientific explanation for it.
However, the only part of a robot we are interested in is its computer "brain," so we can think of a robot as essentially a computer program. A computer program is just a list of algorithms that the program follows, sometimes producing output or rearranging data. Every program can be broken down into these things, so this is all a robot would be able to do, though with quite complex algorithms.
But the problem is that human thinking cannot be broken down into algorithms, no matter how complex they are. There are a variety of ways to see this. Searle's Chinese room thought experiment shows (intuitively; it is by no means a proof) that one can create a program so complex that it appears to be thinking, when in actuality there is no understanding within the computer, just blind obedience to a rule book.
Then we have the mathematical arguments, which Penrose has really jumped on. There tend to be two main arguments, and they are related. One leans heavily on the incompleteness theorem, and it basically boils down to this: any algorithmic system will have certain statements whose truth it can never determine, or else it will have statements that are both true and false for it (essentially, the law of noncontradiction is void for it). This leads to two possible conclusions:
1.) Humans' accomplishments in math show that we are not bound by the incompleteness theorem; therefore we do not think in the algorithmic manner of robots.
2.) Human reason is flawed; forget the law of noncontradiction, we can't even know anything.
Most people choose option 1.
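For what it's worth, the incompleteness claim above can be stated a bit more precisely; this is just the standard textbook formulation of Gödel's first theorem, not anything specific to Penrose's version of the argument:

```latex
% Gödel's first incompleteness theorem (standard informal statement).
% Let F be any consistent, effectively axiomatized formal system
% strong enough to express basic arithmetic. Then there exists a
% sentence G_F in the language of F such that:
F \nvdash G_F \quad\text{and}\quad F \nvdash \neg G_F
```

That is, the system can neither prove nor refute $G_F$, even though (on the Penrose-style reading) we can see "from outside" that $G_F$ is true.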
The other argument is related, but I think it is more easily accessible and more powerful. It is based on the halting problem. What is that? Well, imagine that you wanted a computer to prove an existence theorem about the natural numbers. If a number of the type you are interested in exists, you could have it check one, then two, then three, etc. until it finds it. But if no such number exists, you have a problem, because the search will never find one (it will never halt). So the only thing that really matters for the existence proof is whether the algorithm that searches the natural numbers will halt or not. What we need, then, is another algorithm that checks whether a given algorithm will halt. The problem? It can be proven that such an algorithm is impossible to construct. So computers have a problem here: no program can decide, for every given algorithm, whether it halts, though it will be possible for a human to figure this out. So again, however we think, it seems it isn't in the same way that computers operate.
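The two ideas in that paragraph can be sketched in a few lines of Python. This is just the usual textbook framing; the function names `search`, `halts`, and `diagonal` are my own illustrative choices, not anything standard:

```python
def search(property_holds):
    """Look for a natural number with the given property.

    If a witness exists, this eventually returns it; if none
    exists, the loop runs forever -- the search never halts.
    """
    n = 0
    while True:
        if property_holds(n):
            return n
        n += 1

# This call halts, returning 12:
#   search(lambda n: n * n == 144)
# This call would loop forever, since no natural squares to 2:
#   search(lambda n: n * n == 2)

# Now suppose we had a universal halting checker:
#
#   def halts(program, argument):
#       """Return True iff program(argument) eventually halts."""
#
# Turing's diagonal argument shows it cannot exist. Feed it this:

def diagonal(program):
    if halts(program, program):   # if the checker says "halts"...
        while True:               # ...then loop forever;
            pass
    else:                         # if it says "loops forever"...
        return                    # ...then halt immediately.

# Ask: does diagonal(diagonal) halt? If halts says yes, diagonal
# loops forever; if it says no, diagonal halts. Either answer is
# wrong, so no such halts() can be written.
```

The `search` function is exactly the existence-proof strategy described above, and `diagonal` is the standard contradiction showing why a general halting checker is impossible.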
But I suppose the thing that interests me is: why would you think that we work like robots in the first place? Why would you think that robots have the ability to think? Almost every argument that I've heard boils down to an argument from ignorance, i.e., we don't know that robots can't think, therefore they can. If it's so clear to you, shouldn't there be a better argument?