First, I just want to emphasize that you have questioned only the first premise, "Computers can't have knowledge."
The rest follows. I suspect your argument resolves to a semantic one - that knowledge and understanding are human-only by definition - but I hope not.
It is an intentional correspondence, something that one knows oneself to hold.
I take it you'd deny that when a computer is asked whether it has information about an object in its data store, looks up the index to check, and finds that object, it knows it has the requested information? If so, we need to agree on functional definitions of our terms; what functional definition or description are you using for 'knowledge'?
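To make that functional sense concrete, here's a minimal sketch (Python; the store, keys, and method names are invented purely for illustration) of a system that, when asked, checks its index and reports whether it holds information about an object:

```python
# Purely illustrative: a store that checks its index and reports
# whether it holds information about a named object.

class DataStore:
    def __init__(self):
        self._index = {}          # object name -> stored record

    def add(self, name, record):
        self._index[name] = record

    def has_information_about(self, name):
        # "looks up the index to check, and finds that object"
        return name in self._index

store = DataStore()
store.add("Socrates", {"category": "human being"})
print(store.has_information_about("Socrates"))   # True
print(store.has_information_about("Pegasus"))    # False
```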
When the human comes to knowledge that "Socrates is a human being," he actually understands something about Socrates (and also the essence of human beings). When a computer 'says' "2 + 2 = 4" there is no real understanding present.
I'll grant you that a calculator has no need to understand the rules it applies, but "Socrates is a human being" is a simple categorization, set theory - the only thing it tells you about human beings is that Socrates is one; hardly the 'essence' of human beings, particularly if, as for many people, Socrates is just an odd name.
If you think it implies understanding, then computers have had understanding since the 1980s; I suggest that understanding is more than applied set theory. Just to clarify, what functional definition or description are you using for 'understanding'?
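To illustrate what I mean by applied set theory, here is a trivial sketch (the names and the set's contents are invented for the example) of "Socrates is a human being" as bare set membership:

```python
# "Socrates is a human being" expressed as plain set membership.
# The only thing it encodes about human beings is that Socrates is one.

human_beings = {"Socrates", "Plato", "Hypatia"}   # illustrative extension

def is_human(x):
    return x in human_beings   # all the statement tells us

print(is_human("Socrates"))     # True
print(is_human("Bucephalus"))   # False
```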
It is just manipulating a representational system that has been programmed to have some semblance to reality in precisely the way it was told to manipulate it.
Which echoes humans' manipulation of the representational models they have learned to construct to have some semblance to reality, in the way they've learned to manipulate them...
The elusiveness of knowledge isn't the problem, but rather the inaccessibility (i.e. no access in principle).
I still don't see the problem. It isn't all-or-nothing; you don't need absolute certainty before you can proceed, your knowledge just has to be good enough to do the job. For example, Newtonian Mechanics is wrong - but it's a good enough approximation for all everyday human-scale applications, and good enough for NASA to use it to send probes on tours of the planets. Similarly, evolutionary success means traits that are good enough to ensure production of viable offspring down the generations.
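As a rough worked example of 'good enough' (the 17 km/s figure, roughly an interplanetary probe's cruise speed, is approximate), the relativistic correction that Newtonian Mechanics ignores is on the order of a part in a billion at those speeds:

```python
# Rough illustration: size of the relativistic correction at a
# typical interplanetary probe speed (approximate figure).

import math

c = 299_792_458.0        # speed of light, m/s
v = 17_000.0             # probe speed, m/s (approximate)

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
print(f"Lorentz factor gamma = {gamma:.12f}")
print(f"Correction to Newtonian prediction ~ {gamma - 1:.2e}")   # ~1.6e-09
```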
There are several different ways to approach this topic. First I would approach it via truth where knowledge is a kind of possession of truth, in which case what I said above about proper intentionality comes into play.
I think defining knowledge in terms of truth (ignoring analytic and tautological truth) will lead you into murky waters, not least because of the distinction between what you think is true, and the actual truth (i.e. correspondence to reality), which is uncertain. I suggest knowledge would be better defined in terms of possession of information (i.e. interpreted data).
Another angle of approach is that of meaning. Knowledge has intrinsic meaning, bits stored in a computer do not.
You're comparing different levels of abstraction. Knowledge is information stored in the brain as associative maps of synaptic connections; bits stored in a computer can have meaning if they can be interpreted on a representational level, e.g. as symbols, maps, or associations. If you add an intermediary layer of abstraction, knowledge can be stored in a computer as emulations of associative maps of synaptic connections (artificial neural networks), similar to those in the brain. In terms of information processing, there is no significant difference.
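As a toy illustration of that intermediary layer (everything here is invented for the example), the same stored bits mean nothing by themselves, but acquire meaning once an interpretive layer maps them to symbols and associations:

```python
# The same bit pattern, given meaning by an interpretive layer.

raw_bits = 0b01000001              # just a bit pattern on its own

# One interpretive layer: the bits read as a character symbol
print(chr(raw_bits))               # 'A'

# Another layer: symbols linked in a simple associative map,
# loosely analogous to associations between concepts
associations = {
    "Socrates": {"human being", "philosopher"},
    "human being": {"mortal"},
}
print("mortal" in associations["human being"])   # True
```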
A computer is able to manipulate the bits in a way desirable to human beings--for as a tool of human beings it is programmed by them and completely dependent on them insofar as its interactions with the world will have any value--but there is no intrinsic meaning.
If you define intrinsic meaning as something relevant only to humans, and knowledge as that which has intrinsic meaning, you'll inevitably end up thinking only humans can have knowledge.
I see no reason, in principle, why we couldn't eventually make a system (as an android or robot) that would be independent, learn from experience, and have basic drives related to its own long term function and development. The design goal (independence, development) would necessarily be of our devising, but such a system could generate and set its own intermediate goals, and go off to do its thing. IOW, if we give a system the drives and goals evolution has given us, there's no reason it couldn't do most of what we do (although not always in the same way, e.g. reproduction). Whether we'd want to do that is another matter, but space exploration might be one reason.
Not in the way you suppose. Computers do science in the same qualitative way that a thermometer 'does science': by reporting back to human beings what they asked to be reported and what they created the instrument to report. The case of AI is merely different in degree. In that case the requested result is indicative of other data gathered by the computer/instrument and the physical consequences are more impressive.
I don't see that as fundamentally different from what humans do given the structure and drives that resulted from evolution. In our case, the physical consequences are even more impressive.
The idea that it is only a question of degree is a strawman, and a common one at that. Add a billion orders of magnitude to the most advanced computer and it will still be incapable of truth and knowledge.
I disagree. Simple knowledge is done already; analytic truths are done already; synthetic truths will always be uncertain for both computers and us. If we want computers with the cognitive capacity to exceed what we can do, we can build them. Ever since the very first computers, pundits have been telling us what they could never do, and one by one those 'impossibles' have been achieved. We now have real-time text, voice, and image recognition; real-time text and speech language translation; laptop chess software that can beat a chess grandmaster; a Go system that can beat the best Go masters; a computer Jeopardy champion; self-driving cars; systems that can learn and teach themselves; etc. Each an achievement previously thought impossible. These are all limited-domain systems, but systems are becoming more flexible - the Jeopardy computer, IBM's Watson, is being redirected to medical diagnosis, legal advice, and cookery. If generalised multi-domain systems are required, we can build them.