Warning - another long post (last one, I promise!)
...As long as an experience is attended to, the intentionality necessary for knowledge is present.
Have you never discovered or realised that you've learnt something without attending to it at the time?
... the crucial stand you are taking is in denying humans the traditional, colloquial understanding of knowledge...
I'm not saying anything about what humans have or don't have, or what they can do or can't do. I'm simply suggesting a set of basic definitions, suitably contextual and reasonably close to the dictionary versions, that we can match against any system, human or otherwise, to determine whether it has the corresponding basic property. If a system has more than the basic features, we can expand the definition and add a qualifier (e.g. 'self-knowledge').
... it will become important not to bias definitions of knowledge, understanding, truth, and belief in favor of computational realities.
The definitions should be able to be compared to any system, organic or inorganic.
We must take them at face value, as human realities, and see if they also apply to computers.
Sure; if you disagree with or want to add to the definitions I suggested, explain why and suggest a revised definition.
...now I assume you see that I was not making a merely tautological or analytic statement?
I see that you didn't intend to.
If you did not previously know that Socrates is a human being, and a sound syllogism led you to that conclusion, then you would have new information. (It is simply untrue that everyone who knows Socrates knows that he is a human being or that everyone who knows what a human being is knows that Socrates was one)
Precisely my point - that's the only information it gives you. Of itself, it tells you nothing about human beings (let alone their 'essence'), other than that Socrates is one. You may be able to tell something about humans from what you already know about Socrates, and vice-versa, but that's all.
So if "Applying rules without understanding them in order to come to a "conclusion" is not understanding" for a computer, and you admit to applying rules without understanding them, then - by your own logic - you are not understanding either. This kind of confusion is why I think a clear definition of understanding is necessary.
We create computers, we do not create human beings.
Yet the human population continues to grow...
... the human capacity for self-knowledge is perhaps one sign that he is fundamentally different from a computer.
In general, yes; but computers can have self-knowledge, and have been made self-aware in a limited sense (also see Self-aware Mario); it's not human-level self-awareness, but it's important not to confuse what has been achieved to date with what is potentially achievable.
Computational acts can be exhaustively defined, human knowing cannot.
The computational acts of anything more than a trivially simple artificial neural network cannot be defined much more exhaustively than human knowing can - not least because the two substrates are architecturally similar. This can be a problem for understanding why ANNs do what they do.
There is an asymmetry in our definitional capabilities with respect to the two entities. This is why your requests for definitions do not strike me as altogether helpful (although provisional definitions can sometimes be useful).
A definition is a concise description or explanation of the meaning of a term. A concept is an abstract idea, so concept definitions must also be abstract, independent of any particular entities. If you feel we can't do this for concepts such as knowledge and understanding, then I guess we must change the subject, because without a definition of a concept, effective communication about it isn't possible.
Consider a parrot. I say "blue," he says, "blue." I say, "red," he says, "red." Does this mean that the parrot knows what blue and red are?
Alex the parrot knew and understood that, and a whole lot more.
The programmer designs optical hardware that parrots the human eye, calibrates the optical hardware to quantify the frequencies of light visible to the human eye, divides that quantified/interpreted input according to the average human color ranges for "blue," "red," etc., and tells the computer to record and store the cartesian coordinate pixel information alongside the assigned color (etc.).
My point wasn't about the colour of the blocks, but the knowledge and understanding of their spatial relationships, as I thought I'd made clear.
According to your own definition of understanding, because no generalization, conceptualization, or abstraction took place.
On the contrary, deriving the rules that apply from a number of example situations, and then correctly applying them to novel situations, involves abstraction (abstracting the rules from the examples), conceptualization (the rules express the concepts), and generalization (applying the rules to novel situations). This is a clear example of understanding.
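To make that concrete, here is a minimal sketch (my own toy example, not the blocks-world system under discussion) of a learner that abstracts the rule for a spatial relation from labelled examples and then applies it to novel configurations; the coordinate representation and the threshold method are illustrative assumptions.

```python
# Toy illustration: abstract the rule for "above" from labelled examples
# of block heights, then generalise it to configurations never seen before.

def learn_relation(examples):
    """Abstract a rule from (a_y, b_y, label) examples.

    Each example gives the vertical positions of blocks A and B and
    whether A counts as 'above' B. The abstracted 'concept' is the
    height-difference threshold that best separates the labelled cases.
    """
    diffs = [(a_y - b_y, label) for a_y, b_y, label in examples]
    best_score, best_threshold = -1, 0.0
    for t in sorted(d for d, _ in diffs):       # candidate thresholds from the data
        score = sum((d > t) == label for d, label in diffs)
        if score > best_score:
            best_score, best_threshold = score, t
    return best_threshold

def above(a_y, b_y, threshold):
    """Apply the abstracted rule to a novel situation."""
    return (a_y - b_y) > threshold

# Training examples: (height of A, height of B, is A above B?)
examples = [(3, 1, True), (2, 5, False), (4, 4, False), (6, 2, True)]
t = learn_relation(examples)
print(above(7, 3, t))   # novel configuration -> True
print(above(1, 8, t))   # novel configuration -> False
```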
Are humans capable of knowledge that is not merely focused on practical manipulation? Knowledge for the sake of knowledge?
Certainly (assuming that by 'capable of knowledge' you mean 'capable of acquiring knowledge'). Knowledge for the sake of knowledge is fine; a goal need not involve practical manipulation, nor a defined end point.
Let me just say that in this, as in the "tabula rasa" language below, I don't believe you.
It isn't the language that's tabula rasa, it's the language acquisition (learning) system. You don't need to believe me; I've given you links to the media article, the published paper, some relevant results, and the full software and documentation - you can verify it for yourself.
What does this even mean? It strikes me as very vague, like hand-waving.
It's a (very simplified) description of the kind of artificial neural network used in the ANNABELL system.
The behavior of the AI derives from the programming and the input, and nothing else. You desire to tell me that the behavior of the AI somehow transcends the code, but clearly that is not the case.
Yes and no; that conflates programmed behaviours and learnt behaviours (which function at a higher level of abstraction), and the distinction is crucial. I don't think you'll find many AIs with hard-coded behaviours these days; most are based on artificial neural networks and trained to behave in the desired way. So the ANNABELL system was programmed to be a network system that could learn; it had no program code or data relating to language. It was trained to learn a language through linguistic interaction alone - which, incidentally, suggests that Chomsky's idea of an innate grammar is not required for language acquisition.
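For illustration, here is a minimal sketch of the programmed/learnt distinction (this is not ANNABELL, which is a far larger cognitive architecture; the single-neuron learner and the logic-gate tasks are my own assumptions). The code defines only a generic learning mechanism; the behaviour the system ends up with is determined entirely by the training examples it is given.

```python
# Nothing task-specific is hard-coded below: the same program, given
# different experience, ends up behaving differently.

import random

def train(examples, epochs=200, lr=0.1):
    """Train a single artificial neuron on (inputs, target) pairs."""
    n = len(examples[0][0])
    weights = [random.uniform(-0.5, 0.5) for _ in range(n)]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            out = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
            err = target - out
            weights = [w + lr * err * x for w, x in zip(weights, inputs)]
            bias += lr * err
    return weights, bias

def respond(inputs, weights, bias):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

and_examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
or_examples  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w_and, b_and = train(and_examples)
w_or, b_or = train(or_examples)
print(respond((1, 0), w_and, b_and))  # learnt to behave like AND -> 0
print(respond((1, 0), w_or, b_or))    # learnt to behave like OR  -> 1
```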
If the AI produces unexpected behavior and fulfills an (arbitrary) goal set by a human being other than the programmer, then the fact that it is unexpected merely derives from the ignorance of the programmer. If he was a better programmer he would have seen the output ahead of time and it wouldn't have been unexpected.
That assumes the programmer knows what his algorithms will be used for; for example, evolutionary algorithms often produce unexpected but highly effective results, without the programmer having any way to predict what they might be.
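A minimal sketch of the idea (the fitness function, genome encoding, and parameters are arbitrary illustrations of mine, not any specific published algorithm): the programmer writes only the variation-and-selection loop; the solutions it finds are not written into the code, and the goal can be set after the algorithm exists.

```python
# The programmer supplies variation and selection; the solution emerges.

import random

def evolve(fitness, genome_length=20, pop_size=50, generations=100):
    """Evolve bit-string genomes towards higher fitness."""
    population = [[random.randint(0, 1) for _ in range(genome_length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Variation: offspring are mutated copies of survivors.
        offspring = [[(bit ^ 1) if random.random() < 0.05 else bit
                      for bit in parent]
                     for parent in survivors]
        population = survivors + offspring
    return max(population, key=fitness)

# An arbitrary goal, chosen after the algorithm was written:
target = [random.randint(0, 1) for _ in range(20)]
best = evolve(lambda g: sum(a == b for a, b in zip(g, target)))
print(sum(a == b for a, b in zip(best, target)), "of 20 bits match the goal")
```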
AI sits well below complex animal life and is believed by some to rival human knowledge and understanding, and all the while/millenia philosophers have pointed out the qualitative differences between humans and animals, thus a fortiori accounting for the differences between humans and AI. Does that strike you as strange?
No, should it? AI at present is domain-specific, and those domains are narrow. Within those domains AI can rival or exceed human knowledge and understanding, but I don't think anyone's claiming more than that.
If you agree that AI is less complex than complex life, then wouldn't it be easier to argue that apes and humans are on the same level?
I don't think that's a coherent question - AI is, by definition, less complex than life that is more complex than AI; how complex that is depends on the AI and the domain in question. And humans are apes - specifically, Great Apes (Hominidae).
...in your account you ascribed the construction of the representational models to the human being. Thus even in your account agency creeps in.
The human being learns from experience and constructs representational models using a biological neural network. An artificial neural network can learn from experience and construct representational models in a broadly similar way (though with orders of magnitude less processing complexity & sophistication). Agency (acting to some effect, i.e. interacting with the environment) is obviously necessary; if, by 'agency', you mean something different, explain what you mean and why you think it is necessary.
Computers, understood properly, are purely passive in the sense that they are not self-moving (such as my billiard illustration above explains).
That was a plain assertion. Computers respond to their inputs, and humans to theirs (perceptions and sensations).
In order to equalize humans and computers, you must claim that humans too are purely passive, determined, and totally moved by antecedent conditions.
Who said anything about 'equalizing' humans and computers? Humans are orders of magnitude more complex than any current computer system.
Humans are not passive; they actively interact with their environment. But yes, I think those interactions are determined by antecedent conditions (there may be a smidgen of randomness, but it's insignificant). When you make a decision or a choice, or take an action, do you base it on anything? Do you have a reason for it?
This is an undeniable way in which your view represents a demoting of the human being rather than a promoting of the computer, for the idea that a human is self-moving, is an agent, is common belief
I don't see it as a demotion at all; humans are the result of over 2.5 billion years of evolution, the most awesomely complex and sophisticated system yet discovered. It's just a shame it's still so unreliable and prone to magical thinking...
If Frumious is right, then humans are not truly agents. True or false?
Depends what you mean by 'truly agents'. Care to give a coherent definition or explanation?
...you seem to be denying speculative knowledge, human agency, and qualitatively different "exemplar goals" from the initial evolution-driven goal.
I'm not denying 'speculative knowledge', or human agency (if you mean acting to some effect), and I have no problem with goals that appear entirely divorced from the evolutionary goal of reproduction - it's a feature of complex systems that you can get emergent, indirect, or unexpected behaviours.
Recall that you're arguing against my idea that knowledge has intrinsic meaning whereas bits stored in a computer do not.
Ah, no; you can see from my definitions that I agree that raw data (e.g. bits stored in a computer) have no meaning, that information is data given meaning, and that knowledge is stored information. So I argue that knowledge has meaning by definition.
We are comparing the human interpreter to the computational interpreter, where the input to the human is the entire physical world and the input to the computer is that limited data it receives.
Um, no again. Input to the human is restricted to the limited data it receives through its senses - a wider variety than most computers receive, but in many cases considerably less in quantity (consider the LHC computers and 'big data' processors).
Now the bits stored in a computer that represent its "knowledge" do not have any intrinsic meaning apart from the meaning the human bestows on them and transfers, through programming, to the computer.
True for hard-coded computer systems, not so much for artificial neural network learning systems. For both humans and ANN learning systems, the patterns of input data (for humans, data from the senses, passing up the afferent nerves to the brain) have no intrinsic meaning apart from that imposed by the processing areas (in humans, the first-stage sensory processing areas), which have been trained (not programmed), through experience (interaction with the environment), to interpret them in useful ways.
Your claim is that the basic interpreter bestowed on the computer by the human being is not qualitatively different from the basic interpreter bestowed on the human being by evolution. If evolution exhaustively explains the human being, then I believe you would be correct.
OK.
Note too that in this case no intrinsic meaning exists anywhere, only artificial and imposed meaning.
Data only has meaning to some system that can interpret it (as information).
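A trivial illustration of the point (my own example, not something from the discussion): the same raw bits yield different information under different interpreters, and none without one.

```python
# The same four bytes, read by three different interpreters.

import struct

raw = bytes([66, 108, 117, 101])        # just four bytes

print(raw.decode("ascii"))               # as ASCII text: 'Blue'
print(struct.unpack("<I", raw)[0])       # as a little-endian integer: 1702194242
print(tuple(raw[:3]))                    # as an RGB colour: (66, 108, 117)
```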
The question we must ask ourselves is whether humans can do things that qualitatively transcend computers, animals, and that trajectory of acts which evolution renders possible. Whether humans are capable of qualitatively different acts than evolution is able to account for--much more than a roomba in a rectangular room. So far I have offered a few candidates:
Agency. Humans can act, and decide whether or not to act. (Free will, ATDO, etc.)
Humans are capable of speculative knowledge, knowledge for the sake of knowledge, truth apart from manipulation.
Neither agency (acting and deciding whether or not to act), nor speculative knowledge or knowledge for the sake of knowledge, necessarily transcends what evolutionary or AI systems make possible; and determinism doesn't mean you can't decide to act or not, it simply means that what you do (or don't) decide is determined. I don't know what you mean by 'truth apart from manipulation' - truth is correspondence with reality, and (apart from analytic truths) necessarily uncertain.
Free will and ATDO are more complicated; if you like, I can address them in a separate post, to stop this one becoming even longer...
...For the human "All bodies are extended" has meaning even apart from practical applications. We know what each term means and that the statement is true.
It is true by definition. We know this if we have been told it or have learnt about it (e.g. from early modern philosophy or substance dualism), and we understand it if we know about objects and the world. But I don't see why one couldn't, in principle, train an ANN system to understand it in simple terms (e.g. to explain what it means); I also don't see why one would.
We can ponder it, come to see it more clearly, examine it, etc. This is not so for the computer.
Like humans, computers can only do such things if they have the cognitive capability and training. But if you could build an ANN system with the structure and complexity of a human brain, and train it as thoroughly, I would expect it to be able to do so - although there's really no good reason to do so...
...it's as true now as it was then. The ancient philosophers made similar arguments 2500 years ago and they have only sharpened with time.
If you read the sample interactions (link), you'll see the ANNABELL system behaving as one interpreter and the teacher as another, in conversation about things and events in the world (the 3rd pole), comparing favourably to a 5 year-old and mother on the same subjects.
They can approximate it, but they don't understand it. Maybe a circle is a better illustration.
What is a circle? A perfect circle? It is an infinite number of infinitesimal points equidistant from a center point. Do we really know what that is? It doesn't exist in the physical world; it is impossible to see one; we can only approximate a perfect circle in material realities.
Geometrical shapes are abstractions that can be formulated mathematically. Computers can handle that kind of abstraction with ease - it's applying such abstractions to material approximations that they've had difficulty with, though this has been much improved recently.
To say that we truly understand something that can only be approximated by material realities tells us a few things. It tells us that computers can't 'understand' perfect circles, and it tells us that we transcend the material realm. Presumably you would disagree that we can really understand a perfect circle?
We can understand geometrical shapes in various ways - as can computers. You'll have to explain what you mean by 'truly understand' and 'really understand'; I've suggested a basic definition - do you accept it, or would you like to supply one of your own?
The real difference between us on this point is in whether the human starting point is an approximation or a kind of identity--whether we approximate truth or really know truth. We agree that the computer can only approximate truth. If the human really knows truth then the computer will only ever have an approximation of the knowledge and truth that we have (as I said before).
I don't agree that a computer can only approximate truth because I don't know what you mean by that (example?). Truths about the world (non-analytic truths) are necessarily uncertain, so in that sense, neither computer nor human can 'really know truth'.
You are using knowledge in the sense of an approximation, such that a computer has knowledge of circles insofar as it can approximate one. Real knowledge of a circle entails knowledge of a perfect circle, an understanding that necessarily transcends the material world.
Nope - I have already given my definition of knowledge, which doesn't involve approximation. A perfect circle is a mathematical abstraction; computers can handle such abstractions easily.
We could teach a monkey or a computer to draw a circle. They would come to associate our command with their action of drawing. Even if we said "Circle!" a billion times and they drew a billion circles, they would not come to understand the definition of a circle, despite their practical ability to produce one.
Monkeys are not computers. Computer systems can be built that infer or idealise 'pure' geometric forms from multiple approximations.
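A minimal sketch of how that can work (my own example, assuming noisy sample points and a standard algebraic least-squares fit, not a description of any particular vision system): given imperfect material approximations of a circle, the fit recovers the idealised centre and radius.

```python
# Recover the ideal circle behind noisy, imperfect samples of one.

import math
import random
import numpy as np

def fit_circle(points):
    """Kasa-style least-squares fit of x^2 + y^2 + D*x + E*y + F = 0."""
    A = np.array([[x, y, 1.0] for x, y in points])
    b = np.array([-(x**2 + y**2) for x, y in points])
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = math.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# An imperfect 'drawn' circle: noisy samples around centre (3, -1), radius 2.
samples = []
for _ in range(200):
    t = random.uniform(0, 2 * math.pi)
    samples.append((3 + 2 * math.cos(t) + random.gauss(0, 0.05),
                    -1 + 2 * math.sin(t) + random.gauss(0, 0.05)))

print(fit_circle(samples))   # approximately (3.0, -1.0, 2.0)
```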
Your whole case rests on the claim, "It drew a circle, therefore it understands what a circle is."
I haven't made such a claim, nor do I think it is true. I've given my definition of understanding; if you disagree with it, explain why; if you have a better one, provide it.