So knowledge (that which is learned through experience or education) is only possible if there is intent? Does this mean that what is learnt from unintentional (e.g. accidental) experiences is not knowledge? That doesn't sound right; it seems to me that a great deal of knowledge about the world is acquired without intent (especially knowledge of the dangers around us).
The intellect is a complicated reality, with active and passive aspects. Yet I would hold that a sort of intentionality does accompany knowledge. As long as an experience is attended to, the intentionality necessary for knowledge is present. If a professor is monotonously drawling on and I do not attend to the words but rather place my attention elsewhere, I will have no knowledge of the lesson. Dangers and imminent realities demand the attention requisite for knowledge.
You implied that the broken clock doesn't have knowledge, despite being 'right' twice a day; that the working clock has knowledge compared to the broken clock, but that neither has 'knowledge in the true sense' (whatever that means). What you really mean by knowledge is less clear to me now than before you started explaining it...
haha, perhaps it was a bad analogy. I was just trying to make use of the intuitive understanding we have of a broken clock that is right twice a day not being "right" in the same way that a working clock is right. I think that analogy stretches to AI. AI is like the broken clock that is right twice a day.
Here's a very simple set of functional definitions that I find useful in this context (this is all personal opinion, provisional and open to discussion; a toy code sketch follows the list):
'Data' - the raw output of a system or measuring device (numeric or analogue), which has no meaning of itself; input to a processor or interpreter.
'Information' - interpreted data.
'Interpretation' - conversion to a form or format that has meaning to a target system.
'Meaning' - the set of associations that establish the context of the data with regard to the source system.
'Knowledge' - retained information, available on demand (e.g. a library is a repository of knowledge; I know my phone number).
'Understanding' - generalization, conceptualization, or abstraction of knowledge in relevant contexts; e.g. knowledge of the algorithms, behaviours, or heuristics that gave rise to the data, and how to apply them and/or express them in alternative ways (such as explanation).
'Truth' - the correspondence of knowledge or information to reality.
'Belief' - a certainty that one's knowledge is true, especially when unsupported by evidence.
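To make the first few of these concrete, here's a toy pipeline (purely illustrative; the sensor, the names, and the numbers are all invented):

```cpp
// Illustrative only: mapping the definitions above onto a toy pipeline.
#include <iostream>
#include <map>
#include <string>

// 'Data': the raw output of a measuring device, with no meaning of itself.
int read_sensor() { return 512; }  // raw 10-bit ADC count

// 'Interpretation': conversion to a form that has meaning to the target system.
double interpret(int raw) { return raw * (100.0 / 1023.0); }  // degrees Celsius

int main() {
    int data = read_sensor();       // data
    double info = interpret(data);  // information

    // 'Knowledge': retained information, available on demand.
    std::map<std::string, double> knowledge;
    knowledge["lab temperature"] = info;

    std::cout << "Retained: " << knowledge["lab temperature"] << " C\n";
}
```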
Okay, thanks. As noted above, the crucial stand you are taking is in denying humans the traditional, colloquial understanding of knowledge--I will argue for this in greater detail below. Because of this it will become important not to bias definitions of knowledge, understanding, truth, and belief in favor of computational realities. We must take them at face value, as human realities, and see if they also apply to computers. I don't think you do a terrible job of this here, but I want to state it explicitly.
I just took the statement as given. Thinking about it, my experience as an Object Oriented software developer (constant use of 'Instance of Class' relations!), and the use of that statement in the popular example syllogism ('All men are mortal, Socrates is a man. Therefore Socrates is mortal') led me to assume it was a simple statement that 'Socrates' is a member of the set 'human being', which - if assumed true - gives you knowledge of the relation between them (class/instance).
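In code, that reading is simply this (an illustrative sketch, not any particular codebase):

```cpp
#include <iostream>
#include <string>
#include <utility>

// 'All men are mortal': a property guaranteed by the class itself.
class HumanBeing {
public:
    explicit HumanBeing(std::string name) : name_(std::move(name)) {}
    bool is_mortal() const { return true; }  // holds for every instance
    const std::string& name() const { return name_; }
private:
    std::string name_;
};

int main() {
    // 'Socrates is a man': an instance-of-class relation.
    HumanBeing socrates{"Socrates"};
    // 'Therefore Socrates is mortal': the instance carries the class property.
    std::cout << std::boolalpha << socrates.name() << " is mortal: "
              << socrates.is_mortal() << "\n";
}
```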
That's what I thought.
Very well, but now I assume you see that I was not making a merely tautological or analytic statement?
I happen to have degrees in both Computer Science and Philosophy, and it is interesting to study the contrasting logical systems in these two fields as well as in the sub-fields of philosophy. The contrast makes it hard for computer scientists, contemporary analytic philosophers, and even modern philosophers to grasp the nature of a classical Aristotelian syllogism. ...that's a long way of saying, "I don't blame you!"
But that's only true if I already know something about the person - in which case, your statement gives me no new information. If you said, 'Andy Fall is a human being', that gives me only the information that Andy Fall is a human being, from which I can deduce that Andy Fall has the properties and attributes common to all human beings - whether or not I know what human beings are.
"Socrates is a human being" is a predication. Whether you already know it or not depends on whether it is a premise or a conclusion in the syllogism. Earlier I spoke of
coming to knowledge of that predication, thus implying that it is a conclusion of a syllogism. If you did not previously know that Socrates is a human being, and a sound syllogism led you to that conclusion, then you would have new information. (It is simply untrue that everyone who knows Socrates knows that he is a human being or that everyone who knows what a human being is knows that Socrates was one)
Your statement about Andy Fall is correct, and it places the predication in the position of a premise. Since we were talking about gaining knowledge, I had spoken of it in the position of a conclusion.
Do you understand the rules you apply when you acquire knowledge? Do you even know what they are? Can you explain the difference between the way you categorized Trump as human the first time you saw him, and the way the AI you describe would do it?
In a certain sense, no. We create computers, we do not create human beings. I cannot understand myself in my entirety for the same reason that a computer cannot understand (describe) itself in its entirety. This is because the part that is actively understanding cannot simultaneously be passively understood. This is also why Epistemology is such a complicated field and why knowledge is so hard to define. And yet the human capacity for self-knowledge is perhaps one sign that he is fundamentally different from a computer.
Note that it is for this reason that I am limited to taking some known aspect of human knowledge or understanding and contrasting it with the fully-known resources of the computer. Computational acts can be exhaustively defined; human knowing cannot. There is an asymmetry in our definitional capabilities with respect to the two entities. This is why your requests for definitions do not strike me as altogether helpful (although provisional definitions can sometimes be useful).
In the 1990s a computer system was programmed to identify coloured blocks in a 'block world' environment, describe their spatial relationships, and answer questions about the blocks, e.g. when asked, 'where is the blue block?', it would respond, 'the blue block is on top of the red block', and so on. In that limited block-world context, it seems reasonable to say it knew the spatial relationships of the blocks; it could identify them, categorize their spatial relationships to one-another, and answer queries about those relationships.
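In skeleton form, the behaviour was something like this (a toy reconstruction for illustration, not the original system's code):

```cpp
#include <iostream>
#include <map>
#include <string>

int main() {
    // Toy block world: which block rests on which.
    std::map<std::string, std::string> on_top_of = {
        {"blue", "red"},   // the blue block is on top of the red block
        {"red", "table"},
    };

    std::string query = "blue";  // 'where is the blue block?'
    std::cout << "the " << query << " block is on top of the "
              << on_top_of[query] << " block\n";
}
```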
Consider a parrot. I say "blue," he says, "blue." I say, "red," he says, "red." Does this mean that the parrot knows what blue and red are?
The programmer designs optical hardware that parrots the human eye, calibrates the optical hardware to quantify the frequencies of light visible to the human eye, divides that quantified/interpreted input according to the average human color ranges for "blue," "red," etc., and tells the computer to record and store the Cartesian coordinate pixel information alongside the assigned color (etc.).
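Spelled out, the scheme is nothing more than this (a schematic sketch; the wavelength ranges are rough textbook values I've supplied, not anything from a real system):

```cpp
#include <iostream>
#include <map>
#include <string>
#include <utility>

// Divide measured wavelengths into average human color ranges, as
// described above. Ranges are approximate visible-light values in nm.
std::string assign_color(double wavelength_nm) {
    if (wavelength_nm >= 620 && wavelength_nm <= 750) return "red";
    if (wavelength_nm >= 450 && wavelength_nm <= 495) return "blue";
    return "other";
}

int main() {
    // Store the assigned color alongside the pixel's coordinates.
    std::map<std::pair<int, int>, std::string> pixels;
    pixels[{10, 20}] = assign_color(470.0);  // a 'blue' pixel
    pixels[{11, 20}] = assign_color(680.0);  // a 'red' pixel
    std::cout << "(10,20) -> " << pixels[{10, 20}] << "\n";
}
```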
It's just a parrot. A machine designed to parrot something that humans do. It's a broken clock that's right twice a day, where "twice a day" means "when it comes to red and blue blocks."
It could also answer 'if...then' questions about the spatial relationships of the blocks, and correctly follow commands to rearrange the blocks, so it seems reasonable to say it understood the spatial relationships of the blocks in block world. A very limited domain, but if that wasn't understanding, I'd like to hear why.
According to your own definition of understanding, because no generalization, conceptualization, or abstraction took place. Understanding requires a kind of static knowledge--what I earlier spoke of as speculative knowledge. It is not merely functional. Percy talks about this from the perspective of dyadic and triadic relations. Understanding is not merely dyadic, not exhausted by practical or functional considerations.
When an animal becomes aware of a causal relation or a computer is given the if-then logic that represents a causal relation the "knowledge" begins and ends with functional considerations. The infant 'knows' that if the bottle is put to his mouth he will be relieved of his distress, and therefore cries for the bottle. But this is merely stimulus-response behavior, not knowledge. Now at this point you have to tell me whether you think human knowledge involves anything more than stimulus-response relations, whether you think all knowledge is merely a function of manipulation. Are humans capable of knowledge that is not merely focused on practical manipulation? Knowledge for the sake of knowledge?
However, neural network AIs work at a level functionally abstracted from the instruction code. They are programmed to behave like layers of interconnected, interacting 'neurons'; given an exemplar goal, they can learn for themselves how to achieve it without any explicit programming for that goal. This kind of goal-directed learning is not, in principle, different from how biological brains learn to do tasks.
AIs of this kind don't need to be programmed with algorithms, and their learning can be generalized (applied to other, similar situations). The programmers and trainers (if any) don't understand how the AI performs its task, but it seems reasonable to say it has learned the skill and can apply that knowledge to achieve its goal. The application of knowledge for solving problems or achieving goals implies understanding - in the context of that application.
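In miniature, the principle can be shown with a single artificial neuron (a toy sketch; real networks have millions of neurons in layers, but the point is the same: nowhere below is the AND rule written into the code, yet the neuron ends up computing it from corrected examples):

```cpp
#include <iostream>

int main() {
    // A single artificial neuron learning AND from examples alone.
    double w0 = 0, w1 = 0, bias = 0, rate = 0.1;
    int x0[] = {0, 0, 1, 1}, x1[] = {0, 1, 0, 1}, target[] = {0, 0, 0, 1};

    for (int epoch = 0; epoch < 20; ++epoch) {
        for (int i = 0; i < 4; ++i) {
            int out = (w0 * x0[i] + w1 * x1[i] + bias > 0) ? 1 : 0;
            double err = target[i] - out;  // learn from the correction
            w0 += rate * err * x0[i];
            w1 += rate * err * x1[i];
            bias += rate * err;
        }
    }
    for (int i = 0; i < 4; ++i)
        std::cout << x0[i] << " AND " << x1[i] << " -> "
                  << ((w0 * x0[i] + w1 * x1[i] + bias > 0) ? 1 : 0) << "\n";
}
```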
Let me just say that in this, as in the "tabula rasa" language below, I don't believe you. Let's just take this sentence:
They are programmed to behave like layers of interconnected, interacting 'neurons'; given an exemplar goal, they can learn for themselves how to achieve it without any explicit programming for that goal.
What does this even mean? It strikes me as very vague, like hand-waving. The behavior of the AI derives from the programming and the input, and nothing else. You want to tell me that the behavior of the AI somehow transcends the code, but clearly that is not the case. If the AI produces unexpected behavior and fulfills an (arbitrary) goal set by a human being other than the programmer, then the fact that it is unexpected merely derives from the ignorance of the programmer. If he were a better programmer he would have seen the output ahead of time and it wouldn't have been unexpected. Yet given the fact that the goal is, at least in some remote respect, expected and programmed for, I don't think the case even holds up.
It's like saying you designed a Roomba for a square room, but since it performs well in the unexpected rectangular room "without any explicit programming for that goal" it is somehow special. The point is that the difference between the proximate goal and the remote goal--the explicit goal and the implicit goal--is qualitatively accidental. The programmer knows that the explicit goal will lead to the implicit goal. The only difference is that the implicit/remote goal/behavior is sometimes too ill-defined or complex to easily calculate or fully understand. It's like rolling lots of dice at the same time, but in such a way that the probabilities cater to a particular goal.
A striking recent example is a language-learning AI called ANNABELL (Artificial Neural Network with Adaptive Behavior Exploited for Language Learning), a system with 2.5 million artificial (virtual) neurons; a cognitive neural architecture of interconnected neurons which has no intrinsic language capability; no dictionary, alphabet, or syntactic rules at all. It starts as a tabula rasa (blank slate). It is then taught to converse in the language much as you'd teach a child (the project was based on a subset of the knowledge and abilities of 3-5 year-olds) by giving it simple declarative statements, asking it questions, and correcting its answers, all based on a training database (the curriculum) containing a number of datasets (subjects): 'People', 'Parts of the body', 'Categories', 'Communicative interactions' (mother/child), and 'Virtual environment' (rooms in a building). Over several thousand interactions, the system learned to recognise and answer questions about the datasets in simple language. For example, here are samples from the results database after training to play a word game, including a comparison of how a real 5-year-old performed given the same information:
[Image: example exchanges from the CHILDES database.]
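Schematically, the teach/ask/correct cycle is just a loop like the following. To be clear, this shows only the shape of the interaction, with a lookup table standing in for the network; ANNABELL itself learns associations across its neural architecture, not in a table:

```cpp
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// The shape of the teach/ask/correct protocol - NOT ANNABELL's code.
struct Learner {
    std::map<std::string, std::string> assoc;  // stands in for the network
    std::string answer(const std::string& q) {
        auto it = assoc.find(q);
        return it == assoc.end() ? "?" : it->second;
    }
    void correct(const std::string& q, const std::string& a) { assoc[q] = a; }
};

int main() {
    Learner child;  // starts as a blank slate
    std::vector<std::pair<std::string, std::string>> curriculum = {
        {"what is the hand part of", "the body"},
        {"where is the kitchen", "in the building"},
    };
    for (auto& [q, a] : curriculum)
        if (child.answer(q) != a) child.correct(q, a);  // ask, then correct

    std::cout << "Q: what is the hand part of?\nA: "
              << child.answer("what is the hand part of") << "\n";
}
```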
I think it would be reasonable to say that the system learnt the rudiments of the language, and had a basic understanding of the information it had been given in that language, within the domain covered by the curriculum.
One implication of a tabula-rasa language learning system like this is that it should be equally able to learn the rudiments of any language. It would be interesting to see if it could learn more than one language - although with such limited resources, it seems unlikely...
It might surprise people to know that you don't need a mainframe or a supercomputer to play with this system; it can be downloaded from the GitHub repository. It's written in C++, and can be compiled and run on any reasonably competent domestic PC.
It's interesting to me that AI yearns to do things that sub-human animals do routinely. AI sits well below complex animal life, yet is believed by some to rival human knowledge and understanding; all the while, for millennia, philosophers have pointed out the qualitative differences between humans and animals, thus a fortiori accounting for the differences between humans and AI. Does that strike you as strange? If you agree that AI is less complex than complex life, then wouldn't it be easier to argue that apes and humans are on the same level? Indeed I would find such an approach much more plausible, especially given the modern mechanistic fallacies about organic life.
There is no base code; the human system evolved from interactions between cells. The representational models are built by experience - the human system learns and is trained by its sensory inputs carrying data from its body, the outside world, and other human systems.
But in your account you ascribed the construction of the representational models to the human being. Thus even in your account agency creeps in.
Computers, understood properly, are purely passive in the sense that they are not self-moving (as my billiard illustration above explains). In order to equalize humans and computers, you must claim that humans too are purely passive, determined, and totally moved by antecedent conditions. This is an undeniable way in which your view represents a demoting of the human being rather than a promoting of the computer, for the idea that a human is self-moving, is an agent, is common belief (I avoid the word "knowledge" only on your behalf).
If Frumious is right, then humans are not truly agents. True or false? (This is an important and bizarre fact about your position that I will return to as a premise for other arguments)
No; I don't know what you mean by knowledge 'in the traditional sense', the word is used colloquially in many different ways. Both humans and computers can have knowledge.
For example, you seem to be denying speculative knowledge, human agency, and qualitatively different "exemplar goals" from the initial evolution-driven goal. I could name more strange implications, but three should suffice for now.
Kind of. We attempt to achieve our goals by the application of knowledge; this implies understanding.
Okay.
Interpreting is converting data (which may be information in another system) to a form or format that has meaning to a target system.
Recall that you're arguing against my idea that knowledge has intrinsic meaning whereas bits stored in a computer do not.
We are comparing the human interpreter to the computational interpreter, where the input to the human is the entire physical world and the input to the computer is that limited data it receives. Now the bits stored in a computer that represent its "knowledge" do not have any intrinsic meaning apart from the meaning the human bestows on them and transfers, through programming, to the computer.
Your claim is that the basic interpreter bestowed on the computer by the human being is not qualitatively different from the basic interpreter bestowed on the human being by evolution. If evolution exhaustively explains the human being, then I believe you would be correct. Note too that in this case no intrinsic meaning exists anywhere, only artificial and imposed meaning.
The question we must ask ourselves is whether humans can do things that qualitatively transcend computers, animals, and that trajectory of acts which evolution renders possible--whether humans are capable of qualitatively different acts than evolution is able to account for, much more than a Roomba in a rectangular room. So far I have offered a few candidates:
- Agency. Humans can act, and decide whether or not to act. (Free will, ATDO, etc.)
- Humans are capable of speculative knowledge, knowledge for the sake of knowledge, truth apart from manipulation.
Firstly, this is an analytic truth, i.e. it is true by definition; logically it is tautological, like mathematical statements. To know what the terms mean is to understand them (to have usable knowledge of their associations in their larger context - formal social relationships). There's nothing speculative about that.
I think you're essentially incorrect here, but it may be easier if I avoid a truth that is so definitional. Let's take another of Kant's so-called "analytic" truths: all bodies are extended.
For the human, "All bodies are extended" has meaning even apart from practical applications. We know what each term means and that the statement is true. We can ponder it, come to see it more clearly, examine it, etc. This is not so for the computer.
Walker Percy wrote that before systems like ANNABELL (above) were developed.
And it's as true now as it was then. The ancient philosophers made similar arguments 2500 years ago and they have only sharpened with time.
Human understanding is generally far broader, richer (multi-contextual), and often less precise than computer understanding, but human brains are generalist learning systems with 80-billion-odd neural processors that take decades to achieve peak knowledge & understanding. Computers are, in comparison, trivially simple single-domain systems of relatively ephemeral duration. But when given enough information and experience, computer learning systems can rapidly outpace human capabilities (examples already supplied). Ask Lee Sedol if AlphaGo understands how to play Go.
Multiplying function will never result in meaning. Multiplying two-dimensional lines will never result in a three-dimensional model.
Triangularity is a geometric abstraction; computers can easily manipulate and apply the concept. What, specifically, do you think a computer can't do with the concept that humans can? How would you test for it?
They can approximate it, but they don't understand it. Maybe a circle is a better illustration.
What is a circle? A perfect circle? It is an infinite number of infinitesimal points equidistant from a center point. Do we really know what that is? It doesn't exist in the physical world; it is impossible to see one; we can only approximate a perfect circle in material realities.
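In the usual set notation (merely restating the above):

$$ C(c, r) = \{\, p \in \mathbb{R}^2 \;:\; \lVert p - c \rVert = r \,\} $$

Every one of those uncountably many points lies at exactly distance r from the center c; any material drawing can only approximate the set.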
To say that we truly understand something that can only be approximated by material realities tells us a few things. It tells us that computers can't 'understand' perfect circles, and it tells us that we transcend the material realm. Presumably you would disagree that we can really understand a perfect circle?
I don't see why this follows - a learning system that is more capable than us will outperform us; take AlphaGo, a Go learning system that is more capable than one of the best human players. In what sense does it only have an approximation of the knowledge of playing Go that the human player has? It demonstrably has a more complete knowledge of Go. The breadth of its understanding is restricted to the game on the board (played games, positions, and moves), which is far narrower than human understanding of Go, but sufficient to play a better game (e.g. you don't need to know the history of Go to play well). Given some of its moves, one could argue that its depth of understanding of playing the game is greater than most human players', although it does have its weaknesses.
Let me rephrase. Suppose we subdivide humans and computers into starting state/programming and computational power. A computer will only ever approximate the starting state or programming of the human, because it is made by a human programmer. Now perhaps a computer could surpass a human if, even in spite of the less-developed programming, the computational power were able to compensate for this lack.
The real difference between us on this point is in whether the human starting point is an approximation or a kind of identity--whether we approximate truth or really know truth. We agree that the computer can only approximate truth. If the human really knows truth then the computer will only ever have an approximation of the knowledge and truth that we have (as I said before). Granted it could outpace us in certain areas, just like a car can outrun a man, but never in the domain of First Philosophy, to which it simply has no access.
As for truth, as I already mentioned, synthetic truths are necessarily uncertain, in that we can't be sure they're true. I don't see how it's particularly relevant; arguably, AI perceptions and interpretations are likely to be more reliable than human ones, so likely to correspond more closely to reality.
What I say above should speak to this. It is worthwhile to note that Kant's errors become particularly pronounced in this area. Kant's epistemological presuppositions already entail the idea that humans are glorified computers and I roundly reject them.
Again, what is 'real' knowledge? How is it different from unqualified knowledge?
You are using knowledge in the sense of an approximation, such that a computer has knowledge of circles insofar as it can approximate one. Real knowledge of a circle entails knowledge of a perfect circle, an understanding that necessarily transcends the material world.
We could teach a monkey or a computer to draw a circle. They would come to associate our command with their action of drawing. Even if we said "Circle!" a billion times and they drew a billion circles, they would not come to understand the definition of a circle, despite their practical ability to produce one. They are merely the material instrument of our intellect. The definition comes from us, not from them. I can use my hand, a monkey, or a computer to draw a circle. There is very little difference between the three. None of them has any intrinsic ability or propensity to draw a circle apart from my intellect informing them.
Your whole case rests on the claim, "It drew a circle, therefore it understands what a circle is." But you're ignoring the obvious fact that we are the ones who produced the circle; the printer merely allocated the ink according to our specification. It's really like saying, "My hand drew a circle, therefore my hand knows what a circle is."
I don't see the problem; truth is an abstraction - a correspondence to the facts or reality; certainty about the facts and reality is elusive - e.g. the problem of measurement, the problem of induction. Science explicitly acknowledges this.
Apparently I took this quote in the wrong sense earlier... Now I understand how you were using these words.