I'm happy to abbreviate the discussion; syllogisms are OK as something to discuss, but the premises are often disputed at length, so not necessarily shorter.
Great. I wanted to provide a full response alongside premises to start to move in that direction, but time prevented this. Here is a normal reply:
- Have you never discovered or realised that you've learnt something without attending to it at the time?
In a word, no.
I'm not saying anything about what humans have or don't have, or what they can do or can't do. I'm simply suggesting a simple set of basic definitions that are suitably contextual, and reasonably close to the dictionary versions, that we can match with any system, human or otherwise, to determine whether it has that basic property. If a system has more than the basic features, we can expand the definition and add a qualifier (e.g. 'self-knowledge').
Your arguments imply a great deal about humans, but my comment was not meant to directly address your definitions.
The definitions should be able to be compared to any system, organic or inorganic.
If you're right then they will apply to all such systems. If you're not they won't. To shape them that way from the outset is a form of begging the question. (But again, I am not criticizing your definitions, only providing structure to the methodology)
Precisely my point - that's the only information it gives you. Of itself, it tells you nothing about human beings (let alone their 'essence'), other than that Socrates is one. You may be able to tell something about humans from what you already know about Socrates, and vice-versa, but that's all.
Let me say a little bit, for I don't want this to turn into a logic class.
When the proposition is a conclusion--I explained why it is a conclusion in my last--the syllogism provides insight into the conclusion via the middle term. Thus the reason that one concludes such a proposition will inevitably provide insight into the essence of the attribution in the conclusion:
- All humans are mortal
- Socrates is a human
- Therefore Socrates is mortal
According to this argument Socrates is mortal because of his humanity. A sound syllogism always tells you something about the terms of the conclusion. We multiply these sorts of syllogisms ad nauseam in order to understand different parts of reality.
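To make the role of the middle term explicit, here is a minimal formal sketch (in Lean-style notation; the names and setup are mine, purely illustrative):

```lean
-- Illustrative formalisation only; the predicates and constants are hypothetical.
axiom Individual : Type
axiom Human  : Individual → Prop
axiom Mortal : Individual → Prop
axiom socrates : Individual

-- Major premise: all humans are mortal
axiom all_humans_mortal : ∀ x, Human x → Mortal x
-- Minor premise: Socrates is a human
axiom socrates_is_human : Human socrates

-- Conclusion, reached only through the middle term 'Human'
theorem socrates_is_mortal : Mortal socrates :=
  all_humans_mortal socrates socrates_is_human
```

The proof term shows the point in miniature: the conclusion about Socrates is licensed entirely by his humanity.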
So if "Applying rules without understanding them in order to come to a "conclusion" is not understanding" for a computer, and you admit to applying rules without understanding them, then - by your own logic - you are not understanding either. This kind of confusion is why I think a clear definition of understanding is necessary.
Oh, I misunderstood your question. Applying rules without understanding them in order to come to a "conclusion" is not understanding. I thought you were asking about epistemology, about understanding my understanding. Such recursive understanding grows weaker at each level, but when the human understands at the first level he is not applying unknown rules as the computer is. Do you think that applying rules without understanding them in order to come to a "conclusion" is understanding?
Yet the human population continues to grow...
Generation and creation are very different things. We do not make babies in the same way we make computers.
The computational acts of anything more than a trivially simple artificial neural network can no more be exhaustively defined than human knowing can - not least because the substrates are architecturally similar. This is one reason it can be hard to understand why ANNs do what they do.
Although I disagree, you missed the point and did not address the argument behind it. You are characteristically claiming that the difference is one of degree, but I gave an argument for why it is not. A thing cannot understand itself in the way it can understand other things, especially things that it created. "This is because the part that is actively understanding cannot simultaneously be passively understood."
Alex the parrot knew and understood that, and a whole lot more.
How wonderful.
My point wasn't about the colour of the blocks, but the knowledge and understanding of their spatial relationships, as I thought I'd made clear.
The same argument applies.
On the contrary, deriving the rules that apply from a number of example situations, and then correctly applying them to novel situations, involves abstraction (abstracting the rules from the examples), conceptualization (the rules express the concepts), and generalization (applying the rules to novel situations). This is a clear example of understanding.
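For concreteness, here is a toy sketch of that pattern (my own invented example and data, using scikit-learn): a rule is abstracted from a handful of example situations and then applied to situations never seen before.

```python
# Toy illustration only: abstract a rule from examples, apply it to novel cases.
from sklearn.tree import DecisionTreeClassifier, export_text

# Example situations: [weight_kg, has_wings] -> can_fly (1) or not (0)
examples = [[0.02, 1], [1.5, 1], [9000.0, 0], [70.0, 0], [0.3, 1]]
outcomes = [1, 1, 0, 0, 1]

learner = DecisionTreeClassifier().fit(examples, outcomes)

# The abstracted rule, expressed over the concepts (features) it was given
print(export_text(learner, feature_names=["weight_kg", "has_wings"]))

# Generalisation: applying the derived rule to situations never seen before
novel = [[0.05, 1], [500.0, 0]]
print(learner.predict(novel))
```

Whether that counts as understanding is of course the question at issue; the sketch only shows what "deriving and applying a rule" looks like mechanically.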
Where did such "derivation" take place?
Certainly (assuming that by 'capable of knowledge' you mean 'capable of acquiring knowledge'). Knowledge for the sake of knowledge is fine; a goal need not involve practical manipulation, nor a defined end point.
Okay. And how could a computer ever seek knowledge for the sake of knowledge (given that its commands always come externally, from the programmer)?
If I am reading about black holes, and someone asks me why, I can just answer, "Because I want to know; I'm interested." It seems that all the computer can say is, "Because my programming caused me to do so; I seek knowledge always as a means to obedience to my programming" (or something like that). Presumably you would say that the human does not truly seek knowledge for the sake of knowledge, but for the remote sake of survival or some other thing; he only thinks he seeks knowledge in itself.
Yes and no; that conflates programmed behaviours and learnt behaviours (functioning at a higher level of abstraction), which is a crucial distinction. I don't think you'll find many AIs with hard-coded behaviours these days. Most are based on artificial neural networks and trained to behave in the desired way. So the ANNABELL system was programmed to be a network system that could learn; it had no program code or data relating to language. It was trained to learn a language by linguistic interaction alone - which, incidentally, suggests that Chomsky's idea of an innate grammar is not required for language acquisition.
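To illustrate the distinction (a toy example of my own in plain numpy, nothing to do with ANNABELL): the desired behaviour - XOR - is written nowhere in the code; only the network structure and a learning rule are, and the behaviour is acquired through repeated feedback.

```python
# Toy sketch: the target behaviour is learnt from feedback, not hard-coded.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # target behaviour: XOR

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))  # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                    # "training": repeated exposure and feedback
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # error signal, not an instruction
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

# Usually converges to approximately [[0], [1], [1], [0]]
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```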
You're begging the question that there is some qualitative difference between programmed behaviors and "learned" behaviors. Like I said, any behavior is a function of the programming and the input, and nothing else. It has nothing to do with whether the behavior is "hard-coded." A Roomba navigating a rectangular room hasn't really learned anything. Otherwise, demonstrate a behavior that transcends the programming and the input.
That assumes the programmer knows what his algorithms will be used for; for example, evolutionary algorithms often produce unexpected but highly effective results, without the programmer having any way to predict what they might be.
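A toy sketch of the idea (my own example; a real evolutionary algorithm scores candidate behaviour against a problem rather than against a known answer, which is precisely why its outputs can surprise the programmer):

```python
# Toy illustration: the programmer writes only a scoring rule plus a
# mutate-and-select loop; the particular solution emerges from the search.
import random

random.seed(1)
TARGET = "methinks it is like a weasel"   # stands in for "scores highly"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    parents = population[:20]                                    # selection
    population = [mutate(random.choice(parents)) for _ in range(200)]  # variation

best = max(population, key=fitness)
print(generation, best, fitness(best))
```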
If he knows the domain of inputs then he knows what it can be used for, not to mention what it will be used for.
No, should it? AI at present is domain-specific, and those domains are narrow. Within those domains AI can rival or exceed human knowledge and understanding, but I don't think anyone's claiming more than that.
Like I said, cars have been able to outrun men for over a century, yet no one made the mistake of thinking that humans are glorified cars.
I don't think that's a coherent question - if 'complex life' just means life more complex than AI, then AI is less complex than it by definition; how the comparison goes otherwise depends on the AI and the domain in question. And humans are apes - specifically, Great Apes (Hominidae).
It is a coherent question precisely because "complex life" is a common term signifying higher animal life.
The human being learns from experience and constructs representational models using a biological neural network. An artificial neural network can learn from experience and construct representational models in a broadly similar way (though with orders of magnitude less processing complexity & sophistication). Agency (acting to some effect, i.e. interacting with the environment) is obviously necessary; if, by 'agency', you mean something different, explain what you mean and why you think it is necessary.
Agency creeps in because every computer is made by a human agent.
That was plain assertion. Computers respond to their inputs, and humans to theirs (perceptions and sensations).
And computers are purely passive, are they not? Purely determined by antecedent conditions?
Who said anything about 'equalizing' humans and computers? Humans are orders of magnitude more complex than any current computer system.
And yet your thesis is that it is only a matter of time before computers catch up.
Humans are not passive; they actively interact with their environment. But yes, I think those interactions are determined by antecedent conditions (there may be a smidgen of randomness, but insignificant). When you make a decision or a choice, or take an action, do you base it on anything? Do you have a reason for it?
Good, I just wanted you to state that explicitly.
I don't see it as a demotion at all; humans are the result of over 2.5 billion years of evolution, the most awesomely complex and sophisticated system yet discovered. It's just a shame it's still so unreliable and prone to magical thinking...
Most would say the idea that a human isn't a self-moving agent is a demotion.
Depends what you mean by 'truly agents'. Care to give a coherent definition or explanation?
I already did and you already admitted that humans aren't agents. (One essential condition would be that an agent is not fully determined by antecedent conditions)
I'm not denying 'speculative knowledge', or human agency (if you mean acting to some effect), and I have no problem with goals that appear entirely divorced from the evolutionary goal of reproduction - it's a feature of complex systems that you can get emergent, indirect, or unexpected behaviours.
Keyword: appear. No?
Um, no again. Input to the human is restricted to the limited data it receives through its senses - a wider variety than most computers, but in many cases considerably less in quantity (consider the LHC computer and 'big data' processors).
How does this relate whatsoever to my point? You like to nitpick, don't you?
True for hard-coded computer systems, not so much for artificial neural network learning systems. For both humans and ANN learning systems, the patterns of data input (for humans, data from the senses, passing up the afferent nerves to the brain) have no intrinsic meaning apart from that imposed by the processing areas (first stage sensory processing areas in humans), that have been trained (not programmed), by experience (interaction with the environment), to interpret them in useful ways.
I would just call that something like "second-level programming."
Data only has meaning to some system that can interpret it (as information).
Okay.
Neither agency (acting and deciding to act or not) nor speculative knowledge (knowledge for the sake of knowledge) necessarily transcends the possibilities of evolutionary or AI systems; and determinism doesn't mean you can't decide to act or not, it simply means that what you do (or don't) decide is determined. I don't know what you mean by 'truth apart from manipulation' - truth is correspondence with reality, and (apart from analytic truths) necessarily uncertain.
I asked a relevant question above about speculative knowledge that applies here as well.
Free will and ATDO are more complicated; if you like, I can address them in a separate post, to stop this one becoming even longer...
We can table it for now.
It is true by definition. We know this if we have been told it or learnt about it (e.g. from early modern philosophy or substance dualism), and understand it if we know about objects, and the world. But I don't see why one couldn't, in principle, train an ANN system to understand it in simple terms (e.g. to explain what it means); I also don't see why one would.
Like humans, computers can only do such things if they have the cognitive capability and training. But if you could build an ANN system with the structure and complexity of a human brain, and train it as thoroughly, I would expect it to be able to do so - although there's really no good reason to do so...
Are you granting that a human could ponder such a proposition and come to see it more clearly and fully?
If you read the sample interactions (link), you'll see the ANNABELL system behaving as one interpreter and the teacher as another, in conversation about things and events in the world (the third pole), comparing favourably to a 5-year-old and its mother on the same subjects.
I did read it, but my friend also has a parrot. He's never confused it with a human being.
Geometrical shapes are abstractions that can be formulated mathematically. Computers can handle that kind of abstraction with ease - what they've had difficulty with is applying it to material approximations, though this has much improved recently.
A computer that holds the equation of a circle understands a circle about as much as a blackboard that holds the equation. To the computer it is just a set of numbers and operations that can be used to produce material approximations.
We can understand geometrical shapes in various ways - as can computers. You'll have to explain what you mean by 'truly understand' and 'really understand'; I've suggested a basic definition - do you accept it, or would you like to supply one of your own?
A relevant aspect of such understanding would be the ability to reflect or ponder over it. I can imagine circles, think about the definition of a circle, consider mathematical representations, etc.
I don't agree that a computer can only approximate truth because I don't know what you mean by that (example?). Truths about the world (non-analytic truths) are necessarily uncertain, so in that sense, neither computer nor human can 'really know truth'.
Truths about the world are sufficient.
Nope - I have already given my definition of knowledge, which doesn't involve approximation. A perfect circle is a mathematical abstraction; computers can handle such abstractions easily.
What this comes down to is the computer's capacity for mathematics as opposed to manipulation of numbers.
1. Does the definition of a circle transcend material reality?
2. Can a purely material computer understand something which transcends material reality?
Monkeys are not computers. Computer systems can be made that can infer or idealise 'pure' geometric forms from multiple approximations.
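By way of illustration (a toy example of my own, not a claim about any particular system): a least-squares fit that recovers the ideal circle underlying a set of noisy, approximate points.

```python
# Toy sketch: infer the idealised circle behind noisy "material approximations"
# using an algebraic (Kasa-style) least-squares fit.
import numpy as np

rng = np.random.default_rng(0)
true_cx, true_cy, true_r = 2.0, -1.0, 5.0

# Noisy points scattered roughly on the true circle
theta = rng.uniform(0, 2 * np.pi, 200)
x = true_cx + true_r * np.cos(theta) + rng.normal(0, 0.1, 200)
y = true_cy + true_r * np.sin(theta) + rng.normal(0, 0.1, 200)

# Solve x^2 + y^2 + D*x + E*y + F = 0 for D, E, F in the least-squares sense
A = np.column_stack([x, y, np.ones_like(x)])
b = -(x**2 + y**2)
(D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)

cx, cy = -D / 2, -E / 2
r = np.sqrt(cx**2 + cy**2 - F)
print(cx, cy, r)   # recovers roughly (2.0, -1.0, 5.0)
```

The fitted centre and radius are the idealisation; none of the input points lies exactly on the circle they define.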
No, they can't.