Freewill?

FrumiousBandersnatch

Well-Known Member
Exactly! Are you done learning in this life?
What has that to do with computers? Oh, and computers can learn from experience, exploration, experiment, trial and error.
It requires intelligence to dig a hole.
Maybe you think all creatures capable of making holes are intelligent (woodworm?), but is erosion intelligent? Lightning?
The question is: where does that intelligence come from?
The question is, what definition of intelligence are you using? o_O
Still takes intelligence for termites to build mounds, where did that intelligence come from?
Evolution.
My human logic tells me it's impossible for intelligence to come from something unintelligent. Should I defy my human logic just because you say my logic is faulty?
Ideally, I hope you'd check the evidence for yourself and do some critical thinking. Untutored human logic and reasoning are notoriously unreliable - that's partly why the scientific method was developed.
Where are you getting your intelligence from? Humans? Where did humans get their intelligence from?
Evolution.
Not if a higher intelligence didn't design them first to be capable of gaining knowledge for themselves.
Obviously; you either wait for 2 billion+ years of uncertain evolution, or use an evolved intelligence.
 

Radrook

Well-Known Member
Some folks are fine with laws and regulations which governments issue and uphold by means of force. But as soon as they are told that a creator has laws he expects them to abide by, and that he will use force to uphold them, those same folks claim they are being deprived of their freedom.
 

Chriliman

Everything I need to be joyful is right here
Maybe you think all creatures capable of making holes are intelligent (woodworm?), but is erosion intelligent? Lightning?

I'm suggesting that you can't separate intelligence from life, no matter how primitive that life is. If it's alive, it processes information in some way, which non-living matter cannot do. My brain processes information and I use reasoning to understand my reality. Obviously, not all life is capable of this level of intelligence, and it's possible that there is an even higher level of intelligence that neither of us is capable of comprehending, due to the limits of our brains.
 

FrumiousBandersnatch

Well-Known Member
I'm suggesting that you can't separate intelligence from life, no matter how primitive that life is.
That depends on your definition of 'intelligence'. If you're suggesting even the simplest life is intelligent, without qualification, it would seem to devalue the word to the point of uselessness.

What is your definition of intelligence?
If it's alive, it processes information in some way, which non-living matter cannot do.
Computers do. Are you equating information processing with intelligence? If so, you're implying that not just computers, but almost all modern technology, especially digital technology - from thermostats to calculators & watches to TVs and washing machines, is intelligent... I'd expect some qualifications or categories to go with that.
My brain processes information and I use reasoning to understand my reality, obviously not all life is capable of this level of intelligence and it's possible that there is an even higher level of intelligence than neither you or I are capable of comprehending, due to the limits of our brains.
It's possible, but there's no evidence for it. The observable world behaves exactly as we'd expect if such a thing didn't exist, so as yet, there's no reason to think that such a thing either exists or exercises any influence in our locale.
 

zippy2006

Dragonsworn
I take it you'd deny that when a computer is asked whether it has information about an object in its data store, looks up the index to check, and finds that object, that it knows it has the requested information? If so, we need to agree on functional definitions of our terms; what functional definition or description are you using for 'knowledge'?

As I already noted, a necessary condition for knowledge is intentional correspondence to reality. Analogically, we could say that the working clock has knowledge compared to the broken clock that is "right" twice a day (even though neither has knowledge in the true sense). Even when the broken clock is right, it is only by accident, and knowledge is not the result of an accident but rather an intention to know.

When I throw a dart at an electronic dartboard, hit the double bullseye, and receive 50 points, the computer in the dartboard does not have knowledge that the 50 points have been earned; it merely adds 50 points based on the input stimulation and the program. It can only be said to have knowledge in a way similar to the broken clock.
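To make the point concrete, here is a minimal sketch of what the dartboard's program amounts to (all names and values are invented for illustration): a fixed stimulus-response mapping, nothing more.

```python
# Hypothetical sketch of the dartboard's scoring logic: a fixed
# stimulus-response mapping. Nothing in it represents what a point,
# a bullseye, or 'earning' is.
SEGMENT_SCORES = {"single_bull": 25, "double_bull": 50}

def on_hit(segment: str, total: int) -> int:
    # Add whatever value is wired to this segment; nothing more is 'known'.
    return total + SEGMENT_SCORES.get(segment, 0)

total = on_hit("double_bull", 0)  # -> 50, by lookup, not by understanding
```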

I have a hunch that we will need to focus the scope of this conversation if it isn't going to get out of hand, and "set theory" is a viable candidate:

I'll grant you that a calculator has no need to understand the rules it applies, but "Socrates is a human being" is a simple categorization, set theory - the only thing it tells you about human beings is that Socrates is one; hardly the 'essence' of human beings, particularly if, as for many people, Socrates is just an odd name.

First, I wouldn't call it set theory, and second, I think you overlook a great deal in the nature of predication.

...the only thing it tells you about human beings is that Socrates is one; hardly the 'essence' of human beings...

It only tells you that Socrates is one of what? A human being? If you don't know what a human being is, what the essence of a human being is, then how could you say that Socrates is a human being? It's not just a "simple categorization." It is a truth. It is a correspondence of the mind to reality. In order to know that it is true, you must know both Socrates and what a human being is, and connect the two. If we say "Socrates is a human being" and are performing a mere categorization without knowledge of either the subject or the predicate, then we do not have knowledge and it is an empty statement.

...particularly if, as for many people, Socrates is just an odd name.

But then you've missed the point. Socrates is just the proverbial example. Substitute "Donald Trump," "Michael Jordan," or your mother if you like. The point is that we really know something about these people.

If you think it implies understanding, then computers have had understanding since the 1980s; I suggest that understanding is more than applied set theory.

Why? Because a typist types into a computer "Donald Trump is human?" Does that give the computer understanding of the truth about Trump? Or because a programmer commands the computer to associate Trump with the category of humanity? Does that prove that the computer understands that Trump is human?

Or perhaps an AI analyses a live feed of Trump and categorizes him as human based on the input data it receives? Does that prove that the AI understands that Trump is human? It is just a more sophisticated example of the first two, and any "knowledge" is still the programmer's, not the AI's. Applying rules without understanding them in order to come to a "conclusion" is not understanding.

Which echoes humans' manipulation of the representational models they have learned to construct to bear some resemblance to reality, in the ways they have learned to manipulate them...

But who wrote the base code in the case of the human? Where does the representational model come from?

I still don't see the problem. It isn't all-or-nothing, you don't need absolute certainty before you can proceed; your knowledge just has to be good enough to do the job. For example, Newtonian Mechanics is wrong - but it's a good enough approximation for all everyday human scale applications, and good enough for NASA to use it to send probes on tours of the planets. Similarly, evolutionary success means traits that are good enough to ensure production of viable offspring down the generations.

Characteristically, you are denying humans knowledge in the traditional sense, rather than ascribing it to computers. I had forgotten this detail, as it has been some time since I've visited this topic.

What you're saying is something like this: "Knowledge is that thing that allows us to achieve our practical goals. Therefore if we are achieving our practical goals, then knowledge must be present." Is that accurate? How are you defining knowledge?

I think defining knowledge in terms of truth (ignoring analytic and tautological truth) will lead you into murky waters, not least because of the distinction between what you think is true, and the actual truth (i.e. correspondence to reality), which is uncertain. I suggest knowledge would be better defined in terms of possession of information (i.e. interpreted data).

What is interpreted data? Information is not intrinsically meaningful.

You're comparing different levels of abstraction. Knowledge is information stored in the brain as associative maps of synaptic connections; bits stored in a computer can have meaning if they can be interpreted on a representational level, e.g. as symbols, maps, or associations. If you add an intermediary layer of abstraction, knowledge can be stored in a computer as emulations of associative maps of synaptic connections (artificial neural networks), similar to those in the brain. In terms of information processing, there is no significant difference.

What does it mean to interpret something?

One aspect of knowledge is that it is speculative. Or rather, we must consider the human's capacity for speculative knowledge. That is, knowledge that is merely symbolic or interpretive, and not practical. This kind of knowledge is meaningful even apart from application.

For example, in language there is the sign and the meaning. Consider two signs: "unmarried man" and "bachelor." Since these signs signify the same thing, there are for the human three realities present: two signs and one meaning signified. For the human "An unmarried man is a bachelor" has meaning even apart from practical applications. We know what each term means and that they signify the same reality.

The computer is simply incapable of this act of speculative reason. It can associate the two terms, it can "interpret" them in various ways to point at each other, but it cannot actually accomplish the abstract act that the human being performs. The meaning, the triadic relation, as Walker Percy says, is proper to humans.

If you define intrinsic meaning as something relevant only to humans, and knowledge as that which has intrinsic meaning, you'll inevitably end up thinking only humans can have knowledge.

It is interesting that human knowledge and language led many philosophers to posit an immaterial aspect of our being. For example, the abstract concept of triangularity, known apart from every particular material triangle or representation, is not containable in material realities. A human and only a human can actually know what a triangle is. A computer can spit out definitions, it can spit out a million different triangles by applying drawing procedures, but it can never possess the abstract concept of triangularity that exists apart from any material definition or instantiation.

I see no reason, in principle, why we couldn't eventually make a system (as an android or robot) that would be independent, learn from experience, and have basic drives related to its own long term function and development. The design goal (independence, development) would necessarily be of our devising, but such a system could generate and set its own intermediate goals, and go off to do its thing. IOW, if we give a system the drives and goals evolution has given us, there's no reason it couldn't do most of what we do (although not always in the same way, e.g. reproduction). Whether we'd want to do that is another matter, but space exploration might be one reason.

I agree with this. But the computer would have only an approximation of the knowledge and truth that we have (due to the design goal coming from us).

The crucial point of disagreement between us seems to be whether humans themselves have real knowledge and access to truth or only an approximation.

I don't see that as fundamentally different from what humans do given the structure and drives that resulted from evolution. In our case, the physical consequences are even more impressive.

Okay, fine. It is late here and I will have to think about an efficient answer to this, although the points above relate.

I disagree. Simple knowledge is done already; analytic truths are done already; synthetic truths will always be uncertain for both computers and us. If we want computers with the cognitive capacity to exceed what we can do, we can build them. Ever since the very first computers, pundits have been telling us what they could never do, and one by one those 'impossibles' have been achieved. We now have real-time text, voice, and image recognition; real-time text and speech language translation; laptop chess software that can beat a chess grand master; a Go system that can beat the best Go masters; a computer Jeopardy champion; self-driving cars; systems that can learn and teach themselves, etc. Each an achievement previously thought impossible. These are all limited-domain systems, but systems are becoming more flexible - the Jeopardy computer, IBM's Watson, is being redirected for medical diagnosis, legal advice, and cookery advice. If generalised, multi-domain systems are required, we can build them.

Then the pundits are silly, for it is clear that each of those operations can be achieved by dyadic logic machines such as computers.

Lovely post--thanks for your thoughts. :)
 

FrumiousBandersnatch

Well-Known Member
Warning - long post!

...knowledge is not the result of an accident but rather an intention to know.
So knowledge (that which is learned through experience or education) is only possible if there is intent? Does this mean that what is learnt from unintentional (e.g. accidental) experiences is not knowledge? That doesn't sound right; it seems to me that a great deal of knowledge about the world is acquired without intent (especially knowledge of the dangers around us).
... the computer in the dartboard does not have knowledge that the 50 points has been earned, it merely adds 50 points based on the input stimulation and the program. It can only be said to have knowledge in a way similar to the broken clock.
You implied that the broken clock doesn't have knowledge, despite being 'right' twice a day; that the working clock has knowledge compared to the broken clock, but that neither has 'knowledge in the true sense' (whatever that means). What you really mean by knowledge is less clear to me now than before you started explaining it...

Here's a very simple set of functional definitions that I find useful in this context (this is all personal opinion, provisional and open to discussion; a toy sketch follows the list):

'Data' - the raw output of a system or measuring device (numeric or analogue), which has no meaning of itself; input to a processor or interpreter.
'Information' - interpreted data.
'Interpretation' - conversion to a form or format that has meaning to a target system.
'Meaning' - the set of associations that establish the context of the data with regard to the source system.
'Knowledge' - retained information, available on demand (e.g. a library is a repository of knowledge; I know my phone number).
'Understanding' - generalization, conceptualization, or abstraction of knowledge in relevant contexts; e.g. knowledge of the algorithms, behaviours, or heuristics that gave rise to the data, and how to apply them and/or express them in alternative ways (such as explanation).
'Truth' - the correspondence of knowledge or information to reality.
'Belief' - a certainty that one's knowledge is true, especially when unsupported by evidence.
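As a toy illustration of how these definitions stack (the thermometer, its data format, and all names are made up for the example, not any real device's API):

```python
# Toy model of the chain above: data -> interpretation -> information
# -> knowledge. The 'thermometer' and its hex format are invented.
raw_data = "0x0A2F"                     # 'Data': raw output, meaningless alone

def interpret(data: str) -> float:
    # 'Interpretation': convert to a form meaningful to the target system
    # (here, hundredths of a degree Celsius).
    return int(data, 16) / 100.0

information = interpret(raw_data)       # 'Information': interpreted data

knowledge = {}                          # 'Knowledge': retained information,
knowledge["room_temp_C"] = information  # available on demand
print(knowledge["room_temp_C"])         # -> 26.07
```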
...If you don't know what a human being is, what the essence of a human being is, then how could you say that Socrates is a human being? It's not just a "simple categorization." It is a truth. It is a correspondence of the mind to reality. In order to know that it is true, you must know both Socrates and what a human being is, and connect the two.
I just took the statement as given. Thinking about it, my experience as an Object Oriented software developer (constant use of 'Instance of Class' relations!), and the use of that statement in the popular example syllogism ('All men are mortal, Socrates is a man. Therefore Socrates is mortal') led me to assume it was a simple statement that 'Socrates' is a member of the set 'human being', which - if assumed true - gives you knowledge of the relation between them (class/instance).
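In code, that reading is just this (a minimal sketch of the class/instance relation, not a claim about how minds represent it):

```python
# Minimal sketch of the 'Instance of Class' reading of the statement.
class HumanBeing:
    mortal = True            # an attribute common to all members of the set

socrates = HumanBeing()      # 'Socrates is a human being'

# The popular syllogism as a class/instance deduction:
# all men are mortal; Socrates is a man; therefore Socrates is mortal.
assert isinstance(socrates, HumanBeing)
assert socrates.mortal
```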
If we say "Socrates is a human being" and are performing a mere categorization without knowledge of either the subject or the predicate, then we do not have knowledge and it is an empty statement.
That's what I thought.
But then you've missed the point. Socrates is just the proverbial example. Substitute "Donald Trump," "Michael Jordan," or your mother if you like. The point is that we really know something about these people.
But that's only true if I already know something about the person - in which case, your statement gives me no new information. If you said, 'Andy Fall is a human being', that gives me only the information that Andy Fall is a human being, from which I can deduce that Andy Fall has the properties and attributes common to all human beings - whether or not I know what human beings are.
...a typist types into a computer "Donald Trump is human?" Does that give the computer understanding of the truth about Trump? Or because a programmer commands the computer to associate Trump with the category of humanity? Does that prove that the computer understands that Trump is human?
Of course not, that was my point...
Or perhaps AI analyses live feed of Trump and categorizes him as human based on the input data it receives? Does that prove that the AI understands that Trump is human? It is just a more sophisticated example of the first two, and any "knowledge" is still the programmer's, not the AI's. Applying rules without understanding them in order to come to a "conclusion" is not understanding.
Do you understand the rules you apply when you acquire knowledge? Do you even know what they are? Can you explain the difference between the way you categorized Trump as human the first time you saw him, and the way the AI you describe would do it?

In the early 1970s, a computer system (Terry Winograd's SHRDLU) was programmed to identify coloured blocks in a 'block world' environment, describe their spatial relationships, and answer questions about the blocks; e.g. when asked, 'where is the blue block', it would respond, 'the blue block is on top of the red block', and so on. In that limited block-world context, it seems reasonable to say it knew the spatial relationships of the blocks; it could identify them, categorize their spatial relationships to one another, and answer queries about those relationships.

It could also answer 'if...then' questions about the spatial relationships of the blocks, and correctly follow commands to rearrange the blocks, so it seems reasonable to say it understood the spatial relationships of the blocks in block world. A very limited domain, but if that wasn't understanding, I'd like to hear why.
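For a flavour of what such a system does, here is a drastically simplified sketch; the real system parsed natural language and planned its manipulations, whereas the world model below is a hard-coded dictionary invented for illustration:

```python
# Drastically simplified block-world sketch (illustrative only).
on_top_of = {"blue": "red", "red": "table", "green": "table"}

def where_is(block: str) -> str:
    support = on_top_of.get(block)
    return (f"the {block} block is on top of the {support}"
            if support else f"I know of no {block} block")

def put_on(block: str, target: str) -> None:
    # Follow a command to rearrange the blocks.
    on_top_of[block] = target

print(where_is("blue"))   # -> "the blue block is on top of the red"
put_on("green", "blue")   # 'put the green block on the blue block'
print(where_is("green"))  # -> "the green block is on top of the blue"
```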

However, neural network AIs work at a level functionally abstracted from the instruction code. They are programmed to behave like layers of interconnected, interacting 'neurons'; given an exemplar goal, they can learn for themselves how to achieve it without any explicit programming for that goal. This kind of goal-directed learning is not, in principle, different from how biological brains learn to do tasks.

AIs of this kind don't need to be programmed with algorithms, and their learning can be generalized (applied to other, similar situations). The programmers and trainers (if any) don't understand how the AI performs its task, but it seems reasonable to say it has learned the skill and can apply that knowledge to achieve its goal. The application of knowledge for solving problems or achieving goals implies understanding - in the context of that application.
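To show what 'learning from feedback rather than explicit programming' means, here is a minimal sketch of a single artificial neuron learning the logical AND relation from examples alone; it is a toy under stated assumptions, not a description of any particular AI:

```python
# A single artificial 'neuron' (perceptron) learns AND from examples.
# No rule for AND is programmed in; the weights adjust from error feedback.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # connection weights
bias = 0.0
rate = 0.1       # learning rate

def predict(x):
    s = w[0] * x[0] + w[1] * x[1] + bias
    return 1 if s > 0 else 0

for _ in range(20):                  # a few training epochs suffice here
    for x, target in examples:
        error = target - predict(x)  # learn from mistakes
        w[0] += rate * error * x[0]
        w[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in examples])  # -> [0, 0, 0, 1]
```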

A striking recent example is a language-learning AI called ANNABELL (Artificial Neural Network with Adaptive Behavior Exploited for Language Learning), a system with 2.5 million artificial (virtual) neurons; a cognitive neural architecture of interconnected neurons which has no intrinsic language capability; no dictionary, alphabet, or syntactic rules at all. It starts as a tabula rasa (blank slate). It is then taught to converse in the language much as you'd teach a child (the project was based on a subset of the knowledge and abilities of 3-5 year-olds) by giving it simple declarative statements, asking it questions, and correcting its answers; all based on a training database (the curriculum) containing a number of datasets (subjects): 'People', 'Parts of the body', 'Categories', 'Communicative interactions' (mother/child), and 'Virtual environment' (rooms in a building). Over several thousand interactions, the system learned to recognise and answer questions about the datasets in simple language. For example, here are samples from the results database after training to play a word game, including a comparison of how a real 5-year-old performed given the same information: Example from CHILDES database.

I think it would be reasonable to say that the system learnt the rudiments of the language, and had a basic understanding of the information it had been given in that language, within the domain covered by the curriculum.

One implication of a tabula-rasa language learning system like this, is that it should be equally able to learn the rudiments of any language. It would be interesting to see if it could learn more than one language - although with such limited resources, it seems unlikely...

It might surprise people to know that you don't need a mainframe or a supercomputer to play with this system: it can be downloaded from the GitHub repository. It's written in C++, and can be compiled and run on any reasonably competent domestic PC.
But who wrote the base code in the case of the human? Where does the representational model come from?
There is no base code; the human system evolved from interactions between cells. The representational models are built by experience - the human system learns and is trained by its sensory inputs carrying data from its body, the outside world, and other human systems.
Characteristically, you are denying humans knowledge in the traditional sense, rather than ascribing it to computers.
No; I don't know what you mean by knowledge 'in the traditional sense', the word is used colloquially in many different ways. Both humans and computers can have knowledge.
What you're saying is something like this: "Knowledge is that thing that allows us to achieve our practical goals. Therefore if we are achieving our practical goals, then knowledge must be present." Is that accurate?
Kind of. We attempt to achieve our goals by the application of knowledge; this implies understanding.
How are you defining knowledge?
Retained information, available on demand - see definitions above.
What is interpreted data? Information is not intrinsically meaningful.
Information is data with meaning - it is literally informative, it tells you something about a system. See definitions above.
What does it mean to interpret something?
Interpreting is converting data (which may be information in another system) to a form or format that has meaning to a target system. See definitions above.
One aspect of knowledge is that it is speculative. Or rather, we must consider the human's capacity for speculative knowledge. That is, knowledge that is merely symbolic or interpretive, and not practical. This kind of knowledge is meaningful even apart from application.
OK... so?
For the human "An unmarried man is a bachelor" has meaning even apart from practical applications. We know what each term means and that they signify the same reality.

The computer is simply incapable of this act of speculative reason.
Firstly, this is an analytic truth, i.e. it is true by definition; logically it is tautological, like mathematical statements. To know what the terms mean is to understand them (to have usable knowledge of their associations in their larger context - formal social relationships). There's nothing speculative about that.
It can associate the two terms, it can "interpret" them in various ways to point at each other, but it cannot actually accomplish the abstract act that the human being performs. The meaning, the triadic relation, as Walker Percy says, is proper to humans.
Walker Percy wrote that before systems like ANNABELL (above) were developed.

Human understanding is generally far broader, richer (multi-contextual), and often less precise than computer understanding, but human brains are generalist learning systems with 80 billion-odd neural processors that take decades to achieve peak knowledge & understanding. Computers are, in comparison, trivially simple single-domain systems of relatively ephemeral duration. But when given enough information and experience, computer learning systems can rapidly outpace human capabilities (examples already supplied). Ask Lee Sedol if AlphaGo understands how to play Go.
It is interesting that human knowledge and language led many philosophers to posit an immaterial aspect of our being. For example, the abstract concept of triangularity, known apart from every particular material triangle or representation, is not containable in material realities. A human and only a human can actually know what a triangle is. A computer can spit out definitions, it can spit out a million different triangles by applying drawing procedures, but it can never possess the abstract concept of triangularity that exists apart from any material definition or instantiation.
Triangularity is a geometric abstraction; computers can easily manipulate and apply the concept. What, specifically, do you think a computer can't do with the concept that humans can? How would you test for it?

It seems to me that if you don't have clear and precise definitions for terms like 'understanding', it's easy to fall into the trap of thinking that there is something mysterious and undefinable about them...
... the computer would have only an approximation of the knowledge and truth that we have (due to the design goal coming from us).
I don't see why this follows - a learning system that is more capable than us will outperform us; take AlphaGo, a Go learning system that is more capable than one of the best human players. In what sense does it only have an approximation of the knowledge of playing Go that the human player has? It demonstrably has a more complete knowledge of Go. The breadth of its understanding is restricted to the game on the board (played games, positions and moves), which is far narrower than human understanding of Go, but sufficient to play a better game (e.g. you don't need to know the history of Go to play well). Given some of its moves, one could argue that its depth of understanding of playing the game is greater than most human players', although it does have its weaknesses.

As for truth, as I already mentioned, synthetic truths are necessarily uncertain, in that we can't be sure they're true. I don't see how it's particularly relevant; arguably, AI perceptions and interpretations are likely to be more reliable than human ones, so likely to correspond more closely to reality.
The crucial point of disagreement between us seems to be whether humans themselves have real knowledge and access to truth or only an approximation.
Again, what is 'real' knowledge? How is it different from unqualified knowledge?
 

Radrook

Well-Known Member
Hello everyone. I would like to discuss free will, and whether such a thing is possible scientifically, logically, and according to Scripture. I will start with logic.

I have a choice between A or B. God knows that I will choose A. By my freewill I choose B. Please explain. Thank you all and God bless you.
My knowledge of what a person will choose doesn't affect his freedom to choose it.
 

zippy2006

Dragonsworn
Warning - long post!

!

So knowledge (that which is learned through experience or education) is only possible if there is intent? Does this mean that what is learnt from unintentional (e.g. accidental) experiences is not knowledge? That doesn't sound right; it seems to me that a great deal of knowledge about the world is acquired without intent (especially knowledge of the dangers around us).

The intellect is a complicated reality, with active and passive aspects. Yet I would hold that a sort of intentionality does accompany knowledge. As long as an experience is attended to, the intentionality necessary for knowledge is present. If a professor is monotonously drawling on and I do not attend to the words but rather place my attention elsewhere, I will have no knowledge of the lesson. Dangers and immanent realities demand the attention requisite for knowledge.

You implied that the broken clock doesn't have knowledge, despite being 'right' twice a day; that the working clock has knowledge compared to the broken clock, but that neither has 'knowledge in the true sense' (whatever that means). What you really mean by knowledge is less clear to me now than before you started explaining it...

haha, perhaps it was a bad analogy. I was just trying to make use of the intuitive understanding we have of a broken clock that is right twice a day not being "right" in the same way that a working clock is right. I think that analogy stretches to AI. AI is like the broken clock that is right twice a day.

Here's a very simple set of functional definitions that I find useful in this context (this is all personal opinion, provisional and open to discussion):

'Data' - the raw output of a system or measuring device (numeric or analogue), which has no meaning of itself; input to a processor or interpreter.
'Information' - interpreted data.
'Interpretation' - conversion to a form or format that has meaning to a target system.
'Meaning' - the set of associations that establish the context of the data with regard to the source system.
'Knowledge' - retained information, available on demand (e.g. a library is a repository of knowledge; I know my phone number).
'Understanding' - generalization, conceptualization, or abstraction of knowledge in relevant contexts; e.g. knowledge of the algorithms, behaviours, or heuristics that gave rise to the data, and how to apply them and/or express them in alternative ways (such as explanation).
'Truth' - the correspondence of knowledge or information to reality.
'Belief' - a certainty that one's knowledge is true, especially when unsupported by evidence.

Okay, thanks. As noted above, the crucial stand you are taking is in denying humans the traditional, colloquial understanding of knowledge--I will argue for this in greater detail below. Because of this it will become important not to bias definitions of knowledge, understanding, truth, and belief in favor of computational realities. We must take them at face value, as human realities, and see if they also apply to computers. I don't think you do a terrible job of this here, but I want to state it explicitly.

I just took the statement as given. Thinking about it, my experience as an Object Oriented software developer (constant use of 'Instance of Class' relations!), and the use of that statement in the popular example syllogism ('All men are mortal, Socrates is a man. Therefore Socrates is mortal') led me to assume it was a simple statement that 'Socrates' is a member of the set 'human being', which - if assumed true - gives you knowledge of the relation between them (class/instance).
That's what I thought.

Very well, but now I assume you see that I was not making a merely tautological or analytic statement?

I happen to have degrees in both Computer Science and Philosophy, and it is interesting to study the contrasting logical systems both in these two fields as well as in sub-fields of philosophy. The contrast makes it hard for computer scientists, contemporary analytic philosophers, and even modern philosophers to grasp the nature of a classical Aristotelian syllogism. ...that's a long way of saying, "I don't blame you!" :p

But that's only true if I already know something about the person - in which case, your statement gives me no new information. If you said, 'Andy Fall is a human being', that gives me only the information that Andy Fall is a human being, from which I can deduce that Andy Fall has the properties and attributes common to all human beings - whether or not I know what human beings are.

"Socrates is a human being" is a predication. Whether you already know it or not depends on whether it is a premise or a conclusion in the syllogism. Earlier I spoke of coming to knowledge of that predication, thus implying that it is a conclusion of a syllogism. If you did not previously know that Socrates is a human being, and a sound syllogism led you to that conclusion, then you would have new information. (It is simply untrue that everyone who knows Socrates knows that he is a human being or that everyone who knows what a human being is knows that Socrates was one)

Your statement about Andy Fall is correct, and it places the predication in the position of a premise. Since we were talking about gaining knowledge, I had spoken of it in the position of a conclusion.

Do you understand the rules you apply when you acquire knowledge? Do you even know what they are? Can you explain the difference between the way you categorized Trump as human the first time you saw him, and the way the AI you describe would do it?

In a certain sense, no. We create computers, we do not create human beings. I cannot understand myself in my entirety for the same reason that a computer cannot understand (describe) itself in its entirety. This is because the part that is actively understanding cannot simultaneously be passively understood. This is also why Epistemology is such a complicated field and why knowledge is so hard to define. And yet the human capacity for self-knowledge is perhaps one sign that he is fundamentally different from a computer.

Note that it is for this reason that I am limited to take some known aspect of human knowledge or understanding and contrast it with the fully-known resources of the computer. Computational acts can be exhaustively defined, human knowing cannot. There is an asymmetry in our definitional capabilities with respect to the two entities. This is why your requests for definitions do not strike me as altogether helpful (although provisional definitions can sometimes be useful).

In the early 1970s, a computer system (Terry Winograd's SHRDLU) was programmed to identify coloured blocks in a 'block world' environment, describe their spatial relationships, and answer questions about the blocks; e.g. when asked, 'where is the blue block', it would respond, 'the blue block is on top of the red block', and so on. In that limited block-world context, it seems reasonable to say it knew the spatial relationships of the blocks; it could identify them, categorize their spatial relationships to one another, and answer queries about those relationships.

Consider a parrot. I say "blue," he says, "blue." I say, "red," he says, "red." Does this mean that the parrot knows what blue and red are?

The programmer designs optical hardware that parrots the human eye, calibrates the optical hardware to quantify the frequencies of light visible to the human eye, divides that quantified/interpreted input according to the average human color ranges for "blue," "red," etc., and tells the computer to record and store the Cartesian coordinate pixel information alongside the assigned color (etc.).

It's just a parrot. A machine designed to parrot something that humans do. It's a broken clock that's right twice a day, where "twice a day" means "when it comes to red and blue blocks."
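In code, the whole 'achievement' reduces to something like the sketch below; the thresholds and names are invented for illustration, and the colour words mean nothing to the machine:

```python
# Sketch of the 'parrot' pipeline: quantified input binned into
# human-assigned colour names. The thresholds are invented, not measured.
HUE_BINS = [(0, 30, "red"), (200, 260, "blue")]  # hue ranges in degrees

def label_pixel(x: int, y: int, hue: float):
    for low, high, name in HUE_BINS:
        if low <= hue <= high:
            return (x, y, name)   # store coordinates + assigned colour word
    return (x, y, "unknown")

print(label_pixel(12, 40, 225.0))  # -> (12, 40, 'blue')
```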

It could also answer 'if...then' questions about the spatial relationships of the blocks, and correctly follow commands to rearrange the blocks, so it seems reasonable to say it understood the spatial relationships of the blocks in block world. A very limited domain, but if that wasn't understanding, I'd like to hear why.

According to your own definition of understanding, because no generalization, conceptualization, or abstraction took place. Understanding requires a kind of static knowledge--what I earlier spoke of as speculative knowledge. It is not merely functional. Percy talks about this from the perspective of dyadic and triadic relations. Understanding is not merely dyadic, not exhausted by practical or functional considerations.

When an animal becomes aware of a causal relation or a computer is given the if-then logic that represents a causal relation the "knowledge" begins and ends with functional considerations. The infant 'knows' that if the bottle is put to his mouth he will be relieved of his distress, and therefore cries for the bottle. But this is merely stimulus-response behavior, not knowledge. Now at this point you have to tell me whether you think human knowledge involves anything more than stimulus-response relations, whether you think all knowledge is merely a function of manipulation. Are humans capable of knowledge that is not merely focused on practical manipulation? Knowledge for the sake of knowledge?

However, neural network AIs work at a level functionally abstracted from the instruction code. They are programmed to behave like layers of interconnected, interacting 'neurons'; given an exemplar goal, they can learn for themselves how to achieve it without any explicit programming for that goal. This kind of goal-directed learning is not, in principle, different from how biological brains learn to do tasks.

AIs of this kind don't need to be programmed with algorithms, and their learning can be generalized (applied to other, similar situations). The programmers and trainers (if any) don't understand how the AI performs its task, but it seems reasonable to say it has learned the skill and can apply that knowledge to achieve its goal. The application of knowledge for solving problems or achieving goals implies understanding - in the context of that application.

Let me just say that in this, as in the "tabula rasa" language below, I don't believe you. Let's just take this sentence:

They are programmed to behave like layers of interconnected, interacting 'neurons'; given an exemplar goal, they can learn for themselves how to achieve it without any explicit programming for that goal.

What does this even mean? It strikes me as very vague, like hand-waving. The behavior of the AI derives from the programming and the input, and nothing else. You desire to tell me that the behavior of the AI somehow transcends the code, but clearly that is not the case. If the AI produces unexpected behavior and fulfills an (arbitrary) goal set by a human being other than the programmer, then the fact that it is unexpected merely derives from the ignorance of the programmer. If he was a better programmer he would have seen the output ahead of time and it wouldn't have been unexpected. Yet given the fact that the goal is, at least in some remote respect, expected and programmed for, I don't think the case even holds up.

It's like saying you designed a Roomba for a square room, but since it performs well in the unexpected rectangular room without any explicit programming for that goal it is somehow special. The point is that the difference between the proximate goal and the remote goal--the explicit goal and the implicit goal--is qualitatively accidental. The programmer knows that the explicit goal will lead to the implicit goal. The only difference is that the implicit/remote goal/behavior is sometimes too ill-defined or complex to easily calculate or fully understand. It's like rolling lots of dice at the same time, but in such a way that the probabilities cater to a particular goal.

A striking recent example is a language-learning AI called ANNABELL (Artificial Neural Network with Adaptive Behavior Exploited for Language Learning), a system with 2.5 million artificial (virtual) neurons; a cognitive neural architecture of interconnected neurons which has no intrinsic language capability; no dictionary, alphabet, or syntactic rules at all. It starts as a tabula rasa (blank slate). It is then taught to converse in the language much as you'd teach a child (the project was based on a subset of the knowledge and abilities of 3-5 year-olds) by giving it simple declarative statements, asking it questions, and correcting its answers; all based on a training database (the curriculum) containing a number of datasets (subjects): 'People', 'Parts of the body', 'Categories', 'Communicative interactions' (mother/child), and 'Virtual environment' (rooms in a building). Over several thousand interactions, the system learned to recognise and answer questions about the datasets in simple language. For example, here are samples from the results database after training to play a word game, including a comparison of how a real 5-year-old performed given the same information: Example from CHILDES database.

I think it would be reasonable to say that the system learnt the rudiments of the language, and had a basic understanding of the information it had been given in that language, within the domain covered by the curriculum.

One implication of a tabula-rasa language learning system like this, is that it should be equally able to learn the rudiments of any language. It would be interesting to see if it could learn more than one language - although with such limited resources, it seems unlikely...

It might surprise people to know that you don't need a mainframe or a supercomputer to play with this system: it can be downloaded from the GitHub repository. It's written in C++, and can be compiled and run on any reasonably competent domestic PC.

It's interesting to me that AI yearns to do things that sub-human animals do routinely. AI sits well below complex animal life and is believed by some to rival human knowledge and understanding, and all the while, for millennia, philosophers have pointed out the qualitative differences between humans and animals, thus a fortiori accounting for the differences between humans and AI. Does that strike you as strange? If you agree that AI is less complex than complex life, then wouldn't it be easier to argue that apes and humans are on the same level? Indeed I would find such an approach much more plausible, especially given the modern mechanistic fallacies about organic life.

There is no base code; the human system evolved from interactions between cells. The representational models are built by experience - the human system learns and is trained by its sensory inputs carrying data from its body, the outside world, and other human systems.

But in your account you ascribed the construction of the representational models to the human being. Thus even in your account agency creeps in.

Computers, understood properly, are purely passive in the sense that they are not self-moving (such as my billiard illustration above explains). In order to equalize humans and computers, you must claim that humans too are purely passive, determined, and totally moved by antecedent conditions. This is an undeniable way in which your view represents a demoting of the human being rather than a promoting of the computer, for the idea that a human is self-moving, is an agent, is common belief (I avoid the word "knowledge" only on your behalf ;)).

If Frumious is right, then humans are not truly agents. True or false? (This is an important and bizarre fact about your position that I will return to as a premise for other arguments)

No; I don't know what you mean by knowledge 'in the traditional sense', the word is used colloquially in many different ways. Both humans and computers can have knowledge.

For example, you seem to be denying speculative knowledge, human agency, and qualitatively different "exemplar goals" from the initial evolution-driven goal. I could name more strange implications, but three should suffice for now.

Kind of. We attempt to achieve our goals by the application of knowledge; this implies understanding.

Okay.

Interpreting is converting data (which may be information in another system) to a form or format that has meaning to a target system.

Recall that you're arguing against my idea that knowledge has intrinsic meaning whereas bits stored in a computer do not.

We are comparing the human interpreter to the computational interpreter, where the input to the human is the entire physical world and the input to the computer is that limited data it receives. Now the bits stored in a computer that represent its "knowledge" do not have any intrinsic meaning apart from the meaning the human bestows on them and transfers, through programming, to the computer.

Your claim is that the basic interpreter bestowed on the computer by the human being is not qualitatively different from the basic interpreter bestowed on the human being by evolution. If evolution exhaustively explains the human being, then I believe you would be correct. Note too that in this case no intrinsic meaning exists anywhere, only artificial and imposed meaning.

The question we must ask ourselves is whether humans can do things that qualitatively transcend computers, animals, and that trajectory of acts which evolution renders possible. Whether humans are capable of qualitatively different acts than evolution is able to account for--much more than a Roomba in a rectangular room. So far I have offered a few candidates:

  • Agency. Humans can act, and decide whether or not to act. (Free will, ATDO, etc.)
  • Humans are capable of speculative knowledge, knowledge for the sake of knowledge, truth apart from manipulation.

Firstly, this is an analytic truth, i.e. it is true by definition; logically it is tautological, like mathematical statements. To know what the terms mean is to understand them (to have usable knowledge of their associations in their larger context - formal social relationships). There's nothing speculative about that.

I think you're essentially incorrect here, but it may be easier if I avoid a truth that is so definitional. Let's take another of Kant's so-called "analytic" truths: all bodies are extended.

For the human "All bodies are extended" has meaning even apart from practical applications. We know what each term means and that the statement is true. We can ponder it, come to see it more clearly, examine it, etc. This is not so for the computer.

Walker Percy wrote that before systems like ANNABELL (above) were developed.

And it's as true now as it was then. The ancient philosophers made similar arguments 2500 years ago and they have only sharpened with time.

Human understanding is generally far broader, richer (multi-contextual), and often less precise than computer understanding, but human brains are generalist learning systems with 80 billion-odd neural processors that take decades to achieve peak knowledge & understanding. Computers are, in comparison, trivially simple single-domain systems of relatively ephemeral duration. But when given enough information and experience, computer learning systems can rapidly outpace human capabilities (examples already supplied). Ask Lee Sedol if AlphaGo understands how to play Go.

Multiplying function will never result in meaning. Multiplying two-dimensional lines will never result in a three-dimensional model.

Triangularity is a geometric abstraction; computers can easily manipulate and apply the concept. What, specifically, do you think a computer can't do with the concept that humans can? How would you test for it?

They can approximate it, but they don't understand it. Maybe a circle is a better illustration.

What is a circle? A perfect circle? It is an infinite number of infinitesimal points equidistant from a center point. Do we really know what that is? It doesn't exist in the physical world; it is impossible to see one; we can only approximate a perfect circle in material realities.
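Even in code the point stands: any drawing procedure emits finitely many approximate points, never the circle itself (a sketch with illustrative names):

```python
import math

# A drawing procedure can only sample the definition, never exhaust it:
# the ideal circle is all points at distance r from the centre; a program
# emits a finite, floating-point approximation of some of them.
def circle_points(cx: float, cy: float, r: float, n: int):
    return [(cx + r * math.cos(2 * math.pi * k / n),
             cy + r * math.sin(2 * math.pi * k / n)) for k in range(n)]

points = circle_points(0.0, 0.0, 1.0, 360)  # 360 points, not a circle
# Each point satisfies the definition only up to floating-point error...
assert all(math.isclose(math.hypot(x, y), 1.0) for x, y in points)
# ...and no finite n yields the circle itself; the definition stays with us.
```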

To say that we truly understand something that can only be approximated by material realities tells us a few things. It tells us that computers can't 'understand' perfect circles, and it tells us that we transcend the material realm. Presumably you would disagree that we can really understand a perfect circle?

I don't see why this follows - a learning system that is more capable than us will outperform us; take AlphaGo, a Go learning system that is more capable than one of the best human players. In what sense does it only have an approximation of the knowledge of playing Go that the human player has? It demonstrably has a more complete knowledge of Go. The breadth of its understanding is restricted to the game on the board (played games, positions and moves), which is far narrower than human understanding of Go, but sufficient to play a better game (e.g. you don't need to know the history of Go to play well). Given some of its moves, one could argue that its depth of understanding of playing the game is greater than most human players', although it does have its weaknesses.

Let me rephrase. Suppose we subdivide humans and computers into starting state/programming and computational power. A computer will only ever approximate the starting state or programming of the human, because it is made by a human programmer. Now perhaps a computer could bypass a human if, even in spite of the less-developed programming, the computational power were able to compensate for this lack.

The real difference between us on this point is in whether the human starting point is an approximation or a kind of identity--whether we approximate truth or really know truth. We agree that the computer can only approximate truth. If the human really knows truth then the computer will only ever have an approximation of the knowledge and truth that we have (as I said before). Granted it could outpace us in certain areas, just like a car can outrun a man, but never in the domain of First Philosophy, to which it simply has no access.

As for truth, as I already mentioned, synthetic truths are necessarily uncertain, in that we can't be sure they're true. I don't see how it's particularly relevant; arguably, AI perceptions and interpretations are likely to be more reliable than human ones, so likely to correspond more closely to reality.

What I say above should speak to this. It is worthwhile to note that Kant's errors become particularly pronounced in this area. Kant's epistemological presuppositions already entail the idea that humans are glorified computers and I roundly reject them.

Again, what is 'real' knowledge? How is it different from unqualified knowledge?

You are using knowledge in the sense of an approximation, such that a computer has knowledge of circles insofar as it can approximate one. Real knowledge of a circle entails knowledge of a perfect circle, an understanding that necessarily transcends the material world.

We could teach a monkey or a computer to draw a circle. They would come to associate our command with their action of drawing. Even if we said "Circle!" a billion times and they drew a billion circles, they would not come to understand the definition of a circle, despite their practical ability to produce one. They are merely the material instrument of our intellect. The definition comes from us, not from them. I can use my hand, a monkey, or a computer to draw a circle. There is very little difference between the three. None of them has any intrinsic ability or propensity to draw a circle apart from my intellect informing them.

Your whole case rests on the claim, "It drew a circle, therefore it understands what a circle is." But you're ignoring the obvious fact that we are the ones who produced the circle; the printer merely allocated the ink according to our specification. It's really like saying, "My hand drew a circle, therefore my hand knows what a circle is."

I don't see the problem; truth is an abstraction - a correspondence to the facts or reality; certainty about the facts and reality is elusive - e.g. the problem of measurement, the problem of induction. Science explicitly acknowledges this.

Apparently I took this quote in the wrong sense earlier... Now I understand how you were using these words. :)
 

Radrook

Well-Known Member
Please consider the Categorical Imperative in reference to the demand for total freedom.

Online Dictionary
categor′ical imper′ative
n.
the rule of Immanuel Kant that one's actions should be capable of serving as the basis of universal law.
[1820–30]
Random House Kernerman Webster's College Dictionary,
http://www.thefreedictionary.com/categorical+imperative


The Categorical Imperative requires that whenever we are about to make a moral decision we ask ourselves what the effect would be if EVERYONE did as we have decided to do. For example, if we feel that we have a right to steal, or to murder, or to lie, or to physically assault others when frustrated, or to sucker-punch someone for personal sadistic amusement, then what would the effect be if EVERYONE felt the same and proceeded to behave in those ways?

Obviously each person becoming a law unto himself is extremely detrimental to social order, and social order is essential for human survival and progress. That in turn makes a demand for total unrestrained freedom irrational and antisocial. That is why most humans choose to abide by law. No law? No protection from the strong abusing the weak. Your neighbor could murder you at a whim with no fear of retribution, businesses could cheat their customers at their leisure, bullies could pummel anyone they deemed fair target with nary a care in the world. The creator knew this and wisely and lovingly provided restrictions on what his reasoning creatures could and could not do as a protective measure and NOT because he feared that his creatures would become his equals, as Satan accused.

DBT
Jeremiah 10:23
I know, Jehovah, that the way of man is not his own; it is not in a man that walketh to direct his steps.

Proverbs 3:6
in all your ways submit to him, and he will make your paths straight.

Proverbs 16:9
In their hearts humans plan their course, but the LORD establishes their steps.
 

Astrophile

Newbie
For example, if we feel that we have a right to steal, or to murder, or to lie, or to physically assault others when frustrated, or to sucker-punch someone for personal sadistic amusement, then what would the effect be if EVERYONE felt the same and proceeded to behave in those ways?

What if I feel that I have the right to remain unmarried and childless? What would the effect be if EVERYONE felt the same way and proceeded to behave in this way? (Obviously the human race would die out.) So does this mean that neither I nor anybody else has the right to remain unmarried and childless?
 

Radrook

Well-Known Member
What if I feel that I have the right to remain unmarried and childless? What would the effect be if EVERYONE felt the same way and proceeded to behave in this way? (Obviously the human race would die out.) So does this mean that neither I nor anybody else has the right to remain unmarried and childless?
In that case common sense would tell us that our right to decide takes precedence over the Categorical Imperative.
 
Upvote 0

quatona

"God"? What do you mean??
May 15, 2005
37,512
4,301
✟175,292.00
Faith
Seeker
The Categorical Imperative requires that whenever we are about to make a moral decision we ask ourselves what the effect would be if EVERYONE did as we have decided to do. For example, if we feel that we have a right to steal, or to murder, or to lie, or to physically assault others when frustrated, or to sucker-punch someone for personal sadistic amusement, then what would the effect be if EVERYONE felt the same and proceeded to behave in those ways?
No, that's not what the categorical imperative says. It's a meta-moral imperative, not a moral principle. It doesn't deal with actions, it deals with the choice of principles.
In your pop-version the categorical imperative ("What if everybody did that?") is a completely stupid principle.

There's nothing wrong with becoming a plumber, even though society would collapse if everyone were a plumber. The same is true for most choices we make.

Obviously each person becoming a law unto himself is extremely detrimental to social order, and social order is essential for human survival and progress. That in turn makes a demand for total unrestrained freedom irrational and antisocial. That is why most humans choose to abide by law.
You are describing how people use their freedom.
Besides, most people (hopefully) don't choose to abide by the law on principle. They (hopefully) use their freedom to form an opinion whether a law is good or not - and on occasion will refuse to abide by the law.
No law? No protection from the strong abusing the weak.
OTOH, laws can themselves be the institutionalized abuse of the weak by the strong.
The creator knew this and wisely and lovingly provided restrictions on what his reasoning creatures could and could not do
Obviously, he didn't - or else these things wouldn't happen. Interestingly, we keep hearing the "free will" defense for this, all the time.
 
Upvote 0

Radrook

Well-Known Member
Feb 25, 2016
11,536
2,723
USA
Visit site
✟134,848.00
Country
United States
Faith
Non-Denom
Marital Status
Single
No, that's not what the categorical imperative says. It's a meta-moral imperative, not a moral principle. It doesn't deal with actions, it deals with the choice of principles.
In your pop-version the categorical imperative ("What if everybody did that?") is a completely stupid principle.

There's nothing wrong with becoming a plumber, even though society would collapse if everyone were a plumber. The same is true for most choices we make.


You are describing how people use their freedom.
Besides, most people (hopefully) don't choose to abide by the law on principle. They (hopefully) use their freedom to form an opinion whether a law is good or not - and on occasion will refuse to abide by the law.

OTOH, laws can themselves be the institutionalized abuse of the weak by the strong.

Obviously, he didn't - or else these things wouldn't happen. Interestingly, we keep hearing the "free will" defense for this, all the time.

I suggest you take the matter up with Immanuel Kant, since it was his idea, not mine:

New World Encyclopedia


Categorical imperative


The Categorical Imperative is the central concept in Kant’s ethics. It refers to the “supreme principle of morality” (4:392), from which all our moral duties are derived. The basic principle of morality is an imperative because it commands certain courses of action. It is a categorical imperative because it commands unconditionally, quite independently of the particular ends and desires of the moral agent.

Kant formulates the Categorical Imperative in several different ways but according to the well-known "Universal Law" formulation, you should "…act only according to that maxim by which you can at the same time will that it be a universal law." Since maxims are, roughly, principles of action, the categorical imperative commands that one should act only on universal principles, principles that could be adopted by all rational agents....

Moral Rules and the Categorical Imperative

According to Kant, moral rules are categorical imperatives. Furthermore, Kant thought that all our moral duties, substantive categorical imperatives, depend on a basic requirement of rationality, which he regards as the supreme principle of morality (4: 392): this is the categorical imperative. The categorical imperative, as opposed to categorical imperatives, substantive moral rules, is the basic form of the moral law.

An analogy with the biblical Golden Rule might help to make the relation between categorical imperatives and the Categorical Imperative somewhat clearer. In Matthew 7:12, Jesus Christ urges that “all things … that you want men to do to you, you also must likewise do to them.”
http://www.newworldencyclopedia.org/entry/Categorical_imperative

BTW

You are describing how people use their freedom.
Besides, most people (hopefully) don't choose to abide by the law on principle. They (hopefully) use their freedom to form an opinion whether a law is good or not - and on occasion will refuse to abide by the law.

I never claimed that people should become veritable robots led merely by inflexible rules. That is the first thing you learn to avoid doing in ethics. Very basic. So that is your strawman idea, not mine.

OTOH, laws can themselves be the institutionalized abuse of the weak by the strong.

I never claimed that all law is good. Obviously it isn't. Neither did I claim that we should blindly follow all laws. Again, those are strawmen, totally your ideas, not mine.

Obviously, he didn't - or else these things wouldn't happen. Interestingly, we keep hearing the "free will" defense for this, all the time.

Not sure what you are saying here, but it doesn't sound like anything that I have ever said.
 
Last edited:
Upvote 0

PsychoSarah

Chaotic Neutral
Jan 13, 2014
20,521
2,609
✟95,463.00
Faith
Atheist
Marital Status
In Relationship
My knowledge of what a person will choose doesn't affect his freedom to choose it.
However, many people view our lives as "tests" by a deity to make the "right" choices. But if said deity already knows the choices we will make long before we make them, that requires predetermination. Otherwise, there is a chance of said deity being wrong, and you can't actually claim that they know the choices that will be made.
 
Upvote 0

Radrook

Well-Known Member
Feb 25, 2016
11,536
2,723
USA
Visit site
✟134,848.00
Country
United States
Faith
Non-Denom
Marital Status
Single
However, many people view our lives as "tests" by a deity to make the "right" choices. But if said deity already knows the choices we will make long before we make them, that requires predetermination. Otherwise, there is a chance of said deity being wrong, and you can't actually claim that they know the choices that will be made.
Even though he might know the choice, the choice is still made by the chooser.
 
Upvote 0

PsychoSarah

Chaotic Neutral
Jan 13, 2014
20,521
2,609
✟95,463.00
Faith
Atheist
Marital Status
In Relationship
Even though he might know the choice, the choice is still made by the chooser.
It is literally impossible to have absolute certainty about any given event without predetermination being a factor. And where predetermination applies, free will is but an illusion. There is no point in waiting for a person to make choices you already know they will make, and no reason for them ever to have physical bodies. No reason for the illusion of free will.
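As a sketch of that claim (my own illustration, not from the thread): a predictor can score perfectly only against a deterministic choice function; introduce genuine chance and absolute certainty becomes impossible.

```python
import random

# Invented illustration: perfect advance knowledge of a choice is only
# possible when the choice function is deterministic.
random.seed(1)

def deterministic_choice(state: int) -> str:
    # Choice fixed entirely by the agent's prior state.
    return "A" if state % 2 == 0 else "B"

def chancy_choice(state: int) -> str:
    # Choice involves genuine chance the predictor cannot see.
    return random.choice(["A", "B"])

def prediction_accuracy(agent, trials: int = 100_000) -> float:
    # The predictor models the agent as deterministic.
    hits = sum(deterministic_choice(s) == agent(s) for s in range(trials))
    return hits / trials

print("deterministic agent:", prediction_accuracy(deterministic_choice))  # 1.0
print("chancy agent:", prediction_accuracy(chancy_choice))                # ~0.5
```

With determinism the predictor scores 1.0; with any real chance it cannot, which is the sense in which absolute certainty requires predetermination.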
 
Upvote 0

Radrook

Well-Known Member
Feb 25, 2016
11,536
2,723
USA
Visit site
✟134,848.00
Country
United States
Faith
Non-Denom
Marital Status
Single
It is literally impossible to have absolute certainty about any given event without predetermination being a factor. And where predetermination applies, free will is but an illusion. There is no point in waiting for a person to make choices you already know they will make, and no reason for them ever to have physical bodies. No reason for the illusion of free will.
Predetermination implies forced actions. There is no force applied to a human who willfully chooses to do what he has decided to do, just as there is no force involved in my knowing that a boxer will lose a fight because he has agreed to throw the fight. My knowledge does not in any way deprive him of his ability to have chosen that outcome. Now, if I had said that I would kill his whole family if he won, then I am attempting to predetermine what he will do via coercion. Perhaps equivocation is getting in the way of our agreeing.
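To put the boxer example in schematic form (my own sketch, invented for illustration): the observer's prediction reads the boxer's intention but is never an input to the choice, so knowing is not forcing.

```python
# Invented sketch of the boxer example: prediction is read-only.
def boxers_choice(intends_to_throw: bool) -> str:
    # The choice depends only on the boxer's own intention.
    return "lose" if intends_to_throw else "try to win"

def observers_prediction(intends_to_throw: bool) -> str:
    # The observer computes the same outcome in advance, changing nothing.
    return boxers_choice(intends_to_throw)

intent = True  # the boxer has agreed to throw the fight
print("predicted:", observers_prediction(intent))
print("chosen:   ", boxers_choice(intent))
# Remove the observer and the choice is identical. Coercion would mean
# feeding a threat back into boxers_choice as an extra input.
```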

Here is the definition I am going by:

predetermination

pre·de·ter·mine (prē′dĭ-tûr′mĭn)
v. pre·de·ter·mined, pre·de·ter·min·ing, pre·de·ter·mines
v.tr.
1. To determine, decide, or establish in advance: "These factors predetermine to a large extent the outcome" (Jessica Mitford).
2. To influence or sway toward an action or opinion; predispose.
v.intr.
To determine or decide something in advance.
pre′de·ter′mi·nate (-mə-nĭt) adj.
pre′de·ter′mi·na′tion n.
http://www.thefreedictionary.com/predetermination
 
Upvote 0