My claim all along has been that the determinist will necessarily contradict himself, for it is not possible, even in principle, for him to avoid all self-contradiction.
And I have agreed that it's only possible to avoid some behavioural contradictions. But, as I said, failure to always live up to one's beliefs about how the world is or should be is not uncommon - I suspect it is universal.
But the determinists I know just see it as a fact about the world; they don't feel they have to live it as a lifestyle or religion. IOW, the knowledge that the world is deterministic is not sufficient to significantly change long-established habits of thought and behaviour - particularly how it feels, and how it feels is a major behavioural driver.
In a society where determinism was widely accepted and children grew up with that understanding, their behaviour would probably be more in keeping with it.
I already explained the problem and you are not addressing it. You have two possible logical responses to my argument: 1) deny that the consequence premise is present, or 2) deny that the consequence premise involves counterfactual possibility.
I'm afraid I don't understand the problem you seem to see. AFAICS there is nothing in the examples of decision-making previously given that is not already done by artificial systems.
I see two options here: 1) You are yet again failing to address the argument at hand, and are merely asserting that they magically choose between A and ~A, or 2)
There's no magic, just deterministic evaluation. As I said, if a chess program can do it, it is not a problem for deterministic systems.
You are talking about rats (in which case the words "evaluation" and "decision" are anthropomorphic word games). I assume you really are talking about rats here, no? Or did you intend this explanation to be beyond rats?
I wasn't talking about rats in particular - your example introduced an 'entity', so I followed suit. It was you that first introduced rats, for some reason that remains unclear...
By 'evaluation' I mean 'to judge or calculate the quality, importance, amount, or value of something'. In this context, the value of some course of action; e.g. is the outcome likely to be rewarding or not.
By 'decision' I mean a choice. For intelligent creatures, I don't consider these anthropomorphisms; for simple artificial systems, they probably are (the Dennettian 'intentional stance'); for complex ones, probably not - it's a fine line. Did AlphaGo decide to play its surprising match-winning move to defeat the world champion on the basis of its evaluation of the position? Yes, I think that's an acceptable way to describe it - unless you want to define the words as something only humans, or conscious entities, can do. I don't have a problem with it.
What I said applies to any creature with a brain complex enough for explicit modelling of consequences (projecting into the past and future) - which may require some degree of consciousness.
Again, you'll have to answer the question about rats above. If we want to play word games and we are talking about rats (or qualitatively equivalent entities), then <There is no problem with a deterministic entity "evaluating" both the potential outcome of doing A and the potential outcome of not doing A - or "evaluating" the potential outcome of many possible "choices."> If we don't want to play word games then yes, there is a problem with deterministic entities presupposing counterfactual possibility.
We can talk about rats if you want. So what, exactly is the problem you see with deterministic modelling of counterfactual scenarios?
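For concreteness, here's a toy sketch of what 'deterministic modelling of counterfactual scenarios' might look like - the actions, values, and scoring rule are my own illustration, not anything from the discussion. The system predicts and scores the outcome of each candidate action, including actions it will never take, and picks the best; there is no non-deterministic step anywhere:

```python
def evaluate(action, state):
    """Deterministically predict and score the outcome of taking
    `action` in `state` (a stand-in for an internal world model)."""
    predicted_outcome = state + action       # toy prediction
    return -abs(predicted_outcome - 10)      # prefer outcomes near 10

def decide(actions, state):
    """Evaluate every candidate action - including the ones that
    will never actually be taken - and return the best-scoring one."""
    return max(actions, key=lambda a: evaluate(a, state))

# The system models several mutually exclusive possibilities,
# even though only one will ever actually occur.
best = decide([0, 5, 7], state=3)
```

A chess engine does the same thing at scale: it 'considers' millions of moves it will never play.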
Let's use a simpler example. Suppose I write a very simple computer program to feed my dog. Each day when the dog steps on a large scale, the computer distributes 235g of food if the dog weighs more than 60 lbs, or it distributes 265g of food if the dog weighs less than 60 lbs.
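A minimal sketch of that program (the 60 lb threshold and gram amounts are from the description above; the behaviour at exactly 60 lbs is unspecified there, so I've arbitrarily put it in the lighter branch):

```python
def food_portion_g(weight_lbs):
    """Return grams of food to dispense based on the dog's weight.

    Dogs over 60 lbs get the smaller 235g portion; lighter dogs
    get 265g. Exactly 60 lbs is treated as 'not more than 60'.
    """
    if weight_lbs > 60:
        return 235
    return 265
```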
Apparently you would say that such a program is "evaluating" and "deciding" and perhaps even "thinking." But these really are word games. The computer is a passive instrument, and is not in truth evaluating or deciding, for such terms imply activity. It would be similarly false to claim that gutter sieve systems, such as LeafFilter, "evaluate" and "decide" when to let things pass into the rain gutter, depending on whether they are leaves or water.
Computers make decisions no more than LeafFilter does. You are becoming confused by the idea of a metaphorical predication. Saying that a computer decides or a rat makes mistakes is like saying that LeafFilter evaluates or that a basketball jumps. Such is not a coherent case for determinism; it is just sloppy philosophy of language.
I think it's more a matter of utility. When some system, natural or artificial, exhibits behaviour recognisably or sufficiently similar to human behaviour in some context, I think it's reasonable to use the same words to describe that behaviour. The question for me is where the line is drawn, i.e. what counts as sufficiently similar behaviour. That's a matter of judgement & preference. YMMV.
I'd say you're engaged in anthropomorphic word games, but it is helpful to know that you really do think rats make mistakes in the same way humans do. Apparently I was right all along, and you see humans as complex rats.
Lol!, no. Humans are primates, complex mammals. We share a common mammalian ancestor with rats, but the capabilities we've been discussing are widespread among vertebrates - even birds (e.g. corvids, parrots).