You wrote one of those replies that's so lengthy any response to it goes over the character limit.
I can split my response into 2 parts. I can also significantly shorten it if you acknowledge that...
1. I'm not arguing for a spirit or soul or supernatural cause for free will. Free will is already defined without any supernatural cause: the possibility of choosing differently between at least 2 options. Simple enough.
2. You seem to think the evidence for determinism is overwhelming....yet when asked what could prove or disprove it, the answer is a time machine and a highly controlled situation with extremely limited options lol. I haven't seen any of this "overwhelming evidence"...and any evidence for or against would seem to require time machines and highly controlled circumstances. Ergo, no evidence for determinism exists. Do you understand that? If not, what possible evidence of determinism exists?
3. Weird off-topic nonsense like "the justice system" gets brought up whenever the illusion of morality is mentioned. Why? I can't think of any philosopher who thinks both that determinism is true and that morals are real rather than an illusion in much the same way free will is. Why does this keep happening? Is this really a discussion about the justice system wrapped in a bad misunderstanding of determinism?

Laws and morals are not the same....in fact, most of the determinists here seem to think breaking laws regarding immigration doesn't make everyone breaking them evil or immoral monsters. As another poster put it, "laws are necessary for society to function", and yes....immigration laws exist for that reason. People are furious in Chicago because they are paying more money for fewer public services due to "sanctuary city laws", and the moral character of the people draining their economy doesn't seem to matter. We can all probably think of hundreds of laws that don't change our moral views of anyone. These things may occasionally overlap or seem similar in certain ways, but they aren't the same or even related.

Murder isn't illegal because it's immoral...in some places, hundreds or thousands of girls and women will be killed by their brothers or fathers or husbands to restore the family "honor"....and it's seen as both moral and illegal. It's hard to have a functional society without a law against murder....but that doesn't mean morals and laws are connected. You have parts of big cities looted regularly...and it's defended morally. At the same time, big stores are leaving those places permanently and it's decried as immoral, though it's totally legal.

Laws and the justice system itself will never be part of an argument for determinism, because determinists can't hold any real moral values any more than they can make real free will decisions. Do you agree, so we can skip this in my full reply? Or are you struggling with the concept for some reason? You don't have any morals....just an illusion of morality. You may want people to consider their lack of agency when determining prison sentences....but you should consider your lack of morality before suggesting anyone do or not do anything.
I understand that politics, particularly on the left, has dressed itself up in morality a lot....but it's not really my fault or anyone else's if someone buys into the idea that their politics make them a moral person. The same people who today believe their politics are morally good while the opposing politics are morally bad also thought, just 10 years ago, that we shouldn't judge people based on race...because it was so immoral to do so. Now those same people think you're immoral if race isn't the first thing you consider in all sorts of situations lol.
4. Everything you wrote about the brain is unnecessary. I'm not sure why you brought it up. Did you imagine I was suggesting our brains aren't involved in choices? I'm not saying that at all....it seems like they need to be involved in choices. Are you saying our brains are simple input-output machines? Seems like we would have had AI long ago if that were the case. The best we have so far are some AIs capable of simple problem solving, done by accessing huge data sets. It's an issue of novelty that seems to be outside the reach of all AI.
If you can understand and agree with those points (or even some of them) you'll reduce the size of my reply considerably.
Apologies for the delay in replying. I'll try to keep it short.
1. Suppose an autonomous vehicle at a T-junction can either turn left towards a nearby recharge point, or right towards its next passenger pick-up point. It is physically capable of either action. It uses a learning algorithm, with parameters and priorities adjusted by experience of numerous prior journeys, to determine whether it turns left or right from a variety of information: the charge remaining in its battery, the distance to the recharge point, the distance to the pick-up point, the time the pick-up was booked for, the speed it is capable of, the amount of traffic on the road, etc. (there's a toy sketch of what I mean below). Every time the car reaches a T-junction, even one it has encountered before, circumstances will be different and it may not turn in the same direction as it did previously.
It seems to me that the car has a possibility of choosing differently between at least 2 options, so, by your definition, it has free will.
Do you agree? If not, can you explain why not, i.e. the difference between the kind of 'choice' the car makes and a free will choice?
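To make the comparison concrete, here's a minimal sketch of the kind of learned scoring I have in mind - the feature names and weights are entirely made up for illustration, not how any real vehicle is programmed:

```python
# Toy sketch of the T-junction decision. The weights stand in for parameters
# adjusted by experience of prior journeys; the scoring rule is invented
# purely for illustration.

def choose_direction(state, weights):
    """Score 'left' (recharge) against 'right' (pick-up) for the current state."""
    # Recharging looks better when the battery is low and the charger is near.
    left_score = (
        weights["low_battery"] * (1.0 - state["battery_fraction"])
        - weights["distance"] * state["km_to_charger"]
    )
    # The pick-up looks better when time is short, the passenger is near,
    # and traffic is light.
    right_score = (
        weights["urgency"] / max(state["minutes_until_pickup"], 1.0)
        - weights["distance"] * state["km_to_pickup"]
        - weights["traffic"] * state["traffic_level"]
    )
    return "left" if left_score > right_score else "right"

weights = {"low_battery": 5.0, "urgency": 40.0, "distance": 0.8, "traffic": 1.5}

# Same junction, different circumstances -> a different 'choice'.
print(choose_direction({"battery_fraction": 0.15, "km_to_charger": 1.0,
                        "km_to_pickup": 4.0, "minutes_until_pickup": 40,
                        "traffic_level": 2}, weights))   # 'left'
print(choose_direction({"battery_fraction": 0.80, "km_to_charger": 1.0,
                        "km_to_pickup": 4.0, "minutes_until_pickup": 5,
                        "traffic_level": 2}, weights))   # 'right'
```

The outcome is fully determined by the state and the learned weights, yet the car 'could have done otherwise' in the sense that both actions were physically open to it.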
2. I already described what I mean by 'effective determinism', i.e. at roughly the scale of molecules and above, quantum indeterminacy averages out, which enables the reliable & predictable physics & chemistry that make the macro-scale everyday world we experience - including life itself - possible. This is how the Standard Model of particle physics, together with gravity, can account for the world of our everyday experience, i.e. in terms of protons, neutrons, electrons, electromagnetism, & gravity, interacting.
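The 'averages out' part is really just the law of large numbers. A quick sketch of what I mean, purely statistical and not a model of quantum mechanics:

```python
# Many small, random, independent fluctuations produce an increasingly
# predictable aggregate as the number of components grows.
import random

random.seed(0)

for n in (10, 1_000, 1_000_000):
    # Each 'component' fluctuates randomly between -1 and +1.
    mean = sum(random.uniform(-1, 1) for _ in range(n)) / n
    print(f"n = {n:>9,}: average fluctuation = {mean:+.5f}")
```

With only 10 components the average can easily land well away from zero; with a million it is pinned very close to it, which is the sense in which macro-scale behaviour becomes reliable and predictable.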
3. I agree with some of what you say. I mentioned morality and the justice system because the interplay between free will and moral responsibility has significant implications for ethics and justice systems. For instance, notions of punishment and reward are often based on the assumption that individuals are free to choose between right and wrong, and therefore can justly be held responsible for their actions.
I have my doubts about talking about morality & free will in terms of illusions because of the connotations of the word (deception, being fooled, etc), and because much of our perception of the world is based on what could be called illusions, but ones which are useful and predictive. Perhaps Dennett's concept of 'real patterns' is better.
The philosophy of morals & moral values & judgements, and their relation to other values & judgements, is complex, but if morality refers to the principles and values that guide human behavior, distinguishing between what is considered right and wrong, good and bad, then I have morality, based on feelings and adjusted by reason. But I don't hold with the idea of moral responsibility (predicated on accountability, free will, intentionality, and consequences).
4. I'm not saying our brains are 'simple input-output machines', I'm saying they're complex, learning input-output machines. Current AIs (Large Language Models) basically just model how words are used, without relation to their external referents (and so to their meaning). So they could be said to understand language 'as she is wrote', but they don't understand what it says. This is why many AI researchers think a true AGI will need sensory input & physical interaction with the world, e.g. embodiment (or a pretty good simulation of it), and will need a processing architecture to match. YMMV.
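A deliberately crude illustration of the 'modelling how words are used' point - a bigram model built from nothing but co-occurrence counts on a tiny made-up corpus. Real LLMs are vastly more sophisticated, but the grounding question is the same: nothing here links the words to anything they refer to:

```python
# Predict continuations purely from which word tends to follow which.
from collections import Counter, defaultdict

corpus = ("the car turns left the car turns right "
          "the car needs charge the passenger waits").split()

# Count which word follows which in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word, length=4):
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # most frequent follower
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # fluent-looking, but the model has no idea what a 'car' is
```

The output reads like plausible text about cars, yet the model has never encountered a car, a road, or a passenger - only word-order statistics.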
Finally, when I post on a forum, I generally try to be as clear and explanatory as I can in the hope that other readers can get an idea of what I'm on about - I realise I'm not the best at this and it often makes my posts longer than they might be, but I do what I can in the time I have ¯\_(ツ)_/¯