The autonomous car does not have free will because, with humans, "will" indicates desire. The word "would" is a past tense of "will". It's a bit of an antiquated usage, but remember the Kipling story called The Man Who Would Be King, i.e. the man who wants or wanted to be king. If a waiter asks you if you want the soup or a salad for starters, and you say "I'll have the salad" (I will have the salad), it means you want the salad. If you're kidnapped and held captive, you're being held "against your will". You don't want to be captive. The car at the T-junction has no such desires.
Apologies for the late reply.
OK, so there's more to free will than the "possibility of choosing differently between at least 2 options". 'Desires' are necessary too. I would suggest that (in this context) a desire is a felt need for some future outcome and, for actionable outcomes, we could call this a goal.
The autonomous car has the equivalent of desires: the functional goals it has, implemented as a hierarchy of tasks with dynamic priorities. For example, its primary task is to bid for a job to pick up a client at a specified place & time and deliver them to a specified destination, unless there is a higher-priority constraint or task, e.g. fuel too low, or maintenance/repair required. High-level tasks are made up of sub-tasks: picking up a client involves an initial task of navigating to the pick-up point by the shortest or quickest route according to fuel levels, time pressure, traffic, etc., which itself involves calculating the shortest or quickest route, and so on.
When the primary task is completed, its priority drops until a new pick-up request is successfully bid for. If fuel is low or traffic too heavy to reach the client within the required window, it will not bid for the job. If fuel is too low to allow bids for a certain percentage of jobs within its area, refill becomes the highest priority, and so on.
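To make the analogy concrete, here's a minimal sketch of the sort of thing I mean. It's in Python, and every task, name and threshold in it is my own hypothetical illustration, not a description of any real vehicle's software:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Task:
    name: str
    priority: Callable[["CarState"], float]   # priority is a function of current state
    subtasks: List["Task"] = field(default_factory=list)

@dataclass
class CarState:
    fuel: float           # 0.0 (empty) .. 1.0 (full)
    has_job: bool         # currently holding a pick-up job?
    traffic_delay: float  # estimated extra minutes to reach the pick-up point

def pickup_priority(s: CarState) -> float:
    # Serving or bidding for a job only matters if fuel and traffic allow it.
    if s.fuel < 0.15 or s.traffic_delay > 20:
        return 0.0
    return 0.8 if s.has_job else 0.6

def refuel_priority(s: CarState) -> float:
    # Past a threshold, low fuel overrides the primary task.
    if s.fuel < 0.15:
        return 1.0
    return 0.3 if s.fuel < 0.4 else 0.0

TASKS = [
    Task("serve or bid for a pick-up job", pickup_priority,
         subtasks=[Task("plan shortest/quickest route", lambda s: 1.0)]),
    Task("refuel", refuel_priority),
]

def choose_task(state: CarState) -> Task:
    # "What the car wants most" is simply the highest-priority task right now.
    return max(TASKS, key=lambda t: t.priority(state))

print(choose_task(CarState(fuel=0.10, has_job=True, traffic_delay=5)).name)  # refuel
print(choose_task(CarState(fuel=0.90, has_job=True, traffic_delay=5)).name)  # serve or bid...
```

The point is just that "what the car does next" falls out of whichever goal currently has the highest priority, and those priorities shift with the car's state.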
A human driver could face the same choice as your car, but he would not be acting on algorithms. He'd be acting on what he wants. Does he want to pick up the passengers on time and have good job performance, or does he want to avoid running out of gas or electricity? And a human could do many other things that are not limited to any given algorithm. He could on a whim decide to quit his job and drive to the nearest pub to start drinking.
Even if I were to agree by some weird use of semantics that the car had will, there is no way I could agree that it was free, when everyone knows it was programmed to do specific things in specific circumstances.
The car is a learning system - it learns from past experience, adjusting its parameters and priorities according to their degree of success over time. So it will not always do the same specific things in the same specific circumstances, because it adapts by generalisation (like an AI). This is not so different, in principle, from human behaviour.
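Again, a toy illustration of the principle (the scheme and the numbers are my own, just to show what "adjusting parameters according to their degree of success" could mean in code):

```python
class RoutePreference:
    """Toy learner: keeps a running estimate of how well each route type works."""

    def __init__(self, learning_rate: float = 0.1):
        # Estimated on-time success rate for each route type, starting neutral.
        self.success = {"shortest": 0.5, "quickest": 0.5}
        self.lr = learning_rate

    def choose(self) -> str:
        # Prefer whichever route type has worked better in past experience.
        return max(self.success, key=self.success.get)

    def update(self, route: str, arrived_on_time: bool) -> None:
        # Nudge the estimate towards the latest outcome (exponential moving average).
        outcome = 1.0 if arrived_on_time else 0.0
        self.success[route] += self.lr * (outcome - self.success[route])

prefs = RoutePreference()
for _ in range(10):                  # repeated late arrivals on "shortest" routes...
    prefs.update("shortest", False)
print(prefs.choose())                # ...shift future behaviour towards "quickest"
```

Nothing in that code fixes which route the car "must" take; its behaviour falls out of its accumulated experience.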
A human also has goals (desires) consisting of a hierarchy of tasks with dynamic priorities. When the body signals to the brain that nourishment is required, a 'low fuel' state (hunger) is flagged and the 'get food' task priority rises. Past a certain threshold, depending on the priority of other tasks, it will become a primary goal.
Of course, humans are vastly more complex than the autonomous car, which was just a simple analogy. We have many more - often competing or conflicting - goals, both long-term and short-term. Our short-term goals are mainly subconscious in origin and our long-term goals are derived from social conditioning & pressure, the learned deferment or suppression of short-term goals, and planning (e.g. visualising a future self). We can select between competing short-term goals on the basis of long-term goals.
Ultimately, our algorithm isn't very complicated - we select our course of action according to the strength of our motivations (desires, goals) at a decision point, and the strength of those motivations is dynamic (affected by mood, memory, external reinforcement, etc). We assess willpower by how well we can defer short-term gratification, or how much discomfort we're prepared to endure, in favour of long-term goals.
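If you wanted to caricature that in code, it's essentially an argmax over motivation strengths, where the strengths themselves are dynamic. This is a deliberately crude sketch with made-up names and numbers, not a model of real psychology:

```python
def choose_action(motivations, willpower):
    # Boost long-term motivations by willpower, discount short-term urges,
    # then act on whichever weighted motivation is strongest right now.
    weighted = {name: strength * ((1 + willpower) if "long-term" in name else (1 - willpower))
                for name, strength in motivations.items()}
    return max(weighted, key=weighted.get)

motivations = {"finish the report (long-term)": 0.5, "check my phone (short-term)": 0.6}
print(choose_action(motivations, willpower=0.0))  # check my phone (short-term)
print(choose_action(motivations, willpower=0.5))  # finish the report (long-term)
```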
All this stuff is what David Chalmers called the 'easy problems' of consciousness, not because they were necessarily easy to explain in practice but to distinguish them from the 'hard problem' of consciousness (why there is subjective experience at all), for which it is not clear that an explanation is possible even in principle.