Yet when I refer to lived experience (reality) you say that doesn't prove anything. Claiming that because some people disagree with Christ's teachings God's laws must be subjective is a logical fallacy. All it shows is that some people are not practicing Christianity despite claiming they do.
You make the mistake of thinking that just because someone says they are a Christian, they are also living as a Christian. As I said, Christ's teachings are clear, so we have a good reference point for seeing whether people are living as Christians or not.
Ah, so you get to decide if they are REAL Christians™ or not. Gotcha.
You are not understanding the logic. It is not about what you eat; it is about applying "likes and dislikes" in tastes of food to moral right and wrong. Under your example, applying it to morality would be like telling someone they are wrong and incorrect to like custard tarts because chocolate cake is the correct and right one for people to like.
There is no difference in "liking or disliking" chocolate cake, chocolate or vanilla ice cream, custard tart, or apple pie. So when a person says they like not stealing as opposed to stealing, there is no distinction as far as "likes and dislikes" in taste are concerned.
You do not seem to properly understand the difference between objective and subjective.
Don't you know?
No, they are programming a machine to make the decision. The machine has been given a set of instructions to make the decision in that situation. In no way is the programmer themselves deciding to run down a person or crash into cars. That is what the experts were saying. Do you disagree with what the expert ethicists are saying? I.e. "That's despite the fact". Facts are objectively right.
That’s despite the fact that it’s considered an extremely flawed way to think about a complicated problem by prominent ethicists and researchers.
Why the Trolley Dilemma is a terrible model for trying to make self-driving cars safer
And the way the car is programmed determines what it will do.
How you do not see the chain of responsibility is beyond me.
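To illustrate the point being argued here, a minimal hypothetical sketch (the function and scenario names are invented purely for illustration): the rule that fires in an emergency is written ahead of time, so whatever the car "does" on the road traces back to a choice made at programming time.

```python
# Minimal, hypothetical sketch of the "chain of responsibility" claim:
# the behaviour in an emergency is fixed in advance by whoever wrote the
# rule, not decided by the car at the moment of the crash.

def emergency_response(pedestrian_ahead: bool, oncoming_traffic: bool) -> str:
    """Return the action the car takes, as dictated by its programmed rule."""
    if pedestrian_ahead and not oncoming_traffic:
        return "swerve"      # rule chosen by the programmer in advance
    if pedestrian_ahead and oncoming_traffic:
        return "brake_hard"  # also chosen in advance; the car only executes it
    return "continue"

# Whatever happens on the road, the "decision" was already made here.
print(emergency_response(pedestrian_ahead=True, oncoming_traffic=True))  # brake_hard
```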
Good, at least you are allowing brakes to be included, unlike the Trolley problem. What about beeping the horn? Saying that the brakes won't stop the car in time implies that there was a very short distance between the car and the pedestrian.
You do realise that there's such a thing as reaction time, right? Why do you have the idea that all these things work instantly?
They don't.
That would imply that the pedestrian came out from nowhere (between parked cars) and ran in front of the car. If that's the case, then the driver is not at fault. The pedestrian was jaywalking, which is against the road rules, and is therefore responsible for the accident.
Once again you ignore the issue and instead resort to placing blame.
No, I am simply saying that the Trolley problem requires a person with intentions for choosing the right or wrong option. A machine cannot do that, as it has no intentions. If you point to the programmer, as mentioned above, the programmer's intention is to save lives, so he has no intention to harm or kill anyone. Therefore the Trolley problem doesn't work here.
Whether it's a programmer making a decision about how he will program a car to respond or a train signaller making a decision as to what track the trolley will take, the intention is always to save as many lives as possible. Why do you not understand that it is possible to have a situation where zero casualties is not an option?
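A minimal sketch of the point above (the option names and casualty numbers are hypothetical, purely for illustration): when every available option costs lives, "save as many lives as possible" just means picking the minimum, and zero is not among the choices.

```python
# Hypothetical sketch: the signaller's and the programmer's rule can both be
# expressed as the same objective: minimise expected casualties among the
# options actually available.
options = {
    "stay_on_track": 5,  # expected casualties if nothing is done
    "switch_track": 1,   # expected casualties after switching
}

# Pick the option with the fewest expected casualties. Note the minimum here
# is still 1: zero casualties is simply not on the menu.
best = min(options, key=options.get)
print(best, options[best])  # switch_track 1
```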
But in those situations no one has chosen to kill them. If it's a mistake, it is usually "I didn't see them", "they ran out in front of me", etc. In that case there is no intention and therefore no culpability for intentional killing. But acknowledging that mistakes happen also means that mistakes can be avoided and that the killing can be avoided. So by introducing extra criteria into the Trolley problem you are opening the door to other alternatives. That's good, because that is what real life is about.
So in the trolley problem, if you switch the track, you should be charged with the person's murder? After all, you intentionally threw the switch knowing it would result in the person's death.
But unlike in the Trolley problem, the person changing the track has no intention of killing anyone. There is no one on the track whom he knows will most certainly be killed by sending the carriage in that direction. Therefore there is no culpability.
Prove it.
How do you know the signaller didn't decide to intentionally kill one person to save five?
That's not the point. It is taking all that away and denying human agency to even try (succeed or fail) in the first place that is unreal.
We don't always have the option to take action like that. Sometimes we're just stuck with a really bad situation.
In fact, as the ethics experts said, it is damaging to the human psyche, as it forces people into unreal traumatic situations where they cannot do anything. When they are in a real situation, it can cause them to take the trolley options rather than try everything to save people.
Again, sometimes we just can't do anything. You don't seem to realise this. You seem to think we are always able to do SOMETHING to make the problem okay. We can't.
No, leading expert ethicists, who are far more knowledgeable about this than you or I, criticize it. At least real life allows human agency. You don't seem to understand the difference. It doesn't matter even if a person tries and fails to save the people; it is the fact that they are allowed to try that is important. It allows agency, and this is what reduces the person's culpability, because they never intended for anyone to be killed. But the Trolley problem denies all that and forces the person to be a robot, suppressing all their natural tendencies to try and save the person. That's why it's unreal.
Once again, the only reason you claim it isn't realistic is because you don't like the idea of a situation where you don't have enough agency to make things all better.