But if you don't change the track, the outcome is the same as if no one were there at all.
Then it's not a Trolley problem anymore; there's no one to be held accountable for doing anything. There has to be a person making the choice for it to be a dilemma.
So it is objectively right to kill a five-year-old to save the lives of five ninety-year-olds?
Mass murder is always worse.
So you assign equal value to all lives? A ninety-year-old's life has the same value as a five-year-old's?
Quantity, as in years of life, is not the only way to value life. Didn't you follow the logic about how the quality that people's lives have for others, and even for themselves, is important as well? What if the child is severely crippled or has cancer? What if the old people increase the quantity of life for others because they have a qualitative effect on five-year-olds and their families?
Except many Christians have very different views.
Christ's teachings show a clear moral code. There is no ambiguity. That's why Christ taught in parables: so that it was clear, and people understood and could not misinterpret things.
Your claims don't seem to apply to reality.
Yet when I refer to lived experience (reality), you say that doesn't prove anything. Claiming that the existence of some people who disagree with Christ's teachings proves God's laws are subjective is a logical fallacy. All it shows is that some people are not practicing Christianity despite claiming they do.
You make the mistake of thinking that just because someone says they are a Christian, they are also living as a Christian. As I said, Christ's teachings are clear, so we have a good reference point to use to see whether people are living as Christians or not.
Still doesn't make any difference.
I like chocolate cake. I do not like custard tarts. If someone offers me a choice between chocolate cake or custard tarts for dessert, the two choices are not equal to me.
You are not understanding the logic. It is not about what you eat; it is about applying "likes and dislikes" in tastes of food to moral right and wrong. Under your example, applying it to morality would be like telling someone they are wrong and incorrect to like custard tarts, because chocolate cake is the correct and right one for people to like.
There is no difference in "liking or disliking" chocolate cake, vanilla ice cream, custard tart, or apple pie. So when a person says they like not stealing as opposed to stealing, there is no distinction as far as "likes and dislikes" in tastes are concerned.
That you would kill so readily.
Why is killing wrong?
So what? Some person still has to make the decision as to what will happen.
No, they are programming a machine to make the decision. The machine has been given a set of instructions to make the decision in that situation. In no way is the programmer making any decision themselves to choose to run down a person or crash into cars. That is what the experts were saying. Do you disagree with what the expert ethicists are saying? i.e. "That's despite the fact". Facts are objectively right.
That’s despite the fact that it’s considered an extremely flawed way to think about a complicated problem by prominent ethicists and researchers.
Why the Trolley Dilemma is a terrible model for trying to make self-driving cars safer
Once again you seem to think that a car's brakes will stop the car instantly. They do not work that way.
Good, at least you are allowing brakes to be included, unlike the Trolley problem. What about beeping the horn? Saying that the brakes won't stop the car in time implies that there was a very short distance between the car and the pedestrian. That would imply that the pedestrian came out of nowhere (between parked cars) and ran in front of the car. If that's the case, then the driver is not at fault. The pedestrian was jaywalking, which is against the road rules, and is therefore responsible for the accident.
So you are saying it's a subjective situation and that's why humans are better at it?
No, I am simply saying that the Trolley problem requires a person with intentions choosing the right or wrong option. A machine cannot do that, as it has no intentions. If you appeal to the programmer, as mentioned above, the programmer's intention is to save lives, so he has no intention to harm or kill anyone. Therefore the Trolley problem doesn't work here.
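To illustrate that point with a minimal, invented sketch (the function name and actions are hypothetical, not any real self-driving system): the programmer fixes one general safety rule ahead of time, and the machine applies the same rule in every incident. At no point does anyone choose a particular victim in the moment.

```python
# Hypothetical illustration: a single, pre-written rule that the
# machine applies identically to every situation it encounters.
# The programmer's intent is encoded once, in advance, as "avoid harm":
# brake hard, warn, and only change lanes when the other lane is clear.

def emergency_response(obstacle_detected: bool, clear_lane_available: bool) -> list:
    """Return the fixed sequence of actions the vehicle takes."""
    if not obstacle_detected:
        return ["continue"]
    actions = ["brake_max", "sound_horn"]
    if clear_lane_available:
        # Swerving is only permitted when it endangers no one.
        actions.append("steer_to_clear_lane")
    return actions

# The same rule, applied to different incidents:
print(emergency_response(True, False))  # brake and warn; never swerve into others
print(emergency_response(True, True))   # swerve only because the lane is clear
```

Whatever happens afterwards, the decision "run down person A instead of person B" was never made by anyone; only the general harm-avoidance rule was.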
You know mistakes happen right? Do you honestly think rail workers are never hit by trains?
But in those situations no one has chosen to kill them. If it's a mistake, it is usually "I didn't see them", "they ran out in front of me", etc. In that case there is no intention and therefore no culpability for intentional killing. But acknowledging that mistakes happen also means that mistakes can be avoided and that the killing can be avoided. So by introducing extra criteria to the Trolley problem, you are opening the door to other alternatives. That's good, because that is what real life is about.
And in the real world situation I linked to, these options were not available because there was no one on the train.
But unlike the Trolley problem, the person changing the track has no intention of killing anyone. There is no one on the track whom he knows will most certainly be killed by sending the carriage in that direction. Therefore there is no culpability.
There's no guarantee that those options would be there if it happened in real life.
That's not the point. It is taking all that away and denying human agency to even try (succeed or fail) in the first place that is unreal.
In fact, as the ethics experts said, it is damaging to the human psyche, as it forces people into unreal, traumatic situations where they cannot do anything. When they are then in a real situation, it can cause people to take the trolley options rather than try everything to save people.
Yeah, because the trolley problem is nice and simple, a fact which has led you to criticize it.
No, it led expert ethicists, who are far more knowledgeable about this than you or I, to criticize it.
In real life, the problem would be a lot more difficult to figure out, and yet you STILL claim there is an objective solution! How can that be when there are a near-infinite number of variables?
At least it's real life and allows human agency. You don't seem to understand the difference. It doesn't matter even if a person tries and fails to save the people; it is the fact that they are allowed to try that is important. It allows agency, and this is what reduces the person's culpability, because they never intended for anyone to be killed. But the Trolley problem denies all that and forces the person to be a robot, suppressing all their natural tendencies to try and save the person. That's why it's unreal.