(Photo: Roman Boed)
This is absolutely fascinating!
The classical ethical dilemma goes something like this:
A train is about to crash into a bus full of people. If you do nothing, it will do so and kill them. If you switch the tracks, the train will instead hit and kill only one person. Do you switch the tracks?
Now let's update that dilemma and hand it over to a robot. Olivia Goldhill writes at Quartz:
Imagine you’re in a self-driving car, heading towards a collision with a group of pedestrians. The only other option is to drive off a cliff. What should the car do?
If you're the passenger, then you have a lot at stake in the decision that your robotic car makes. What should you do? I'm not sure, but psychological researchers led by Jean-François Bonnefon from the Toulouse School of Economics surveyed 900 people to ask them what they thought the car should do:
They found that 75% of people thought the car should always swerve and kill the passenger, even to save just one pedestrian.
That's very noble of them. But according to Helen Frowe, a philosophy professor at Stockholm University, it can get more complicated:
For example, a self-driving car could contain four passengers, or perhaps two children in the backseat. How does the moral calculus change?
If the car’s passengers are all adults, Frowe believes that they should die to avoid hitting one pedestrian, because the adults have chosen to be in the car and so have more moral responsibility.
Although Frowe believes that children are not morally responsible, she still argues that it’s not morally permissible to kill one person in order to save the lives of two children.
-via Marilyn Bellamy
When I drive, I have chosen to take on the responsibility of operating a machine capable of killing people. It only seems fitting that, if possible, the negative consequences fall upon me rather than upon innocent pedestrians.
If they're jaywalking, then that's a different story. In that situation, I say run them all down.
For instance, in the example given, the car knows about both dangers far enough in advance to make a decision. Why couldn't the car be designed to stop safely without hurting anyone, then? Or it picks a fourth path: it decides that running into a wall at reduced speed would be better, since the airbags and seat belts will protect the passengers sufficiently at the speed it knows it can slow to given the circumstances...
As long as the car is capable of seeing into its future far enough, it should never have to make a moral decision, merely best-decision-at-the-moment is enough to keep everyone alive, as long as it has the technical safety design to implement whatever the best decision requires. Short of another driver's active malicious interference, that is.
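The "best decision at the moment" idea in the comment above can be sketched as a simple harm-minimizing choice over the maneuvers the car can still physically execute. This is a hypothetical illustration, not how any real self-driving system works; all the names and harm estimates below are made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_injuries: float  # estimated number of people harmed (0 = everyone safe)
    feasible: bool            # can the car still physically execute this in time?

def best_maneuver(options):
    """Pick the feasible maneuver with the lowest expected harm."""
    feasible = [m for m in options if m.feasible]
    if not feasible:
        # The commenter's point: good safety engineering should make this unreachable.
        raise RuntimeError("no feasible maneuver left; safety design has failed")
    return min(feasible, key=lambda m: m.expected_injuries)

# Hypothetical scenario matching the comment: if braking to a stop is
# still feasible, it dominates every "moral dilemma" option.
options = [
    Maneuver("continue into pedestrians", expected_injuries=3.0, feasible=True),
    Maneuver("swerve off the cliff", expected_injuries=1.0, feasible=True),
    Maneuver("brake, then hit wall at low speed", expected_injuries=0.2, feasible=True),
    Maneuver("brake to a full stop", expected_injuries=0.0, feasible=True),
]

print(best_maneuver(options).name)
```

The moral dilemma only reappears when the zero-harm option drops out of the feasible set, which is exactly the commenter's argument: with enough lookahead and braking capability, the car never has to choose who dies.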
I'd be more worried about being in such a vehicle before all the bugs have been worked out of the programming.
However, accidents happen and people die in them. Even though it is rather improbable, a residual risk always remains that an accident will occur and someone will die.
People use cars anyway, accepting this small chance of dying... In Germany, 3,377 people died in traffic accidents in 2014...
On the other hand, people take part in lotteries with a much smaller chance of winning, as far fewer people win 1,000,000 bucks or more each year.
In my opinion, self-driving cars should increase the safety of traffic, but getting into a self-driving car will still carry the residual risk of the car killing you or someone else in an accident.
Finally, I would not like the car to decide who is going to die...
Unless it's malfunctioning, but then it'd be malfunctioning.
I hate you, idiot engineer at Ford.
(serious part of this post ends here)
If there are no passengers and no witnesses then it can proceed on its killing spree!