Should a driverless car be programmed with ethics? It will have to be. In the hypothetical scenario in which a car must choose between risking the lives of pedestrians and risking the lives of its passengers, an autonomous vehicle has to make that decision in a split second. Dr. Iyad Rahwan of MIT’s Media Lab is studying the problem.
To explore that question, Dr. Rahwan and his team conducted an online survey to gauge how people feel about cars handling this social dilemma.
Overall, they found that people were largely utilitarian, favouring the protection of the greater number of lives. For instance, they favoured a car programmed to drive into a wall rather than into a crowd of people.
Hardly surprising. However, that utilitarianism didn’t hold when people were asked whether they would buy a car with such programming. Instead, self-interest prevailed: hypothetical buyers preferred a car that would save themselves, even if it meant killing many others.
Yes, it’s always easier to answer hypothetical ethics questions than to contemplate buying a vehicle that may someday decide to kill you to save others. That’s the gist of the latest comic from CommitStrip.