UO researcher helps probe the morals of driverless cars

Google's self-driving cars lined up ready to go

In the popular 2004 movie "I, Robot," futuristic robots are programmed to follow three basic laws centered on causing no harm to humans. Driverless vehicles now pose a very similar, real-life moral dilemma, according to UO psychologist Azim Shariff and colleagues from two other institutions.

Just what is an autonomous vehicle to do when faced with running down a group of pedestrians? Plow into the group, or swerve and risk the lives of the car's occupants? And if you knew how such cars were programmed, would you still buy one?

Autonomous vehicles are projected to eliminate up to 90 percent of traffic accidents. Not all crashes will be avoidable, however, and some will require the vehicles' automated programming to make difficult ethical decisions.

These issues need to be front and center now for consumers, government regulators and manufacturers, Shariff and co-authors Jean-François Bonnefon of the University of Toulouse in France and Iyad Rahwan of the Massachusetts Institute of Technology write in the June 24 issue of the journal Science.

In a series of six online surveys that probed various consequences, the trio asked participants in the United States questions about how they would want their self-driving vehicles to behave. They found that people generally approve of cars programmed to sacrifice their passengers to save others, but these same people are not enthusiastic about riding in such utilitarian vehicles themselves.

“Our study was motivated by the recognition that the introduction of autonomous vehicles doesn’t just face technological challenges, but psychological ones," said Shariff, an associate professor in the UO Department of Psychology. "The widespread adoption of these AVs will save lives, so it’s vital to figure out how to overcome those psychological barriers.”

The scenarios involved in the survey varied in the number of pedestrian and passenger lives that could be saved, among other factors. Overall, participants said that AVs should be programmed to be utilitarian, but the same people also said they would prefer to buy cars that protected them and their passengers, especially if family members were involved. This suggests, the authors concluded, that if both self-protective and utilitarian AVs were allowed on the market, few people would be willing to ride in the latter — even though they would prefer others to do so.

The study is detailed more completely in a news release issued by MIT.

— By Jim Barlow, University Communications