As the computers in cars become more intelligent, it is only a matter of time before a self-driving vehicle faces an impossible situation, one that requires proper programming in advance.
Car drivers are slowly losing various aspects of control as vehicle technology marches forward. There are cars that can parallel park themselves, set cruise control on their own and even pass other vehicles without the driver lifting a finger.
With this increase in automation, scientists are already looking into how smart vehicles should be programmed if they are ever faced with an impossible, almost no-win situation. A key to that programming will be public perception and reaction, something that was largely unknown until a recent study from Jean-Francois Bonnefon of the Toulouse School of Economics in France.
Bonnefon and associates decided to tackle public opinion and draw some conclusions based on that research. Using Amazon’s Mechanical Turk to get input from several hundred individuals, the group posed several scenarios, including the potential for the driver to be killed, but many others saved, while also adding in variables such as the age of the driver or potential victims, whether children were in the car, and even if the driver was the person responding to the questions.
The results were somewhat predictable: as long as they were not the driver themselves, respondents favored programming smart cars to minimize the potential loss of life.
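That utilitarian rule, choosing whatever action is expected to cost the fewest lives, can be sketched as a few lines of code. This is a toy illustration only, not anything from the study or from real autonomous-vehicle software; all names and probability figures are made-up assumptions.

```python
# Toy sketch of a "utilitarian" maneuver selector: pick the option that
# minimizes the expected number of fatalities. Names and numbers are
# illustrative assumptions, not real autonomous-vehicle code.

def expected_fatalities(maneuver):
    """Sum, over everyone affected, the probability that person dies."""
    return sum(maneuver["death_risks"])

def choose_maneuver(maneuvers):
    """Return the option with the lowest expected loss of life."""
    return min(maneuvers, key=expected_fatalities)

# Hypothetical dilemma: swerve into a wall (high risk, but only to the
# one passenger) vs. stay on course (lower risk per person, but five
# pedestrians are exposed).
options = [
    {"name": "swerve_into_wall", "death_risks": [0.8]},       # passenger only
    {"name": "stay_on_course", "death_risks": [0.25] * 5},    # five pedestrians
]

best = choose_maneuver(options)
print(best["name"])  # prints "swerve_into_wall": 0.8 expected deaths < 1.25
```

The sketch also shows why the follow-up questions below are hard: the arithmetic is trivial, but deciding what belongs in `death_risks`, and whether all lives weigh equally, is exactly what the study says remains unresolved.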
“[Participants] were not as confident that autonomous vehicles would be programmed that way in reality, and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves,” the study concludes.
But then, that result opened up even more questions: “Is it acceptable for an autonomous vehicle to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the car, than for the rider of the motorcycle? Should different decisions be made when children are on board, since they both have a longer time ahead of them than adults, and had less agency in being in the car in the first place? If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?”
In the end, the study says these questions need to be answered, and the algorithms formed, sooner rather than later as smart cars become more and more prevalent.
Source: Cornell University, via MIT Technology Review
Published: Oct 28, 2015 03:38 pm