The Moral Dilemma of Self-Driving Cars – Save the Driver Or Others

With computers in cars becoming more intelligent, it is only a matter of time before a self-driving vehicle is faced with an impossible situation, one that requires proper programming in advance.

Car drivers are slowly losing various aspects of control as vehicle technology marches forward. There are cars that can parallel park themselves, set cruise control on their own and even pass other vehicles without the driver lifting a finger.

With this increase in automation, scientists are already looking into how to program smart vehicles for the day they are faced with an impossible, almost no-win situation. A key to that programming will be public perception and reaction, something that had gone largely unmeasured until a recent study led by Jean-Francois Bonnefon of the Toulouse School of Economics in France.

Bonnefon and his associates set out to gauge public opinion and draw conclusions from it. Using Amazon’s Mechanical Turk to gather input from several hundred individuals, the group posed several scenarios, including one in which the driver is killed but many others are saved, while varying factors such as the ages of the driver and potential victims, whether children were in the car, and whether the respondent imagined themselves as the driver.

The results were somewhat predictable: As long as they were not the driver themselves, respondents favored programming smart cars in a way that would minimize the total loss of life.
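
For illustration only, the “utilitarian” rule respondents endorsed reduces to a simple decision procedure: among the maneuvers available, pick the one with the smallest expected loss of life. A minimal Python sketch follows; the maneuver names and casualty figures are invented for the example and do not come from the study.

    def choose_maneuver(maneuvers):
        # maneuvers: list of (name, expected_fatalities) pairs
        # utilitarian rule: choose the option expected to cost the fewest lives
        return min(maneuvers, key=lambda m: m[1])

    # Hypothetical no-win scenario of the kind the study describes
    options = [
        ("stay on course", 3),    # e.g., strikes a group of pedestrians
        ("swerve into wall", 1),  # e.g., sacrifices the lone occupant
    ]
    print(choose_maneuver(options))  # -> ('swerve into wall', 1)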

“[Participants] were not as confident that autonomous vehicles would be programmed that way in reality – and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves,” the study concludes.

But then, that result opened up even more questions: “Is it acceptable for an autonomous vehicle to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the car, than for the rider of the motorcycle? Should different decisions be made when children are on board, since they both have a longer time ahead of them than adults, and had less agency in being in the car in the first place? If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?”

In the end, the study says these questions need to be addressed, and the algorithms formulated, sooner rather than later as smart cars become more and more prevalent.

Source: Cornell University, via MIT Technology Review
