Why Self-Driving Cars Can’t Make An Ethical Judgement (& Why It Doesn’t Matter)


Soon you will be able to safely text at the wheel, something Tesla CEO Elon Musk promises will be possible in the immediate future. “I’m very confident about full self-driving functionality being complete by the end of this year,” Musk asserted. While the promise of being able to relax during a commute or a long road trip is alluring, self-driving cars also raise many ethical questions. Few people think about ethics as they pull out of their driveways each day, yet any driver must be prepared to make an ethical judgement at a moment’s notice.

Far from being premeditated, most of the ethical judgements people make while driving come down to instinct and unconscious bias. A driver slowing for a red light who sees that the car behind them is not has to decide in a split second whether to let the rear-end crash happen, possibly putting themselves in danger, or to run the red light and risk hitting a distracted pedestrian. Few people could say with confidence what they would do in such a situation.

The truth is that most people’s choices will depend more on biases than on anything else, as The Moral Machine Experiment study published in Nature concluded. That is deeply troubling, and the problem has no single clear root cause. With the onus of driving shifting from fallible humans to AI, however, those designing the algorithms must decide in advance how to react to every variation of the trolley problem. Many philosophers and ethicists regard programming AI to make decisions about who lives and who dies as a challenge that cannot be overcome. If AI is programmed to always favor the driver, they argue, being a pedestrian will become too dangerous. Conversely, who would buy a car that didn’t favor protecting the lives of its passengers? These dilemmas are difficult to solve, and every self-driving car that ends up on the road will need to be programmed to make these choices in advance. But is that fundamentally different from the drivers already on the road, who have to make the same moral choices anyway?
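To make concrete what “deciding in advance” means, here is a deliberately oversimplified sketch of a hard-coded collision policy. It is purely illustrative: the scenario fields, risk estimates, and weight values are hypothetical assumptions, not how any manufacturer actually encodes this behavior. Still, any deployed system will embody some fixed ranking of this kind.

```python
# Hypothetical illustration only: a hard-coded collision policy.
# Real autonomous-driving stacks do not expose ethics as a simple lookup,
# but every deployed system embodies some pre-decided ranking like this.

from dataclasses import dataclass
from typing import List


@dataclass
class Option:
    name: str                   # e.g. "brake hard", "run the red light"
    risk_to_occupants: float    # estimated probability of serious harm (0-1)
    risk_to_pedestrians: float  # estimated probability of serious harm (0-1)


# These weights ARE the ethical judgement, fixed before the car ever ships.
OCCUPANT_WEIGHT = 1.0
PEDESTRIAN_WEIGHT = 1.0  # change this and you change who the car protects


def choose(options: List[Option]) -> Option:
    """Pick the manoeuvre with the lowest weighted expected harm."""
    return min(
        options,
        key=lambda o: OCCUPANT_WEIGHT * o.risk_to_occupants
        + PEDESTRIAN_WEIGHT * o.risk_to_pedestrians,
    )


if __name__ == "__main__":
    scenario = [
        Option("brake hard and accept the rear-end collision", 0.3, 0.0),
        Option("run the red light", 0.05, 0.4),
    ]
    print(choose(scenario).name)
```

Nudge PEDESTRIAN_WEIGHT up or down and the car’s “ethics” change, which is exactly the choice engineers are being asked to settle before the vehicle ever leaves the factory.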

Self-Driving Cars Are Still Likely To Be Far Safer


When it comes to safety, AI that is carefully programmed to react quickly, and which will never be distracted or fall asleep, is clearly superior to the assortment of human drivers currently on the road. Where inexperienced teens and frail elderly drivers are prone to err behind the wheel, self-driving cars are designed never to make those kinds of mistakes. Yet the very predictability that makes self-driving cars so safe, the fact that they never make an unexpected choice, is also what leaves them fraught with ethical quandaries.

Confronting the biases that shape human decisions and programming ethical standards into self-driving cars will be difficult and problematic. But the decisions that must be made ahead of time for a self-driving car are decisions that have to be made on the road every day regardless. Ultimately, the advent of self-driving cars doesn’t present any new ethical dilemmas. Rather, it forces those developing these AI-driven cars to confront existing biases and, hopefully, to make better choices. Combined with the fact that self-driving cars will all but eliminate careless accidents, it is hard to see a legitimate moral argument against them.