The Ethics of AI-driven Vehicles

Science-fiction writer Isaac Asimov had it all worked out in his famous “Three Laws of Robotics”. His humanoid robots were controlled by “positronic brains”, all with the three laws so deeply ingrained in their structure that they were guaranteed to follow them no matter what. The first law was “A robot may not injure a human being or, through inaction, allow a human being to come to harm”; it took precedence over the second law, which mandated obeying the orders of humans, and that came before the third law, which required the robot’s self-preservation. All of which meant that a robot was perfectly safe: it would die before harming a person. Of course, even in sci-fi things weren’t that simple: Asimov’s robot stories would be very boring if the robots didn’t run into unforeseen conflicts involving the correct interpretation of harm and the prioritization of different harms. But it was fun reading; and after all, these robots, conceived in the 1950s, were pure fiction.

Today, the fun begins to be eclipsed by the fact that intelligent robots and machines are no longer fiction, and the ethical implications of their actions are becoming a cause for serious worry.

Consider the self-driving car, which is a pretty intelligent mobile robot that carries humans inside it. Suppose the car – which has excellent image recognition capabilities – sees that a group of children has jumped into the road in front of it. Unlike us humans, the car has the processing speed and the dispassionate attitude to think matters through carefully, and it realizes that it can run over the kids, possibly killing them – or swerve into a parked car, harming, possibly killing, its own passengers. What is a car to do?

A “positronic robot” would calculate how many people would be hurt either way and choose the path of least total harm; that is what the three laws program its brain to do. But a self-driving car does not have the three laws. It has an AI program installed in its computer by human beings, the engineers at the manufacturing company. And engineers are not ethicists; they wouldn’t know what to do even if they could foresee all the possible quandaries the car might encounter. It’s a tough call even for an ethicist: who is more valuable – a child, an adult, or an elderly pedestrian? A rich man or a homeless one? A priest or a burglar? And if a burglar is better dead than a priest, how about five burglars versus two priests? Would you trust a human with these decisions – and would you trust the neural network in an AI program?
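To see how quickly the ethics gets buried in engineering, here is a minimal Python sketch of such a “least total harm” rule. Everything in it is invented for illustration – the option names, the probabilities, the severity scores – and no real self-driving system is known to work this way; the point is how much moral weight hides inside those made-up numbers.

```python
# Hypothetical "least total harm" decision rule, in the spirit of Asimov's
# First Law. All options, probabilities, and severity scores are invented.

def expected_harm(option):
    """Sum of probability-weighted harm over everyone affected by an option."""
    return sum(p * severity for p, severity in option["risks"])

def choose_action(options):
    """Pick the option whose expected total harm is lowest."""
    return min(options, key=expected_harm)

# Two invented options: brake but stay the course, or swerve into a parked car.
options = [
    {"name": "brake_straight", "risks": [(0.9, 10), (0.9, 10)]},  # two children at risk
    {"name": "swerve",         "risks": [(0.5, 10)]},             # one passenger at risk
]

print(choose_action(options)["name"])  # the calculus picks "swerve" here
```

Notice that the entire moral dilemma has been smuggled into the severity numbers: whoever decides that a child and a passenger both score 10 – or that one should score higher than the other – has already answered the priest-versus-burglar question, without ever writing it down.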

Worse, engineers and their bosses keep the interests of the company and of its customers in mind. A car that is designed to kill its single human passenger to save the lives of three pedestrians would not sell well. A car designed to kill pedestrians to save its owner may sell, but the design would not look good to public opinion, nor would the owner. What is a carmaker to do?

Another look at the ethics of smart cars: should they be made to obey the law? At first thought you’d say, of course! But humans don’t; cars that stayed strictly below the speed limit on an open highway would drive their owners nuts, and the owners would demand a program that allows higher speeds. Even if the carmaker refuses, you can expect illegal patches to that effect to become available for download from hacker sites before long.

Moreover, data shows that by obeying the law to the letter these cars can actually cause accidents. For instance, if a robot car comes to a full stop and waits for perfectly safe conditions before merging onto a freeway, the human-driven car behind it may rear-end it – because human drivers assume the car will act like a typical human and merge a bit recklessly without stopping. If the vendor made all cars more human-like in their reactions there might be fewer accidents (at least until all cars were driverless); but what vendor can knowingly program products to break the law?

And lastly: if a robo-car does actually kill people, who is morally – and legally – responsible? Even when the car acts correctly (assuming we can decide what that means), there would be a big hue and cry. And if the car makes a mistake, God help it (and its maker). The fact that human drivers are far deadlier than AIs would be forgotten by the public. And I’d hate to be in the shoes of an engineer who wrote a buggy AI program that may have caused the accident.

Who said self-driving cars are going to be boring?…

Nathan Zeldes

Nathan Zeldes is a globally recognized thought leader in the search for improved knowledge worker productivity. After a 26-year career as a manager and principal engineer at Intel Corporation, he now helps organizations solve core problems at the intersection of information technology and human behavior.
