Google’s self-driving cars have travelled a combined 1.7 million miles on American roads, and in all that time they have never been at fault in a collision. Volvo will have a self-driving model on Swedish highways by 2017. Elon Musk’s Tesla vehicles are already driving themselves, albeit with many limits, on roads throughout the world.
When two cars crash, how do the vehicles decide who lives and who dies? Picture yourself in charge of the switch on a trolley track. The trolley is due any minute, but when you peer down the line you see a school bus, filled with children, stalled at the crossing. No problem; that’s why you have a switch. Too bad your kid is on the other track.
This puzzler is known as the Trolley Problem, introduced to philosophy in 1967 by Philippa Foot. It’s meant to probe your ethical intuitions, and your answer says a lot about how you see the world. But in the modern world, we need to adapt this scenario as new technologies force us to confront it in practice.
Google’s cars can handle hazards like other cars suddenly swerving in front of them, but in some cases, a crash is unavoidable. In fact, Google’s cars have been in dozens of minor accidents, most of which were caused by other drivers in non-autonomous cars. How will a Google car, or any self-driving car for that matter, be programmed to handle a no-win situation? Something like a blown tire, where it must choose between swerving into oncoming traffic or steering directly into a wall? Computers can certainly make a judgement within milliseconds. They could scan the cars ahead and identify the one most likely to survive a collision, or the one with the most humans inside. But should they be programmed to do what’s best for their owners, or to make the choice that does the least harm overall, even if it means the owner dies in the impact?
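To make the tension concrete, here is a minimal, entirely hypothetical sketch in Python of the two programming policies the paragraph contrasts. The class, the casualty estimates, and the scoring rules are illustrative assumptions for this article, not anything a manufacturer actually uses; real systems would face far messier inputs.

```python
from dataclasses import dataclass

@dataclass
class CrashOption:
    """One possible maneuver and its projected human cost (hypothetical estimates)."""
    name: str
    expected_casualties: float  # projected deaths/serious injuries across all vehicles
    owner_survives: bool        # whether the car's own occupant is expected to survive

def utilitarian_choice(options):
    """Pick the maneuver with the fewest expected casualties overall,
    regardless of what happens to the owner."""
    return min(options, key=lambda o: o.expected_casualties)

def owner_first_choice(options):
    """Pick a maneuver that protects the owner; break ties by total harm."""
    return min(options, key=lambda o: (not o.owner_survives, o.expected_casualties))

options = [
    CrashOption("swerve into wall", expected_casualties=1.0, owner_survives=False),
    CrashOption("hit oncoming bus", expected_casualties=8.0, owner_survives=True),
]

print(utilitarian_choice(options).name)  # -> swerve into wall
print(owner_first_choice(options).name)  # -> hit oncoming bus
```

The two functions disagree on the same inputs, which is exactly the dilemma: the "right" answer depends on which ethical rule the programmer encoded before the crash, not on anything the car can compute in the moment.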
A death in the driver's seat
There are two philosophical approaches to a situation like this. “Utilitarianism tells us that we should always do what will produce the greatest happiness for the greatest number of people,” said Ameen Barghi, a recent UAB graduate. In other words, if it comes down to sending you into a concrete wall or swerving into the path of an oncoming bus, your car should be programmed to do the former.
Every variation of the problem forces a choice between deontology and utilitarianism. Utilitarianism holds that the morally right action is the one that produces the greatest good, whatever the circumstances, so it would simply count up the individuals involved and go with the option that benefits the majority.
However, a computer can’t be programmed to handle every situation. The history of ethics shows why. Casuistry, an applied Christian ethics in the tradition of St. Thomas Aquinas, tried to supply an answer for every problem in medicine. It failed miserably, because many cases are unique and because medicine changes constantly.
Prepping for the worst
To arrive at a conclusion, Barghi’s UAB team engages in debates, he said. “Along with Dr. Pence’s input, we constantly argue positions, and everyone on the team at some point plays devil’s advocate for the case. We try to hammer out as many potential positions and rebuttals to our case before the tournament as we can so as to provide the most comprehensive understanding of the topic. Sometimes, we will totally change our position a couple of days before the tournament because of a certain piece of input that was previously not considered.”
Barghi, who is planning to become a clinician-scientist, says that ethics debates are helpful for future healthcare professionals. “Although physicians don’t get a month of preparation before every ethical decision they have to make, activities like the ethics bowl provide miniature simulations of real-world patient care and policy decision-making. Besides that, it also provides an avenue for previously shy individuals to become more articulate and confident in their arguments.”