Experiments don’t have to take place in a laboratory: the MIT Media Lab put together the “Moral Machine” to look into how people think driverless cars should operate.
That’s the premise behind the “Moral Machine,” built by Scalable Corporation for the MIT Media Lab. Participants are asked 13 questions, each with just two options. In every scenario, a self-driving car with sudden brake failure must make a choice: continue ahead, hitting whatever is in its path, or swerve, hitting whatever is in the other lane. These are all variations on philosophy’s “trolley problem,” first formulated in the late 1960s and named a few years later. The question, “Is it more just to pull a lever, sending a trolley down a different track to kill one person, or to leave the trolley on its course, where it will kill five?”, is an inherently moral one, and slight variations can greatly change how people answer.
For the “Moral Machine,” the scenarios vary along several binary dimensions: swerve vs. stay the course; pedestrians crossing legally vs. pedestrians jaywalking; humans vs. animals; and crashing into pedestrians vs. crashing in a way that kills the car’s occupants.
There is also, curiously, room for variation in the kinds of pedestrians the runaway car could hit. The people in each scenario are male or female; children, adults, or elderly. They are athletic, large, or nondescript. They are executives, homeless people, criminals, or nondescript. One question asked me to choose between saving a pregnant woman in a car and saving “a boy, a female doctor, two female athletes, and a female executive.” I chose to swerve the car into the barricade, dooming the pregnant woman but saving the five other lives.
Trolley problems like those the Moral Machine poses are more than idle thought experiments. At the end of the problem set, the site informs test-takers that their answers were part of a data-collection effort by scientists at the MIT Media Lab for research into “autonomous machine ethics and society.” (A link lets participants opt out of submitting their data.)
It will be interesting to see what comes of these results. How does the experiment get around the sampling bias of who chooses to participate in such a study? Should the public have a voice in deciding how driverless cars are programmed to behave, particularly when it comes to life-and-death decisions? And are life-and-death decisions ultimately reducible to either/or choices?
At the same time, I like how this takes advantage of the Internet. The experiment could be conducted in a laboratory, with subjects presented a range of situations and asked to respond, but the sample size possible in a lab is much smaller than what is available online. And if this study is just the beginning of work on driverless-car ethics, a large N with a less representative sample may be desirable simply to get some idea of what people are thinking.