One of the most pressing questions facing current AI research is: How do you program an AI to behave ethically? The phenomenon of human ethics emerges from a complex mosaic of beliefs, emotions, and ineffable motivations, a structure very different from the logical, regimented thought of an advanced AI.
The difficulty of the question is compounded by the fact that there is still fundamental disagreement among human beings about the meaning of terms like “good”, “bad”, “moral”, and “immoral”.
One area where the challenge of AI making ethical decisions is most apparent is the technology of self-driving cars. Any time a driver swerves out of the way of another person, they are making an ethical decision that carries risk. Self-driving cars may soon be required to make such ethical decisions autonomously, without the input of a human driver. When two potentially fatal outcomes are possible, how is an AI supposed to choose between the alternative courses of action? A new study based on survey responses from people worldwide suggests that the search for universally accepted ethical guidelines for self-driving cars is a much hairier process than one might think.
The study, published in Nature earlier this week, reports the results of a survey of over 2.3 million people from 200 different countries, meant to organize and systematize people’s ethical intuitions about the moral issues raised by self-driving cars. According to Edmond Awad, one of the authors of the study, “The study is basically trying to understand the kinds of moral decisions that driverless cars might have to resort to.” The results showed, perhaps unsurprisingly, that people’s intuitions about the ethics of autonomous vehicles differ by geographic region and culture.
Although there seems to be almost universal agreement on some basic ethical principles that self-driving cars should follow, the strength and ranking of ethical preferences vary among countries and regions. For example, the study found broad cross-cultural agreement that self-driving cars should prioritize human lives over non-human lives, but Western countries tend to favor sparing the lives of the young over the old more strongly than Eastern countries do. The results of the study seem to indicate that a universally agreed-upon moral code for AI might be a programmer’s fantasy.
Trolley Problems and Self-Driving Cars
To probe people’s ethical intuitions, the researchers designed the “Moral Machine”, an online module that asks people for their ethical intuitions about the decisions self-driving cars may have to make. Many of the questions are derived from the famous “trolley problem” thought experiment, first presented by the moral philosopher Philippa Foot in 1967. The trolley problem places the listener in an ethical dilemma: an out-of-control trolley is barreling down the track toward a group of five people. You have the option to flip a switch so that the trolley is diverted onto another track, where it will kill only one person. The question, then, is: should you flip the switch, killing the one to save the five?
The Moral Machine puts a number of spins on this classic thought experiment: versions where one must choose between a crowd of younger or older people, and versions where the decision is between law-abiding pedestrians and jaywalkers. Other versions ditch the switch entirely and require the bystander to actively push another person into the trolley’s path to stop it from killing the five. The general idea is that by probing people’s ethical intuitions across varied situations, the researchers could tease out some semblance of cross-cultural ethical agreement among respondents, agreement that could guide the further development of ethical guidelines for self-driving cars. The researchers also hoped to gather information about the social dynamics that shape people’s responses to these questions.
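To make the structure of these dilemmas concrete, here is a minimal sketch, in Python, of how one Moral Machine-style scenario might be represented and decided by a naive utilitarian rule. All names here are hypothetical illustrations; the study collects human judgments and does not publish a decision algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    """One potential victim in a Moral Machine-style dilemma."""
    age: int
    is_human: bool = True
    crossing_legally: bool = True

@dataclass
class Outcome:
    """One of the two courses of action the car (or bystander) can take."""
    description: str
    victims: list[Character] = field(default_factory=list)

def naive_utilitarian_choice(a: Outcome, b: Outcome) -> Outcome:
    """Toy rule encoding only the two near-universal preferences the
    survey found: spare humans over non-humans, and groups over
    individuals. An illustration, not the study's method."""
    def humans_killed(outcome: Outcome) -> int:
        return sum(1 for v in outcome.victims if v.is_human)
    # Choose whichever outcome kills fewer humans.
    return a if humans_killed(a) < humans_killed(b) else b

# The classic setup: staying the course kills five, diverting kills one.
stay = Outcome("stay on track", [Character(age=30) for _ in range(5)])
divert = Outcome("divert to side track", [Character(age=30)])
print(naive_utilitarian_choice(stay, divert).description)  # divert to side track
```

The module’s variations on age and legality exist precisely because a rule this simple cannot break ties between, say, one child and one adult.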
The study found that across cultures, there was some agreement on basic ethical norms that self-driving cars should follow. For example, the majority of respondents, independent of age, nationality, culture, or political leanings, judged that self-driving cars should always prioritize the lives of humans over the lives of non-humans and prioritize the lives of groups over individuals. Despite this higher-level agreement on ethical principles, there were differences that clustered along regional and cultural lines. For example, respondents grouped in the “Southern” region showed a greater tendency to favor sparing the lives of groups of young people over older people, especially when compared with respondents grouped in the “Eastern” region, which encompasses a number of Asian countries.
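Groupings like “Southern” and “Eastern” come from clustering countries on their aggregate preferences. A rough sketch of that kind of analysis, with invented preference scores (the real study clusters far more countries across more dimensions), might look like this:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Invented per-country preference scores: strength of preference for
# (sparing humans over animals, sparing the young, sparing larger groups).
countries = ["USA", "France", "Japan", "China", "Colombia", "Brazil"]
prefs = np.array([
    [0.80, 0.60, 0.70],  # USA
    [0.78, 0.62, 0.71],  # France
    [0.81, 0.25, 0.60],  # Japan
    [0.79, 0.22, 0.58],  # China
    [0.88, 0.80, 0.72],  # Colombia
    [0.87, 0.78, 0.74],  # Brazil
])

# Agglomerative (hierarchical) clustering of the preference vectors;
# cutting the tree into three clusters mirrors the Western / Eastern /
# Southern split reported in the paper.
tree = linkage(prefs, method="ward")
labels = fcluster(tree, t=3, criterion="maxclust")
for country, label in zip(countries, labels):
    print(f"{country}: cluster {label}")
```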
The researchers also found interesting correlations between respondents’ answers and the political environments in which they live. For example, respondents from countries with strong governmental institutions, like Japan or Finland, showed a greater tolerance for hitting people crossing the road illegally than respondents from countries with weaker institutions. Additionally, the study found that respondents from countries with high levels of economic inequality, such as Colombia, more often chose to strike “lower-status” individuals over “higher-status” individuals. These findings seem to indicate that the political and economic conditions of one’s nation shape one’s moral intuitions and judgments.
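Relationships like these are, at bottom, correlations between country-level indicators and aggregate preferences. A toy version with invented numbers (a made-up Gini coefficient per country, for instance) could be computed like so:

```python
import numpy as np

# Invented country-level data: economic inequality (Gini coefficient)
# and the strength of respondents' preference for sparing "higher-status"
# characters. Real values would come from the survey and economic datasets.
gini = np.array([25.0, 29.0, 32.0, 45.0, 51.0, 53.0])
status_preference = np.array([0.05, 0.08, 0.10, 0.22, 0.30, 0.33])

# Pearson correlation: a value near +1 means more unequal countries show
# a stronger preference for sparing high-status characters.
r = np.corrcoef(gini, status_preference)[0, 1]
print(f"inequality vs. status preference: r = {r:.2f}")
```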
So what should we make of these results? On one hand, it is difficult to see how survey responses could offer a substantive answer to the central ethical question of the trolley dilemma: is killing one person to save five ethically permissible? Simply because a large number of people agree on some moral issue does not mean their judgment is correct; after all, practices that are obviously morally wrong, like slavery, were once endorsed by a majority of people. Likewise, just because people disagree over a moral issue does not necessarily mean there is no fact of the matter. Thus, a philosophically minded critic might charge that the study is nothing more than a collection of opinions with no real consequences for the main ethical issues that autonomous cars raise. Others argue that the overly abstract and constrained nature of the situations presented in the Moral Machine precludes the results from being applicable to real-life driving.
On the other hand, the results indicate the kinds of moral issues that manufacturers of autonomous cars must be sensitive to when designing and selling their products. The success of self-driving cars will depend on people’s moral appraisals of the technology. In fact, a 2016 study showed that people routinely hold conflicting ethical beliefs about autonomous cars: although a majority of subjects claimed that autonomous cars should be designed to sacrifice their passengers when doing so would spare a greater number of pedestrians, the same people went on to say they would not buy a car designed that way. A better understanding of people’s intuitions about the ethics of autonomous cars can help introduce and acclimate people to the technology.