By Guest Author 25/10/2018


A paper published today in Nature has compiled millions of decisions made on MIT’s Moral Machine to examine whether people across the globe share views on the decisions a driverless car should make in the face of an unavoidable accident. Associate Professor Alex Sims discusses the findings.

As the article argues, the question is not if driverless cars will start being driven on our roads, but when. Autonomous cars raise the issue of the trolley problem, which was once just a thought experiment in ethics. You see a runaway trolley (in New Zealand we would describe it as a train carriage) moving towards five people lying on train tracks. Next to you is a lever that controls a switch. If you pull the lever, the trolley will be diverted onto another set of tracks, saving the five people. But there is one person lying on the other set of tracks, and pulling the lever will kill that person. Which choice is ethically correct?

Autonomous cars raise the stakes. If a crash is inevitable, say an autonomous car’s brakes fail and the car must choose between running over and killing three elderly people or swerving into a brick wall and killing its occupants, what should the car do? The authors quite rightly state that we, as a society, cannot leave the ethical principles to engineers or ethicists alone.

We need rules. It would be unconscionable for people to drive cars programmed to put the occupant’s safety ahead of everyone else’s. For example, a car cannot be programmed to run over three people simply to spare its sole occupant from crashing into a parked car.

The authors developed the Moral Machine experiment, an online ‘serious game’ that collected nearly 40 million decisions from respondents in 233 countries and territories. The project’s scope was ambitious, as it sought to explore whether a universal machine ethics is possible. While the authors acknowledged the study’s limitations (significantly more men than women responded to the survey, for example), the number and diversity of respondents is a welcome change from experiments carried out on college students in the United States.

Different scenarios were used which focused on nine factors: sparing humans (versus pets), staying on course (versus swerving), sparing passengers (versus pedestrians), sparing more lives (versus fewer lives), sparing men (versus women), sparing the young (versus the elderly), sparing pedestrians who cross legally (versus jaywalking), sparing the fit (versus the less fit), and sparing those with higher social status (versus lower social status).
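To make that design concrete, here is a minimal sketch, in Python, of how a single dilemma of this kind might be encoded. It is purely illustrative; the class and field names are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass
from typing import List

# Illustrative only: each character in a dilemma carries the attributes
# corresponding to the nine factors above. All names are hypothetical.
@dataclass
class Character:
    species: str            # "human" or "pet"
    role: str               # "passenger" or "pedestrian"
    gender: str             # "male" or "female"
    age_group: str          # "young", "adult" or "elderly"
    crossing_legally: bool  # only meaningful for pedestrians
    fitness: str            # "fit", "average" or "less fit"
    social_status: str      # "higher" or "lower"

# Each dilemma offers two outcomes: stay on course or swerve. The
# respondent clicks the outcome the car should choose, and that click
# becomes one of the roughly 40 million recorded decisions.
@dataclass
class Outcome:
    spared: List[Character]  # who survives if this outcome is chosen
    swerves: bool            # staying on course versus swerving

@dataclass
class Decision:
    stay: Outcome
    swerve: Outcome
    chosen: str              # "stay" or "swerve"
```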

Four levels of analysis were used. First, what is the relative importance of the nine factors, when data are aggregated worldwide? Second, does the intensity of each preference depend on respondents’ individual characteristics? Third, can clusters of countries with homogeneous moral preferences be identified? And fourth, do cultural and economic variations between countries predict variations in their moral preferences?
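As a rough illustration of the first level, the strength of a single preference can be summarised as the share of dilemmas in which the option embodying that preference was chosen. The paper itself uses a more sophisticated conjoint-style analysis; the sketch below, with invented data, only conveys the idea.

```python
# Invented data: True means the respondent chose the option that spared
# the humans in a humans-versus-pets dilemma where all other factors
# were held equal between the two options.
chose_humans = [True, True, True, False, True, True, True, False]

share = sum(chose_humans) / len(chose_humans)
# 0.5 would indicate indifference; values above 0.5 indicate a
# preference for sparing humans over pets.
print(f"Share of decisions sparing humans: {share:.2f}")  # 0.75
```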

While there were differences between countries, the research found three strong preferences: sparing human lives over the lives of animals, sparing more lives, and sparing young lives.

The study found that countries could be grouped into three clusters in terms of moral preferences. The first cluster was Western: in addition to New Zealand, it included North America and a number of European countries, with sub-clusters of Scandinavian and Commonwealth countries. The second was Eastern, including Japan, China and Taiwan as well as Islamic countries such as Indonesia, Pakistan and Saudi Arabia. The third was Southern, comprising the Latin American countries as well as France and countries with a French influence.

The differences between the clusters make for fascinating reading. Respondents from countries with a strong rule of law were more likely to spare more characters, more likely to favour humans over non-humans, and less likely to favour higher-status over lower-status people. Respondents from countries with higher socio-economic inequality (measured by the Gini coefficient) were more likely to spare higher-status people over lower-status people. The danger, then, is that in those countries autonomous cars would protect their wealthy owners at the expense of others.
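For readers unfamiliar with it, the Gini coefficient summarises an income distribution as a single number between 0 (everyone earns the same) and 1 (one person earns everything). Below is a short Python sketch of one standard formula, using invented incomes.

```python
def gini(incomes):
    """Gini coefficient of a list of positive incomes.

    Uses the sorted-income form:
    G = (2 * sum(i * x_i)) / (n * sum(x)) - (n + 1) / n,
    where x_1 <= ... <= x_n and i runs from 1 to n.
    """
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(gini([10, 10, 10, 10]))  # 0.0: perfect equality
print(gini([1, 1, 1, 100]))    # ~0.72: highly unequal
```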

Respondents from individualistic cultures had a stronger preference for sparing the greatest number of people, while those from collectivistic cultures showed a weaker preference for sparing younger people, which is not surprising given the respect they show to older members of their society.

Curiously, while the authors set out to see whether a universal machine ethics was possible, and found common preferences, they do not argue that a universal machine ethics should be adopted; rather, each country will set its own rules. Moreover, the authors note that ethical preferences should not necessarily dictate the ethical policy adopted, although people’s willingness to buy autonomous vehicles and tolerate their use will depend on the palatability of the rules that are adopted.

To be sure, the authors are being pragmatic, because attempting to impose a universal machine ethics would be difficult, but decisions will need to be made, such as whether the lives of a few should be sacrificed to save many. These decisions should not be left open to individual car companies, as effectively happens under the ethical rules proposed in 2017 by the German Ethics Commission on Automated and Connected Driving, which simply state that “General programming to reduce the number of personal injuries may be justifiable”. However, on an equally pragmatic note: what happens when you drive a car from one country into another with different rules? The car would be required to update its operating system to the new country’s rules, which would not necessarily always go smoothly.
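To illustrate the cross-border problem, the sketch below imagines, purely hypothetically, a car loading a per-jurisdiction rule set when it crosses a border. The country codes and rule names are invented placeholders; nothing like this appears in the paper.

```python
# Hypothetical per-country rule sets; the codes and rule names are
# invented placeholders, not real policies.
ETHICS_POLICIES = {
    "COUNTRY_A": {"prioritise_more_lives": True, "protect_legal_crossers": True},
    "COUNTRY_B": {"prioritise_more_lives": True, "protect_legal_crossers": False},
}

def load_policy(country_code: str) -> dict:
    """Return the rule set for the jurisdiction the car has entered."""
    policy = ETHICS_POLICIES.get(country_code)
    if policy is None:
        # The failure mode the article worries about: the update does
        # not go smoothly and the car has no applicable rule set.
        raise RuntimeError(f"No ethics policy configured for {country_code}")
    return policy

print(load_policy("COUNTRY_A"))
```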

What of the law? Ethical rules are interesting because there is often a large difference between ethical and legal rules. Breaking a legal rule can result in many sanctions: being fined or imprisoned, being unable to travel to certain countries, and difficulty finding employment and obtaining insurance. Ethical breaches do not incur the same sanctions. While some legal rules are ethically based (do not steal, for example), others are not: legally, an employer is entitled to pay employees the minimum wage even when they are going without food so that their children can eat, although ethically the employer should pay them more if it is making extremely large profits. Thus when the “ethical” rules are determined, these rules must be enshrined in law.

Dr Alex Sims is an associate professor at the University of Auckland’s Department of Commercial Law.