One of the self-drive cars already being used by Google in Nevada, in the US. EPA/Google

Self-driving cars need ‘adjustable ethics’ set by owners

One of the issues with self-driving vehicles is legal liability for death or injury in the event of an accident. If the car maker programs the car so that the driver has no choice, it is likely the company could be sued over the car’s actions.

One way around this is to shift liability to the car owner by allowing them to determine a set of values or options in the event of an accident.

People are likely to want to have the option to choose how their vehicle behaves, both in an emergency and in general, so it seems the issue of adjustable ethics will become real as robotically controlled vehicles become more common.

Self-drive is already here

With self-driving vehicles already legal on public roads in a growing number of US states, the trend is spreading around the world. The United Kingdom will allow these vehicles from January 2015.

Before there is widespread adoption, though, people will need to be comfortable with the idea of a computer being in full control of their vehicle. Much progress towards this has been made already. A growing number of cars, including mid-priced Fords, have an impressive range of accident-avoidance and driver-assist technologies like adaptive cruise control, automatic braking, lane-keeping and parking assist.

People who like driving for its own sake will probably not embrace the technology. But there are plenty of people who already love the convenience, just as they might also opt for automatic transmission over manual.

Are they safe?

After almost 500,000 km of on-road trials in the US, Google’s test cars have not been in a single accident while under computer control.

Computers have faster reaction times and do not get tired, drunk or impatient. Nor are they given to road rage. But as accident-avoidance and driver-assist technologies become more sophisticated, some ethical issues are raising their heads.

The question of how a self-driven vehicle should react when faced with an accident in which every available option leads to some number of deaths was raised earlier this month.

This is an adaptation of the “trolley problem” that ethicists use to explore the dilemma of sacrificing an innocent person to save multiple innocent people; pragmatically choosing the lesser of two evils.

An astute reader will point out that, under normal conditions, the car’s collision-avoidance system should have applied the brakes before it became a life-and-death situation. That is true most of the time, but with cars controlled by artificial intelligence (AI), we are dealing with unforeseen events for which no design currently exists.
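To make that point concrete, the core of such a collision-avoidance check can be reduced to a simple time-to-collision calculation. The sketch below is purely illustrative: the function names and the two-second threshold are assumptions made here for the example, not how any manufacturer’s system actually works.

```python
# Illustrative sketch only: a naive time-to-collision (TTC) check of the kind an
# automatic emergency braking system might perform. The names and the threshold
# are assumptions for illustration, not any real manufacturer's implementation.

def time_to_collision(gap_m: float, closing_speed_ms: float) -> float:
    """Seconds until impact if nothing changes; infinite if the gap is not closing."""
    if closing_speed_ms <= 0:
        return float("inf")
    return gap_m / closing_speed_ms


def should_brake(gap_m: float, closing_speed_ms: float, threshold_s: float = 2.0) -> bool:
    """Trigger automatic braking once the time to collision drops below the threshold."""
    return time_to_collision(gap_m, closing_speed_ms) < threshold_s


# Example: an obstacle 15 m ahead, closing at 10 m/s, gives 1.5 s to impact.
print(should_brake(gap_m=15.0, closing_speed_ms=10.0))  # True: brake now
```

A check like this handles the routine cases; the ethical dilemmas in this article arise precisely in the residue of situations where braking alone is no longer enough.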

Who is to blame for the deaths?

If car makers install a “do least harm” instruction and the car kills someone, they create legal liability for themselves. The car’s AI has decided that a person shall be sacrificed for the greater good.

Had the car’s AI not intervened, it is still possible people would have died, but it would have been you who killed them, not the car maker.

Planning for the unpredictable accident – so who’s to blame? Flickr/Johannes Ortner, CC BY-NC

Car makers will obviously want to manage their risk by letting the owner choose a policy for how the car will behave in an emergency, in effect choosing how ethically the vehicle will act (a rough sketch of what such a setting might look like follows the list below).

As Patrick Lin points out, the options are many. You could be:

  • democratic and specify that everyone has equal value
  • pragmatic, so that certain categories of person take precedence (children on a pedestrian crossing, for example)
  • self-centred and specify that your life should be preserved above all
  • materialistic and choose the action that involves the least property damage or legal liability.
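
What might such an owner-set policy look like in practice? The following is a hypothetical sketch only: the policy names mirror Lin’s list above, but the enum, the data structure and the chooser function are invented here for illustration and are not any car maker’s actual interface.

```python
# Hypothetical sketch of an owner-adjustable ethics setting. The policy names
# mirror the options listed above; everything else is invented for illustration
# and is not any real vehicle's software.

from dataclasses import dataclass
from enum import Enum, auto


class EthicsPolicy(Enum):
    DEMOCRATIC = auto()     # treat every life as having equal value
    PRAGMATIC = auto()      # give certain categories of person precedence
    SELF_CENTRED = auto()   # preserve the occupants above all
    MATERIALISTIC = auto()  # minimise property damage and legal liability


@dataclass
class EmergencyOption:
    occupant_risk: float    # estimated risk to the car's occupants (0 to 1)
    bystander_risk: float   # estimated risk to people outside the car (0 to 1)
    property_damage: float  # estimated cost of property damage


def choose_option(options: list[EmergencyOption], policy: EthicsPolicy) -> EmergencyOption:
    """Pick an emergency manoeuvre according to the owner's chosen policy."""
    if policy is EthicsPolicy.SELF_CENTRED:
        return min(options, key=lambda o: o.occupant_risk)
    if policy is EthicsPolicy.MATERIALISTIC:
        return min(options, key=lambda o: o.property_damage)
    # DEMOCRATIC and PRAGMATIC both weigh harm to people; a real system would
    # need far richer inputs than this toy score to tell them apart.
    return min(options, key=lambda o: o.occupant_risk + o.bystander_risk)
```

Even in this toy form, the design choice is visible: the owner, not the manufacturer, supplies the value judgment, which is exactly what shifts the question of liability discussed above.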

While this is clearly a legal minefield, the car maker could argue that it should not be liable for damages that result from the user’s choices – though the maker could still be faulted for giving the user a choice in the first place.

Let’s say the car maker is successful in deflecting liability. In that case, the user becomes solely responsible whether or not they have a well-considered code of ethics that can deal with life-and-death situations.

People want choice

Code of ethics or not, a recent survey found that 44% of respondents believe they should have the option to choose how the car will behave in an emergency.

About 33% thought that government law-makers should decide. Only 12% thought the car maker should decide the ethical course of action.

In Lin’s view, it then falls to the car makers to create a code of ethical conduct for robotic cars. This may well be good enough. If it is not, governments can introduce regulations, including laws that limit a car maker’s liability, in the same way that legal protection was introduced for vaccine makers because it is in the public interest that people be vaccinated.

In the end, are not the tools we use, including the computers that do things for us, just extensions of ourselves? If that is so, then we are ultimately responsible for the consequences of their use.


NOTE: This article was amended on 1 September 2014 at the Author’s request to include additional attribution.
