
Driverless cars are a catch-22: we do none of the driving, but take all of the responsibility


The utopian vision of the motor vehicle is an onboard auto-driver, much like an aircraft's autopilot, that takes over the task of driving and frees the human driver to work, rest or play. This is becoming an engineering reality, with the technology rapidly approaching the capabilities of aircraft autopilots.

Yet while technology can certainly compensate for some of our driving shortcomings, the hands-off vision of an autopilot for cars is marred by concerns about the driver's situational awareness, how they would take control in an emergency and, while the car is still equipped with a steering wheel and pedals, the extent to which the human driver remains responsible for the vehicle. Herein lies the catch-22: drivers are no longer required to drive, but are still required to monitor the computer that drives for them.

It’s true that driverless cars are likely to be highly reliable in most situations, most of the time. But this reliability cannot be guaranteed all of the time, and the auto-driver will inevitably encounter situations that its programmers and engineers have not anticipated. The trouble is that a highly reliable automatic pilot will cause even the most observant driver’s attention to wander. Once the novelty has worn off, monitoring the system will be like watching paint dry, and decades of research have shown that humans are extremely poor at maintaining vigilance over extended periods.

So how can working, reading, and using email and the internet – the envisaged benefits of driverless cars – be reconciled with the need to keep an eye on the vehicle? The truth is that nobody really knows.

I’ve researched vehicle automation for 20 years, and it’s clear that, in an emergency, humans are more effective than automatic pilots. Yet up to a third of drivers of automated vehicles in our simulator studies did not recover from emergencies, and I have witnessed human drivers fail to intervene when automatic systems fail. The concern is that driver and driverless car act at cross-purposes, with the driver believing the automated vehicle is in control of the situation when in fact it is not.

My research has shown that if we design the vehicle to provide continuous feedback to the driver – analogous to a chatty co-driver – we can reduce this kind of error substantially, but not completely. Drivers of automated vehicles still take, on average, five times as long as manual drivers to apply emergency braking.

On the other hand, if drivers are forced to continually monitor the vehicle’s automation, this does not diminish their workload at all. In fact, we know such monitoring cannot be sustained: driver attention falls as automation increases.

A human driver who is suddenly required to take control from the vehicle is therefore ill-prepared to do so. This means we’re asking the impossible: taking control away from the driver while leaving them with all the accountability. Lessons learned from the introduction of aircraft automation appear to be going unheeded.

It seems drivers of the future will be held responsible for something over which they have little or no control. Not that this means we should stop researching and building automated vehicles – quite the opposite. We need to learn the lessons of automation in other domains, such as aviation, and apply them to driverless vehicles.

This means designing vehicle automation in a way that engages the driver and accommodates gradual hand-over and hand-back of control, so that human drivers are successfully integrated into the system. We need a chatty co-pilot, not a silent auto-pilot.
