Between driverless cars, autonomous weapons and AI-powered medical diagnostic tools, it seems there will be no shortage of ethically complex situations involving AI in the future.
Tesla’s Autopilot enables hands-free driving, but it’s not meant to allow drivers to take their eyes off the road.
An autonomous vehicle expert explains how Tesla’s Autopilot works, what prompted US authorities to investigate the system and what changes might be in store for the company.
The real ethical challenge of driverless cars is not deciding how they respond in emergencies – it’s facing up to the failings of human drivers.
People expect drivers to stop for them at pedestrian crossings, but what if they know autonomous vehicles will stop any time someone chooses to step in front of them?
How will people respond once they realise they can rely on autonomous vehicles to stop whenever someone steps out in front of them? Human behaviour might stand in the way of the promised ‘autopia’.
An increase in the use of self-driving cars will change parking infrastructure in cities, and hopefully result in more colourful character neighbourhoods.
Just like teenagers, robot drivers need lots of practice.
Autonomous cars need to learn how to drive just as people do: with real-world practice on public roads. That practice is key to safety, and to public confidence in the new technology.
Governments have started to see automation as the key to brighter urban futures. But what will this look like?
On March 18 in Tempe, Arizona, a self-driving Uber car struck and killed Elaine Herzberg as she walked her bicycle across a street. The human safety driver was supposed to be monitoring the car's behaviour, but did not do so. The car's systems apparently did not detect Herzberg: it neither slowed down nor tried to avoid hitting her.