The crash highlighted the need to reconsider the relationship between human behaviour and technology. Self-driving cars change the way we drive, and we need to scrutinise the impact of this on safety.
Tesla’s Autopilot does not make the car truly autonomous and self-driving. Rather, it automates driving functions such as steering, speed, braking and hazard avoidance. This is an important distinction. Autopilot provides assistance to, but is not a replacement for, the driver.
Given that both the technology and the drivers of these vehicles are still in their infancy, the risks involved could be greater than they first appear. David Lyall, a PhD candidate in Health Informatics at Macquarie University, recently published comments following the Tesla fatality. Here are some extracts:
Evidence suggests that humans have trouble recognising when automation has failed and manual intervention is required. Research shows we are poor supervisors of trusted automation, with a tendency towards over-reliance.
This is known as automation bias: when people use automation such as autopilot, they may delegate full responsibility to it rather than remain vigilant. This reduces our workload, but it also reduces our ability to recognise when automation has failed, signalling the need to take back manual control.
Automation will work exactly as programmed. A spell checker will identify typing errors, but it will not flag a wrong word that happens to be spelt correctly, for example, mistyping “from” as “form”.
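The spell-checker limitation is easy to demonstrate. The sketch below is a hypothetical, minimal dictionary-based checker (not any real product’s implementation): because “form” is itself a valid word, the typo sails through unflagged.

```python
# Minimal sketch of a dictionary-based spell checker (illustrative only).
# It flags words absent from its dictionary, so a correctly spelt wrong
# word is invisible to it.
DICTIONARY = {"i", "received", "a", "letter", "form", "from", "my", "friend"}

def spell_check(sentence):
    """Return the words in the sentence that are not in the dictionary."""
    return [w for w in sentence.lower().split() if w not in DICTIONARY]

# "form" was meant to be "from", but both are valid words: nothing is flagged.
print(spell_check("I received a letter form my friend"))  # -> []

# A genuine misspelling, by contrast, is caught.
print(spell_check("I receved a letter"))  # -> ['receved']
```

The checker works exactly as programmed; the failure lies in relying on it for a judgement (is this the intended word?) that it was never designed to make.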
Likewise, automation isn’t aware of our intentions and will sometimes act contrary to them. This frequently occurs with predictive text and autocorrect on mobile devices. Here over-reliance can result in miscommunication with some hilarious consequences as documented on the website Damn You Autocorrect.
Sometimes automation will encounter circumstances that it can’t handle, as could have occurred in the Tesla crash. GPS navigation has led drivers down dead-end roads when a highway has been rerouted but the maps not updated.
When automation gets it right, it can improve performance. But research findings show that when automation gets it wrong, performance is worse than if there had been no automation at all. Tasks we find difficult are also often difficult for automation.
In medicine, computers can help radiologists detect cancers in screening mammograms by placing prompts over suspicious features. These systems are very sensitive, identifying the majority of cancers.
But in cases where the system missed cancers, human readers with computer-aided detection missed more than readers with no automated assistance. Researchers noted cancers that were difficult for humans to detect were also difficult for computers to detect.
Technology developers need to consider more than their automation technologies. They need to understand how automation changes human behaviour. While automation is generally highly reliable, it has the potential to fail.
Read the rest of David Lyall’s article here.