A 2014 magazine article co-authored by Duke University robotics professor Mary Cummings and Jason Ryan, then a PhD candidate at MIT, laid out three examples in which even trained professional pilots failed in their interactions with machines.
- A faulty landing-gear indicator light on final approach caused the 1972 crash of Eastern Air Lines Flight 401. The crew, distracted by the disagreement between the warning light and other gauges, failed to notice that the autopilot had been accidentally disengaged. No alert or warning notified the pilots, who stayed focused on the indicator problem while the aircraft descended steadily into the Everglades.
- Air France Flight 447, which crashed into the Atlantic off the coast of Brazil in 2009, involved two failures: a failure of the automation and a failure of the displays to present information to the operator. Ice-clogged pitot tubes fed the autopilot unreliable airspeed readings, causing it to disengage. The pilot flying then put the aircraft into an increasingly steep climb, eventually triggering the stall warning. Because the aircraft had been on autopilot, the pilot was distracted and not fully engaged in monitoring it, a common occurrence. When the stall warning activated, he did not understand what was happening and made the worst of all possible decisions: he attempted to increase the aircraft's climb angle, which deepened the stall and contributed to the crash.
- Northwest Airlines Flight 188 flew roughly 150 miles past its destination of Minneapolis, Minnesota, in the fall of 2009 as a consequence of operator boredom and the resulting distraction. With the aircraft on autopilot, both pilots became absorbed in conversation and failed to monitor the aircraft and its status. While using their laptops to pull up information for that conversation, they misdialed a radio frequency change, missed at least one text message from air traffic control inquiring about their location, and realized what was happening only when a flight attendant asked about the landing time. Luckily, the result was merely a late landing; the consequences could have been far more severe.
In examining those accidents, the authors concluded that "boredom and distraction, mode confusion, recovery from automation errors, skills degradation, and trust issues are major concerns" even for trained pilots.
What could also happen to drivers
Clamann said that what happened aboard Flight 188 (the third example above) is likely to happen to human drivers in autonomous cars. With the car in autonomous mode, the driver will tend to relax and stop paying attention to the instrument cluster. It's only human nature.
Programming a wrong address or entering incorrect information is technically a human error. But when an operator loses "mode awareness," no longer registering which mode the system is in or what it is doing, and the system does not explain what is going on, serious safety problems can ensue.
Automotive digital cockpit envisioned by Nvidia (Source: Nvidia)
Autonomous or semi-autonomous cars are good at handling easy situations such as going from point A to point B. But when an emergency arises and control of the vehicle must revert to the driver, there is no easy way for the human to re-engage at that moment.
Consider also the issue of "skill decay," Clamann noted. "If you don't practice enough, you lose your skill." If someone else has always been driving Miss Daisy, you don't want to see Miss Daisy take the wheel.