We're still trying to understand the horrific Germanwings tragedy, but already some people are suggesting it could have been prevented if a computer had been flying the plane. That's not the solution. We spoke to an expert about why an AI pilot would open up an entirely new set of risks and complications.

Airbus cockpit (Photo: Andrew McMillan/CC)

To tackle this issue, I recruited the help of Dr. Patrick Lin. He's the director of the Ethics + Emerging Sciences Group at California Polytechnic State University and co-author of Robot Ethics: The Ethical and Social Implications of Robotics. As he pointed out to me, AI pilots may eventually take over the cockpit without any human involvement, but that day isn't coming anytime soon.

Taking Humans Out of the Loop

As an idea, robotic pilots are nothing new. But this latest disaster is giving the idea of taking the controls out of human hands an added sense of urgency.
When we talk about the fallibility of human pilots, it's usually in regard to things like fatigue and user error. The unpredictability of a pilot's psychological status, by comparison, is rarely discussed – but Tuesday's crash highlights it as deserving of consideration. It now appears that Germanwings co-pilot Andreas Lubitz was hiding an undisclosed medical condition, one that may have been directly or indirectly related to his mental state.

"Pilots can be error-prone because humans are error-prone," Lin told io9. "Some airplane crashes happen because today's pilots are asked to sit and stare at a computer system for hours at a time, and humans aren't built for that kind of work. Pilots today also don't have enough opportunities to physically or mentally train for emergency situations any more, since so much of the flight is already automated."

Hence the calls to remove the weakest link: humans. An artificially intelligent system would never get tired or depressed, and, in theory, would never make a mistake.


Sounds great in principle — but as Lin says, the handoff of control between human pilot and auto-pilot is especially tricky.

"Even if AI pilots are safer than human pilots, the only way AI pilots would be able to avoid the Germanwings [crash] and other human-caused accidents is not only to be in control of every part of the flight — from taxiing at the airport to takeoff and landing — but also to lock out all humans from the cockpit entirely, so that they can't hit an override button and take control," he says.

Lin likens this to Google's decision to remove the steering wheel altogether from its low-speed autonomous car, the so-called "Koala Car." Similarly, if the goal is to prevent human-caused disasters, airlines are going to have to keep humans out of the AI-plane loop at every point. That's a rather unsettling prospect.

Levels of Risk

Moreover, the risk profile of a robot plane is totally different from that of a slow-moving robot car.


"What makes sense for Google doesn't necessarily make sense for, say, Lufthansa," says Lin. "Technology will fail at some point — whether it's some programming error in the millions of lines of code that operate an autonomous vehicle, or sensor error, or unforeseen event — and it matters whether technology fails on the ground or in the air. Where Google might not need a back-up piloting plan, airlines do. And this means keeping at least one human in the cockpit for emergencies."

Google's Koala Car: A much lower risk-profile than an airplane (Photo: Google)

Airplane crashes are relatively rare, especially when you consider the number of flights per year. The chances of dying in a plane crash are already phenomenally slim, arguably too slim for the marginal safety gain to justify the new risks of eliminating humans from the equation. Nor would moving to robotic pilots save much money, since labor is a relatively small fraction of airlines' operating costs.

AI Crashes

"Maybe someday AI pilots will be perfected enough to completely take over the cockpit without a back-up plan, but that day is not in our foreseeable future," Lin tells io9. "We can't even build much simpler devices, such as laptop computers and smartphones, that never crash in much more ordinary conditions."


To which I would add: robotic planes could also be hacked. Unless a completely failsafe system is developed, there's a chance that someone could remotely disrupt a plane's systems or assume control of it, and take it down. It's an incredibly frightening prospect.

Technology and humans are both prone to failure, noted Lin. "But working together, there's a redundancy in the system that can help compensate for those failures."