Published in July 2016 issue
Recently, the families of the victims of last year’s Germanwings 9525 crash received the devastating news that the suicidal co-Pilot could have been stopped—if only confidentiality laws had not been so strict.
By Eric Auxier
Issues have naturally arisen from this tragedy, such as a knee-jerk call for more thorough Pilot screening. Some have even gone so far as to ask: do we still need to put our lives in the hands of human Pilots? Hasn’t technology advanced to the point where they are an unnecessary hazard in the sky? Shouldn’t we be striving to make our cockpits human-free—and thus human-error-free?
Before we can intelligently answer that dilemma, a little perspective.
Your odds of being killed on an airline flight are 1 in 29.4 million. Airline travel is far and away the safest form of travel ever devised by man, safer even than walking. Of course, a single fatality is one too many, and any improvement in safety is always welcome. And with Boeing projecting a need for over half a million new commercial airline Pilots over the next 20 years, something, somewhere, has got to give.
But is supplanting Pilots with automation truly the answer? It’s only natural at this time to explore the potential benefits of reduced/no Pilot aircraft. But we must not be too eager to jump the gun by a century or two.
Since the Industrial Revolution, man has contemplated the challenge of creating flawless machinery. Oliver Wendell Holmes’ humorous 1851 poem, The Deacon’s Masterpiece envisioned building the perfect ‘one-hoss shay’ (one-horse carriage) that would never fail, by strengthening each weakest part until all parts were equally strong. The poem’s carriage lasted 100 years to the day—until it collapsed in shambles, all parts failing at once, and its bewildered ‘Pilot’ suddenly finding himself sitting perplexed on the ground.
At least since the classic 1920 Czech play R.U.R. (Rossum’s Universal Robots) coined the term ‘robot’—and warned of their inherent threat—man has held a healthy skepticism of technology. From Arthur Clarke and Stanley Kubrick’s homicidal HAL 9000 computer in 2001: A Space Odyssey to Star Trek’s Borg, from Ahhnold’s Terminator to Futurama’s Bender (who wanted to “kill all humans”—when he got around to it), we’ve thoroughly explored the theme of technology run amok. So, when we hear of projects that aim at manning cockpits with robots in lieu of men and women, most of us share a collective hesitance.
But is that hesitance founded on fact, or pop fiction?
In 1997, IBM’s computer Deep Blue beat chess champ Garry Kasparov, and the human race saw the writing on the screen: computers are getting ‘smarter’ than their human masters. After being defeated in 2011 by supercomputer Watson, Jeopardy champ Ken Jennings etched a quote from The Simpsons on his monitor: “I, for one, welcome our new computer overlords.”
So, are the computers’ human masters destined to become their slaves?
Well, that deep subject is far beyond the scope of this article, but I can say that we Pilots are not so welcoming of our pending overlords. For, as our HAL 9000s, Terminators and Borg have warned, this increase in technology can be its own double-edged sword—in more ways than one.
Indeed, a long-standing joke in the industry has been the automated voice over the cabin PA saying, “You’re flying on a fully-automated airplane, and nothing can go wrong—click!— go wrong—click!—go wrong.”
Pilots and Automation: a Rocky Marriage
The marriage of automation and piloting has long been a rocky one, but there is no doubt that the overall march of technology has been toward vastly improved cockpit safety.
In the 1970s, advances in automation and technology all but eliminated the third Pilot—the Flight Engineer—from cockpits. TCAS, EGPWS, and high-tech autopilots capable of landing in 0/0 conditions (ceiling and visibility nil) have not only relieved Pilot workload—thus helping to increase Pilot situational awareness—but have also extended the Pilots’ eyes and ears, alerting them to potential threats.
In fact, technology has advanced so greatly that the Pilot may now be considered the airplane’s weakest safety link. According to the ACROSS Project (see below), ‘Pilot error’ now accounts for over 60%, or nearly two-thirds, of all aviation accidents (although our own research indicates this figure to be less than one-third). Whatever the actual number, ‘Pilot error’ is increasingly being attributed to humans conflicting with automation.
“What’s it doing now?” is a common phrase that researchers and investigators have noted from flight deck recorders— suggesting, according to one NASA study, two problems: “A lack of understanding due to automation complexity, inadequate training, or both… Or, the automation does not help them to understand by telling them what it’s doing.”
One of the first incidents attributed to this rocky marriage was 1979’s Aeromexico (AM) Flight 945, flying at 30,000ft over Luxembourg. During climb, the autopilot reverted to a vertical speed mode instead of the intended and expected airspeed mode, resulting in a high-altitude stall. Fortunately, the crew recovered, and continued to destination without injuries or further incident.
But more accidents attributed to Pilot error vs. automation have piled up—with much more disastrous results—such as 1990’s fatal Indian Airlines (IC) A320 crash in Bangalore (again attributed to incorrect vertical mode and Pilot inaction). Add to that the more recent Air France (AF) 447, Asiana Airlines (OZ) Flight 214 and AirAsia (QZ) 8501 accidents, which were caused, at least in part, by Pilots not fully understanding their sophisticated instrumentation. And yet, the march toward increasing automation continues.
Enter several ‘pilot’ projects (no pun intended). NASA has long studied human factors in cockpits, leading most recently to the well-intended but politically flawed FAR 117 rules governing Pilot duty limitations and rest requirements. Now, NASA’s research is in SPO (single Pilot ops) Phase 3 (SPO-3), the agency shelling out a $4 million commission to Rockwell to study the concept of a ground-based ‘Super Dispatcher’, playing virtual FO to up to 12 inflight aircraft.
The EU’s Advanced Cockpit for Reduction of Stress and Workload (ACROSS) is a $35 million project by the European Commission and a consortium of 35 industry and research partners, including Thales Group, Airbus, and Boeing. While headlines have claimed that the ACROSS project’s objective is to eliminate co-Pilots, its stated objective is to first reduce Pilot workload during high stress times (high density traffic, inclement weather, or emergencies), and, in its Phase 3, explore the possibility of eliminating First Officers (‘co-Pilots’) from the airline cockpit as well.
While ACROSS is careful to state that it is merely exploring the “open issues of single Pilot operations,” its banner website also curiously includes a quote from Michel Ziegler, former Technical Director of Airbus: “One day, there will not be any Pilots in the cockpit.”
One tends to wonder just how “exploratory” this Phase 3 actually is.
All this research is ostensibly aimed at increasing safety, but the companies fueling the researchers’ tanks are motivated by the airlines’ bottom line: chuck a Pilot, save some serious dough—and thus sell more planes to said airlines. Ultimately, however, it will boil down to the safest system, as dictated by the insurance companies’ actuarial tables; once it’s ‘proven’ safer—at least on paper—the floodgates may open.
But the hardest sell may be with the general public. To save a few bucks, will travelers be willing to buy into the concept? Historically, consumers have voted with their dollars based on their perception of the safest form of travel. Even today, thousands of would-be nervous flyers elect to drive to their destinations—an act riskier than flying by orders of magnitude.
Hudson Miracles and Black Swan Events
R&D is the heart of progress, but, despite all these studies, the simple truth remains: just as Pilots are prone to failure or incapacitation, so, too, is automation. And this is where the human factor really shines.
For, while the Pilot may be an airplane’s weakest link, we must not forget that the human team up front, however flawed, is still its greatest safety asset as well.
Of airline Pilots, a recent Forbes article said: “The job is easy and routine, especially with today’s highly automated cockpits. But 1% of the time, you can’t pay them enough because of an emergency, bad weather, or other critical situation. This is where it really helps to have two Pilots and so these are the areas where ACROSS needs to deliver.”
While suicidal Pilots grab headlines, there have been vastly higher numbers of incidents in which vigilant Pilots have saved the day. 2009’s ‘Miracle on the Hudson’ is now a household phrase. Last year, American Airlines (AA) A321 First Officer Steven Stackelhouse diverted and landed after his Captain suffered a heart attack (Airways, February 2016).
And 1989’s United Airlines (UA) Flight 232 crash is a testament to human spirit and ingenuity in the face of certain disaster.
After the DC-10’s Number 2 (tail-mounted) Engine exploded over Sioux City, Iowa, severing all hydraulic lines, Captain Al Haynes enlisted the help of his co-Pilot, Flight Engineer, and a deadheading Check Airman. The team troubleshot the severely damaged plane, and discovered that they could control its path by varying thrust differentially on the two wing-mounted engines. Flying a circular path, they made a spectacular crash landing at Sioux City’s airport. Of the 296 people on board, 111 died—but 185 survived. It was a textbook example of a flight crew putting their heads together, thinking outside the flight envelope, and coming up with a solution that saved so many. This teamwork—called CRM, or Crew Resource Management—has been the greatest human-factor improvement to cockpit safety since the invention of the escape slide.
Artificial Intelligence is… Artificial
“Those autopilots practically fly themselves,” is a cringe-worthy statement today’s Pilots often hear from passengers. My reply? Does your Cruise Control drive your car for you?
For all its sophistication, the modern-day autopilot is nothing more than a 3D Cruise Control. It can fly you from here to TOM (Timbuktu) with uncanny precision, but it can’t decide what to have for lunch. It will do what it’s told, but nothing more.
For all the Deep Blues and Watsons in the world—wondrous feats of technology, to be sure—artificial intelligence is just that: artificial. A computer simulates human thought, but doesn’t engage in human thought.
It is estimated that, in terms of computational power, your current laptop now beats out a mouse, but still falls 1,000 times short of a human. Using Moore’s Law (the doubling of computational power every two years), we can expect your laptop to beat your brain within a couple of decades. But computational power alone by no means equates to sentient ability. Computers may be able to beat us at chess or Jeopardy; they may even be able to ‘learn’—that is, change behavior as a result of experience—but they lack the proactive judgment that is critical to any high-reliability operation.
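That timeline is easy to sanity-check. A minimal back-of-the-envelope sketch, assuming the round numbers above (a 1,000-fold gap, one doubling every two years):

```python
import math

# Assumed figures from the text: a laptop is ~1,000x below human
# computational power, and power doubles every two years (Moore's Law).
gap = 1_000                            # human power / laptop power
doublings = math.ceil(math.log2(gap))  # doublings needed to close the gap
years = doublings * 2                  # two years per doubling
print(f"{doublings} doublings = ~{years} years")  # → 10 doublings = ~20 years
```

Roughly twenty years to raw computational parity, in other words—though, as noted above, parity in processing power says nothing about judgment.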
In short, computers process, but they can’t think.
And therein lies the rub: no computer ever made can think.
When you make a mistake, will your Robo-co-Pilot be able to shout out, “Danger, Will Robinson!”—that is, trap your errors, or those of its fellow cyber-Pilot, Captain Robby the Robot? Can it think outside the flight envelope to, say, use differential thrust to save a crippled airliner?
In a recent statement to the Senate, ALPA, the Air Line Pilots Association, was equally skeptical: “A Pilot on board an aircraft can see, feel, smell or hear many indications of an impending problem and begin to formulate a course of action before even sophisticated sensors and indicators provide positive indications of trouble.”
For nonhuman flights, such as drones and military fighters, the highest stakes for failure are a few measly million in taxpayer dollars. For passenger planes… the cost is a lot higher.
“You need humans where you have humans,” Dr. Mary Cummings, Director of the Humans and Autonomy Laboratory at Duke University, said in a New York Times article. “If you have a bunch of humans on an aircraft, you’re going to need a Captain Kirk on the plane.”
One final, Kirk-worthy example of human ingenuity is the amazing story of 2010’s Qantas (QF) Flight 32.
Seven minutes after takeoff from Singapore, the A380—the world’s largest airliner—suffered a Number 2 engine explosion, damaging all but one system. Hundreds of warnings were generated by a computer system that was not only detecting actual failures but reporting false ones as well, as wires short-circuited.
By luck, Captain Richard De Crespigny had no fewer than four other highly experienced Pilots in the cockpit with him: his First Officer Matt Hicks, their relief First Officer, a Check Airman giving Captain De Crespigny a line check, and yet another Check Airman giving the line Check Airman a flight check—in total, over 100 years of aviation experience aboard one flight deck. As Captain De Crespigny explained in our three-part Black Swan Event interview (Airways, July, August, September 2015) and in his excellent book QF32, two hours of mind-numbing emergency airmanship and technical triage ensued—with warning bells constantly blaring—as the Pilots sorted out what had failed and what was left.
At one point, Captain De Crespigny said, First Officer Hicks used his intuition and judgment as an experienced Pilot to override the false approach speed calculated by the computer—a number that would have been dangerously slow.
“He knew it was wrong, he said it was wrong, and he was correct,” Captain De Crespigny proudly said.
In the end, Captain De Crespigny landed safely back in Singapore, severely out of c.g. (center of gravity) and vastly overweight, with damaged brakes, thrust reversers, and flight controls. The plane stopped 100 meters short of the end of the runway.
All 469 passengers and crew safely walked away, with not a single injury.
To date—and for the foreseeable future—there is not a single SPO-3 or C-3PO yet made that could have done what Captain Richard De Crespigny and his team did that day.
As the sophistication of automation increases, so, too, does its complexity, and its potential for failure. As Captain De Crespigny explained, “When those systems fail—and they do fail—it’s up to the Pilot now to recover an aircraft that is very complex.”
For the near future, at least, the mere thought of a single- or no-Pilot passenger plane remains an artificially intelligent— and unacceptably unsafe—idea.