QANTAS A380: “Black Swan” Event

Published in the July 2015 issue

His name is Captain Richard de Crespigny. He flies one of the world’s largest airliners, the Airbus A380, for Qantas. And in November 2010, on Flight QF32 from Singapore to Sydney, he and his crew suffered what is called a “black swan event.”

By Eric Auxier

A “black swan” is an event so rare as to be unpredictable, but one that comes with major consequences—like 9/11, or the Black Monday financial market meltdown.

On November 4, 2010, Captain de Crespigny’s black swan came in the form of an engine failure. A simple engine failure on a four-engine jet such as the A380—or even on a twin, for that matter—would not be much of an event. Moreover, it is extremely rare; only one out of every five pilots will experience one in their career.

Even so, flight crews train for engine failures all the time. So, too, hydraulic, electrical, and flight control failures—you name it, the professional airline pilot has practiced it. But what about all systems failing at once? Surely, nobody would ever face such an extreme emergency—only the most sadistic simulator instructor or flight examiner would demand such a thing of a flight crew, and he’d probably be fired for doing it.

But this was no ordinary engine failure. The Number Two engine exploded, sending shrapnel through the wings, severing cables and wires, interrupting communications between the onboard sensors, and causing major damage to all but one system—and rupturing fuel tanks to boot.

For two grueling hours, Captain de Crespigny and his gallant crew fought to restore control and safely land the crippled, massively overweight plane. Even after a dramatic landing back in Singapore—with a mere 100 meters of runway left—Captain de Crespigny and his crew wrestled with the plane for two more hours, while the Number One engine refused to shut down.

In the end, all passengers and crew walked away safely, with not a single injury.

Best-selling author of the award-winning book QF32, Captain de Crespigny is now a sought-after speaker worldwide. He sat down with Airways for an exclusive three-part interview in which he narrates his black swan experience on Airbus’s finest, the A380. This is his story.

Airways: Welcome aboard, Captain. Thank you for joining us. A hundred system failures on that day. Is that correct?

De Crespigny: We had about a hundred failures in the air and about 20 more on the ground.

You’re now a worldwide, sought-after speaker, and no doubt you have this story down. Can you tell us what happened on that fateful day?

Well, it was the 4th of November 2010. We were flying from Singapore to Sydney, and seven minutes after takeoff, engine number two exploded. It was the turbine disk itself that exploded. It broke off in three pieces. Two pieces missed the airplane. One piece hit the airplane and broke up into shrapnel, a bit like a cluster bomb or a grenade, causing some 500 detected impacts on the airplane and the fuselage. It also made some major holes in the aircraft, cut about 650 wires, and damaged 21 out of 22 systems.

The failure of the engine was not the critical problem. The problem was the loss of the systems, so we really had to assess what we had left of the computerized aircraft and find the best way to get it down on the ground. That took us two hours in the air, and on the ground there was another two hours of decision-making to really guarantee the safety of the passengers as much as possible. And that was a very difficult time in terms of decision-making. And then we got the passengers off, they all got home, and there were absolutely no injuries from our black swan event.

And so, there are a lot of lessons out of all the things that we did, which is really an amalgamation of all the skills you’ve learned through osmosis in your flying career. I’ve been flying for almost 40 years now, so you learn things during that career. So, all of the decisions you make are a culmination of all the knowledge that you assemble, and the experience, and the training, and the teamwork.

TEAMWORK, CRM AND PILOT VS. COMPUTER

Speaking of teamwork, there were five pilots in the cockpit that day, with a combined 150 years of experience. But you all had to put your heads together to get the plane down on the ground.

That’s right. What’s important in aviation is to create a “shared mental model.” In fact, you even want the cabin crew to have that shared mental model. During QF32 we didn’t talk to the cabin crew that much because the intercom wasn’t working properly [having been damaged during the event]. But we knew what they would be doing and when, and that’s again where trust comes in.

So, when you’re this high-reliability, high-risk organization, you need procedures and practices, you need knowledgeable people all having assigned tasks and responsibilities. And if they do their job, it all comes together. And on QF32, it all came together because everyone did their jobs.

QF32 really is a story of teamwork. Aircraft these days are too complicated to be flown by one pilot. It’s a team-based organization where you trust everyone to do their jobs, and they all did. We had eight teams, and they all did their job. It was a great outcome. QF32 is team excellence.

This ties into CRM, or Crew Resource Management. For example, you credit First Officer Matt Hicks for pointing out that a 146-knot approach speed was going to be too slow. This was a number spit out by a computer, and basically he used his judgment as a pilot to overrule it.

Absolutely. We’re dealing now with highly computerized systems, and if you don’t make an effort to get to understand the core of these systems, then you might become a victim or you might think the airplane is flying you.

In the old days, we could dial the telephone by spinning that dial, and anyone could do it. Now, to make a telephone call on an iPhone, we’ve got to turn it on, and we’ve got to understand all the processes to turn it on. So the computerization of the telephone brings responsibilities if you want to use it to make a call.

So, if you want to go to a high-tech aircraft that is run by computers, there is a responsibility to understand the underlying systems if you want to use them. Because when those systems fail—and they do fail—it’s up to the pilot now to recover an aircraft that is very complex and much more sophisticated. And we’ve only got two pilots these days flying the aircraft, where in the past, with simpler aircraft, we had many more pilots. But the automation now lets us have two pilots to run a very complex aircraft.

So what happens when automation fails?

When the automation fails, it’s still the pilot’s responsibility to get the passengers down on the ground. And he hasn’t got the excuse to let go or scream, or run away or panic. He has to take what he’s given, work out what he’s got left, do the Threat and Error analysis, and work out how he’s going to recover the aircraft. This takes time. With the new aircraft today, if it’s not on fire, you generally have time to sort these things out.

Many people said we should’ve just thrown the aircraft down on the ground. I disagree.

We had to work out what we had left, and how we would get that down onto the ground.

So these new, very complicated aircraft need highly skilled people who can get to the core, understand the core, and that means, when these computers fail, they can reverse-engineer these machines, treat them just like a flying lawn mower, and work out how to get the machine down onto the ground. It’s very simple, but you need a sense of reasonableness.

And that’s what Matt had, a sense of reasonableness to know that, when the slats didn’t come out, the approach speed had to increase. So in amongst all the stress of things going wrong and all the alarm bells going off, when he was given an approach speed that was similar to the normal approach speed, he knew instinctively it was wrong, and he said so. His gut feeling, his instinct, said it was wrong. And he was correct.

And he did that sitting in his seat under high stress. And so Matt did a great job. I’m very proud of Matt.




Yes, and kudos to you, Captain, for pointing that out. In our new, enlightened CRM environment, it takes everybody to fly that airplane, and the Captain is not just God.

As you know, we have a pilot flying and a pilot not flying. And the pilot not flying carries out all the checklists, and he’s tunnel-visioned and focused, looking after all the systems.

And the pilot flying, the Captain, really has to aviate, navigate and communicate. He’s monitoring what the other pilot is doing; he’s monitoring what’s happening in the cabin. He has to keep a situational awareness. He’s got to have spare processing brainpower to keep an awareness of all the things going on around him.

He can’t afford to be tunnel-visioned, because then he’s going to lose the situational awareness. He might lose sight of, say, the fuel quantity that’s slowly decreasing every minute the aircraft stays in the air. So the Captain must work hard to stay unloaded, so that he can command the situation and be on top of it.

So, Matt was extraordinarily busy, did a fantastic job, and my job was to try to sit back as much as I could and keep a broad awareness.

We recently featured a story entitled “Medical Emergency” (Airways, April 2015), which emphasizes CRM and the fact that the captain has to make difficult decisions in limited time, all while traveling at 10 miles a minute. But it all boils down to exactly what you said, something that students are taught from Day One: aviate, navigate, communicate.

That’s right. Your job is to fly the airplane, and that was our first reaction: level the aircraft and sort out the fact that we were flying. We had the autothrust fail. And to stabilize the aircraft, make sure we weren’t going to fly into the hills we were aimed towards. So, you aviate first, then navigate and then communicate. And so, we kept that priority the whole time.

And it’s important that the Captain really does keep in mind that he has to delegate. Now, we had five pilots on the crew that day. And all five pilots were very, very busy, and we delegated. So, the Captain has to assign tasks to other people and then let them do their job—not micromanage. If he’s micromanaging, that means he’s tunnel-visioning or focusing on that, to the detriment of monitoring the rest of the situation. So, you have to delegate, you have to put trust in your people to do their job. The expression is, “Trust but verify,” so you can go back and verify every now and then. And that’s what we were doing.


What was happening on the flight deck?

Everyone was busy on the flight deck. We had Matt head down, doing ECAMs [Electronic Checklists. ECAM stands for Electronic Centralized Aircraft Monitoring, i.e., the computers that detect failures and present the appropriate checklists on a display screen.] I was head up, keeping a global situational awareness. I told Mark, who was the third pilot, “Mark, if Matt and I are looking down, then you look up. If we’re looking up, then you look down.” You should never have everybody looking at the same thing at the same time.

So, the role of the Captain is really a leadership role, and it’s getting the best out of your team. Your job is to enable the team to do their job.

They’re all trained, they’re all qualified. We get assessed seven times a year in my airline, and I know they are fully qualified to be in that position. They don’t have to prove themselves, and so I let them do their jobs, and they did a great job.

And, again, this is why the QF32 story is so interesting, because of the leadership and the teamwork and the decision-making. In the book, I mentioned that, when I couldn’t guarantee the fuel—I didn’t understand the fuel page, and no one else really did—I decided to climb into what we call an Armstrong spiral, named after Neil Armstrong. And there’s a long history to that, and that’s in the book.

So, I requested to climb to 10,000 feet, and we were within 30 miles [of the airport], so I was setting up to be able to do a glide landing if all the engines failed. Now, I did that instinctively, because my gut feeling was I had to mitigate the loss of all engines.

I was in charge of the radio, so I requested that climb and they gave it to me. And all the other pilots, when they thought I was going to climb, called out, “No! Stop!”

What did you think when they asked you to stop?

I thought, “Clearly, the other pilots don’t know what I want to do. They don’t have the same feeling for wanting to get the safety of the height.” They probably also thought that, if I set the engines to high thrust, I would destroy more engines, which I wasn’t planning to do. Because, if we had firewalled the engines, I think we would have destroyed two more engines.

But the point is, I’m really glad they did say no. And it’s important that they say no. And when the other pilots in the crew say no, or when someone in any organization says, “Stop, this is not right,” you have to listen to them, because these are perhaps experts in their fields. Like when they say, “No, the space shuttle is not cleared to take off on this cold day,” or, “Maybe we shouldn’t be drilling for oil in the Gulf of Mexico using this type of mud mix.”

So, there are people out there who are experts, and when they say stop, they are probably giving you good advice, and you have to listen to it. So, I was very lucky that they did say stop, and I considered I didn’t have time to discuss it. So, I thought, “They’re not as worried as I am, so let’s continue, and if it gets worse, I will climb later.” But I deferred to their group mental model and group expertise, thinking, “Maybe I should listen to them, and we’ll discuss it later.”

When they said stop, and I didn’t climb, that is not a point of failure for any leader. That is actually a point of success for the team. And that’s the point. A win, in a situation like any aircraft situation, is a team win. Failures are because of the Captain, because they haven’t led it. So, it was good they said stop and I’m very proud that they did. And you never want to be on an aircraft with people who are afraid to say stop.

A very good point, as well as your point about delegating to unburden yourself as a Captain, to maintain that global awareness and focus on the big picture.

You only have a certain amount of brainpower. Your brain doesn’t really work all that fast. It’s got billions of sensors, inputs coming from all over your body, and it’s got your eyes trying to recognize the environment and work out what’s going on. Your brain is working 80% of the time just to keep the situational awareness up.

Think about when you’re driving in a car and you’re asked to multiply some numbers. For a complex multiplication, people will close their eyes, because they don’t have the brainpower to carry it out. They wouldn’t be able to drive the car and do the multiplication. But if you have a complex multiplication to do while you’re sitting down, you can close your eyes and take it all in. So, the Captain or the leader must delegate to keep that precious extra mind space free.

ECAMS, INVERTED THINKING AND THE GLORIFIED CESSNA

Speaking of that extra, precious mind space, there came a point when you, yourself, said, “Stop. Let’s invert the thinking.” Tell us about that.

Well, that was because I couldn’t resolve it. The Airbus has a philosophy of providing notification of failures, and we have the ECAM in the Airbus; it’s EICAS [Engine Indicating and Crew Alerting System] in a Boeing.

The A380 has 250,000 sensors feeding in, and when there’s an error, the flight warning computers look up the error message in the database and see if there is a procedure. We have 1,225 checklists in the A380. So, it would pull up the checklist procedure to handle that particular error.

So, when we had 650 broken wires and half the network slashed, the number of errors that came through, plus the inconsistent messages—because wires weren’t just being cut, they were shorting with other wires—created an overload of ECAM messages.

What do you think about ECAM?

ECAM was designed to make your life easy. But the ECAM, in our case, made life hard, for three reasons. One, it was an overload; there were too many. Two, some of the ECAMs were wrong, because the sensors were faulty, and if we had followed those checklists, perhaps we wouldn’t be here today. So, we were very critical and skeptical of the systems, and many checklists we actively did not do.

The third thing is that the ECAM checklist is really designed for a single point of failure.

So, we had lost 65% of our roll control, and that might be OK if we were in balance. But we had three fuel imbalances that were all out of limits. And the ECAM didn’t suggest the natural follow-on, which would be, will we be able to control the aircraft when we come in to land?

There was no ECAM that said, “Do a control check.” In fact, up until about a year ago, there was really no damage assessment or aerodynamic flight control check for an aircraft in the Airbus manuals. There is now.

So, the ECAM would give single errors that we were having trouble keeping up with. But when a combination of errors produces a third error… the ECAM doesn’t give any guidance. So, I’d lost my mental model of where the aircraft was, and how I was going to be able to control it later on. ECAM didn’t work, and I’d lost the picture. And at that point, I inverted the logic and said, “Instead of looking at what’s failed, start looking at what’s working.”

This was indeed a very difficult situation. Do you recall anything similar happening in the past?

That is a pure follow-on from Gene Kranz and the Apollo 13 situation. Gene Kranz’s mission controllers in Houston were also melting down, because Apollo 13 had all these failures and they didn’t know how to resolve it all, and Gene said, “Stop wondering about what’s failed, and let’s focus on what’s working.”

And that’s what we did. And it turned a complex A380, with four million parts, 1,225 checklists and 250,000 sensors, into a glorified Cessna. And so, we built our fuel, hydraulics, our landing gear, our electrics, our pneumatics… I basically built a Cessna in my mind, and at that point it became simple.


About Author

Eric Auxier

Airbus A321 Captain with over 21,000 hours flown in 35 years for a major US airline. With plenty of experience in Alaskan and Caribbean skies, he is a popular aviation blogger and the author of eight books. His novel The Last Bush Pilots made Amazon’s “Top 100 Breakthrough Novels.”
