
Two Doors

This paper was researched and written while I was working at Defence Research & Development Canada (DRDC) in Toronto, Canada. DRDC is the national leader in defence science and technology. It develops and delivers technical solutions and advice to the Department of National Defence, the Canadian Armed Forces, other federal departments, and the safety and security communities.

The paper was prepared for Dr. Ming Hou, Senior Defence Scientist at DRDC and the Principal Authority of Human Factors Research and Development on Human Technology Interactions within the Department of National Defence.

The paper examines Human Factors Engineering as it relates to bridging the gap between Human Factors requirements and Systems Engineering. The contents of the paper are my own and do not reflect DRDC opinion or policy.

Focus

I will examine two machines (two doors) and the two tragedies that resulted from doors that would not open when they needed to be opened. These failures resulted in the deaths of three astronauts in Apollo 1 on January 27, 1967 and 150 passengers and crew onboard Germanwings Flight 9525 on March 24, 2015.

The Apollo 1 crew. Astronauts (left to right) Gus Grissom, Ed White and Roger Chaffee. (NASA)

Human Factors Engineering (HFE) should have played a more significant role in risk reduction for both events. These events would have had better outcomes, i.e., no loss of human life, if the identified areas of concern had been addressed as preliminary requirements prior to system validation.

Cockpit door control. Similar to the one used on Germanwings Flight 9525.

Human Factors Analysis (HFA)

Background

Apollo 1

Apollo 1 was the first crewed mission for the United States Apollo program. The objective of the program was to land the first human being on the moon.

The crew of Apollo 1:

  • Virgil I. “Gus” Grissom – Command Pilot
  • Edward H. White II – Senior Pilot
  • Roger B. Chaffee – Pilot

Before Apollo 1, however, it is relevant to track the journey of Command Pilot, Gus Grissom, because his journey had significant impacts on the tragedy that unfolded on January 27, 1967. Grissom was one of NASA’s seven original Mercury astronauts. Project Mercury was the first human spaceflight program in the United States. It ran from 1958 until 1963. The project was dramatized in Tom Wolfe’s book, The Right Stuff, and the subsequent film of the same name.

Grissom received a degree in Mechanical Engineering from Purdue University. He flew 100 combat missions during the Korean War. Grissom was selected to fly the second manned suborbital flight aboard Mercury spacecraft #11, renamed Liberty Bell 7 by Grissom himself.

The Liberty Bell 7 was the first Mercury operational spacecraft with a centerline window instead of two portholes. The spacecraft also had a new explosive hatch release. This would enable the pilot to exit the spacecraft quickly in the event of an emergency. As we will discover later, this type of engineering requirement was not implemented for Apollo 1 – with devastating consequences.

Liberty Bell 7 (Mercury) MR-4 Explosive Hatch Diagram. (NASA). Click to enlarge.

“The explosive hatch design used the 70 bolts of the original design, but each quarter-inch (6.35 mm) titanium bolt had a 0.06 in (1.5 mm) hole bored into it to provide a weak point. A mild detonating fuse (MDF) was installed in a channel between the inner and outer seal around the periphery of the hatch. When the MDF was ignited, the resulting gas pressure between the inner and outer seal would cause the bolts to fail in tension.” (Information from http://www.wikiwand.com/en/Mercury-Redstone_4)

“There were two ways to fire the explosive hatch during recovery. On the inside of the hatch was a knobbed plunger. The pilot could remove a pin and press the plunger with a force of 5 or 6 lbf (25 N). This would detonate the explosive charge, which would shear off the 70 bolts and propel the hatch 25 ft (7.6 m) away in one second. If the pin was left in place, a force of 40 lbf (180 N) was required to detonate the bolts. An outside rescuer could blow open the hatch by removing a small panel near the hatch and pulling a lanyard. The explosive hatch weighed 23 lb (10 kg)” (Information from http://www.wikiwand.com/en/Mercury-Redstone_4)
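The two quoted firing forces can be sanity-checked with a quick pound-force to newton conversion. This is a minimal sketch, assuming only the standard conversion factor; the 25 N and 180 N figures in the NASA description are rounded.

```python
# Convert the quoted hatch-firing forces from pound-force to newtons.
LBF_TO_N = 4.44822  # 1 pound-force in newtons

def lbf_to_newtons(lbf):
    return lbf * LBF_TO_N

# Pin removed: 5 or 6 lbf on the plunger fires the hatch.
print(round(lbf_to_newtons(5), 1))   # ≈ 22.2 N
print(round(lbf_to_newtons(6), 1))   # ≈ 26.7 N (quoted as ~25 N)

# Pin left in place: 40 lbf required.
print(round(lbf_to_newtons(40), 1))  # ≈ 177.9 N (quoted as ~180 N)
```

The design intent is visible in the numbers: with the safety pin removed, the pilot needed only a light press to fire the hatch; with the pin in place, roughly eight times the force was required.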

A fictional scene from the film The Right Stuff (1983) explores the dynamics of Human Factors and Systems Engineering. The confrontation between stakeholders (pilots) and engineers is presented to highlight how the stakeholders pushed for a window and a hatch with explosive bolts that the astronauts on the Mercury spacecraft could open themselves.

The scene presented below is heavy-handed and it doesn’t properly convey the timeline of changes made to the Liberty Bell 7 at the urging of the pilot-astronauts. The window and hatch redesigns were discussed and implemented over a longer period of time, and the interplay between the pilots and the engineers was rarely this dramatic or confrontational. But the scene does convey the importance of the new hatch design and the tension between Human Factors requirements and the Systems Engineering team.

The Liberty Bell 7 with Gus Grissom aboard was launched on July 21, 1961 at Cape Canaveral in Florida.

Video of the launch with full audio interaction between Grissom and ground control can be experienced here at the 2:15 mark until 3:15.

The man Grissom is speaking with is Col. John A. Powers, NASA Information Officer, the “voice of Mercury Control.” The audio recording of Grissom’s flight is fascinating in and of itself.

This is Grissom’s first experience in space, and he is constantly tempted to look out the window at the views from his spaceship. He has tasks to accomplish but the visual experience of outer space is too good to pass up. The visual portions of the following clips are obviously not from the flight. They are simulations. But the dialogue exchanges between Grissom and Powers are real. Here is Grissom talking about being drawn to the window, the window that the astronauts fought so hard to get.

From 6:30 – 7:10:

Powers is sympathetic to Grissom’s desire to look out his spacecraft window. At 9:55 Powers tells Grissom he can spend more time looking out his window, “You have more time to look if you like.”

From 9:40 – 10:10:

As Grissom and the Liberty Bell 7 begin re-entry into the Earth’s atmosphere Grissom begins calling out the g-forces. For perspective, an astronaut might experience around 3 g during lift-off. Fighter pilots can pull up to 9 g vertically. As Grissom hits 10 g on re-entry you can hear the considerable strain in his voice.

From 10:28 – 11:30:

This was a short flight experience for Grissom. From take-off to touchdown in the ocean the flight only lasted 15 minutes and 37 seconds. Once Grissom had landed in the ocean with a “mild jolt” the following describes what was supposed to happen according to Eric Berger: “Grissom was ready to press ahead with the final stage of his mission. ‘I felt that I was in good condition at this point and started to prepare myself for egress,’ he said. Before firing the hatch, Grissom was supposed to wait for a rescue helicopter to fly over, hook into the lifting loop on top of the capsule, and raise it out of the water. Once clear, he was to remove the cap from the detonator, pull the safety pin, and activate the firing mechanism. Then he could step onto the sill of the hatch, climb into a horse collar lowered from the helicopter, and be pulled to safety without ever getting wet.”

The key to recovering Grissom and the Liberty Bell 7 was to have the helicopter hook the spacecraft’s lifting loop prior to the hatch being fired open. This was critical because the helicopter was supposed to lift Liberty Bell 7 slightly up from the ocean so that once the hatch was blown open no ocean water would get into the capsule. But the hatch to Grissom’s spacecraft blew early and Grissom’s capsule was flooded with ocean water. He managed to get free of the capsule but his spacesuit began taking on water and he started to drown.

This short clip from the film, The Right Stuff, shows the terrifying experience that Grissom endured. One of the reasons he almost died was because the helicopter crew was focused on recovering the Liberty Bell 7 and was not aware of the life-and-death struggle that Grissom found himself in. Unlike the previous clip from The Right Stuff, this is probably an accurate depiction of what Grissom endured. Actor Fred Ward plays Gus Grissom in the film.

Grissom had nearly died in the ocean. And he lost his spacecraft. NASA concluded that there had been a malfunction with the capsule hatch and Grissom had done nothing wrong. There is an element of tragedy in Grissom’s journey: he would escape death by water only to eventually perish by fire. The other tragic outcome of the hatch failure of Liberty Bell 7 was that it prompted the NASA engineers to redesign the hatch system on their spacecraft. They didn’t want, understandably, another occurrence of a hatch blowing open on its own. This “correction” would have a devastating impact on Apollo 1 when Senior Pilot Ed White exclaimed, “Fire!”

Gus Grissom, flanked by military medical officers, on deck of the USS Randolph after his 15-minute, 37-second suborbital space mission. Grissom’s expression conveys the emotion of nearly drowning in the Atlantic Ocean after the hatch of the Mercury spacecraft, Liberty Bell 7, blew open prematurely in the choppy waters. (NASA)

One of the unfortunate outcomes of Grissom’s near-death experience with the hatch failure was the manner in which it was depicted in Tom Wolfe’s book, The Right Stuff, which became Philip Kaufman’s film by the same name.

George Weigel expressed his disappointment in Wolfe’s characterization of Gus Grissom, “Of the ‘Original Seven’ Mercury astronauts, Gus Grissom, the runt of the litter, has also gotten the shortest shrift in the public mind. Regarded at the time of his death in the January 1967 Apollo 1 fire as a prime candidate to be the first man to walk on the Moon, Grissom was posthumously eviscerated by Tom Wolfe in The Right Stuff, as Wolfe created a foil for his heroic portrait of all-star test-pilot Chuck Yeager. There, and in the movie made from Wolfe’s bestseller, Grissom was transformed in the public mind into ‘Little Gus’ or ‘Gruff Gus,’ the plodding, Hoosier-dull, slightly incompetent antithesis of superhero Yeager. Wolfe’s caricature did both history and the memory of Gus Grissom a terrible disservice.”

Weigel’s comments are especially relevant to my own experience with the history of Gus Grissom. I wrote a short essay on the fire in Apollo 1 when I was in the fifth grade of elementary school. I don’t recall how I came to know about the tragedy but I know, at that young age, I was deeply affected by the deaths of those three astronauts.

Betty Grissom and sons Scott (L) and Mark in 1965 after her husband, astronaut Virgil “Gus” Grissom, completed a three-orbit flight. (CBS News)

I was also affected by the film, The Right Stuff, and accepted as fact that Gus Grissom had panicked and “screwed the pooch” by accidentally triggering the mechanism that blew open the capsule hatch of Liberty Bell 7. I can see now that the portrayal of Gus Grissom in the book and the film was unfortunate. As a professional author and screenwriter it is an emotional reminder that great care must be taken when portraying real people, especially when someone like Gus Grissom was not around to defend himself.

In 1964 Grissom was selected as the command pilot for Gemini 3, which flew on March 23, 1965. With this mission Grissom became the first NASA astronaut to fly into space two times. The two-man flight with Grissom and John W. Young made three revolutions of the Earth and lasted for 4 hours, 52 minutes and 31 seconds. Grissom would have had much more time to “take in the views” from his window than he had on Liberty Bell 7.

Unlike the experience with the Liberty Bell 7 Grissom had a perfect landing in the ocean with Gemini. And he refused to open the hatch until the helicopter had gripped onto his capsule.

From 3:25 to 3:45:

It is relevant to point out how much engineering input Grissom provided to his NASA teams. He worked very closely with the technicians and engineers from McDonnell Aircraft, the company that was building the Gemini spacecraft. His involvement in the design and engineering was so prominent that fellow astronauts referred to the craft as “the Gusmobile.”

D.C. Agle writes about Grissom’s design and engineering influence. Grissom, as Human Factors stakeholder, was critically involved in everything. According to Agle, “What Grissom would tell the engineers at McDonnell’s St. Louis plant, where the Gemini was being built, was how to make it a pilot’s spacecraft. ‘Since we had to fly the beast, we want one that will do the best possible job,’ he wrote in Signal magazine before his Gemini flight. Grissom … became a spokesman for the astronauts during the design of the vehicle. And he was determined to see that the limitations of Mercury were not repeated. In a Gusmobile, the astronaut was going to be an integral part of the system rather than a backup.”

During this time, according to Agle, “Grissom invented the multi-axis translation thruster controller used to push the Gemini and Apollo spacecraft in linear directions for rendezvous and docking.” Without this mechanism there would have been no “man on the Moon” because rendezvous and docking was a critical procedure for getting astronauts up from the Moon and back to Earth.

In this June 1966 photo, the Apollo 1 crew practices water evacuation procedures with a full scale model of the spacecraft at Ellington AFB, near the then-Manned Spacecraft Center, Houston. In the rafts at right are astronauts Ed White and Roger Chaffee, foreground. In a raft near the spacecraft is astronaut Virgil Grissom. (NASA via AP)

Grissom was transferred to the Apollo program and was assigned as Command Pilot of the first manned mission.

Joseph Shea was the Apollo Spacecraft Program Office (ASPO) manager. He was responsible for managing the design and construction of both the Command and Service Module and the Lunar Module.

According to Charles Murray and Catherine Bly Cox, “In a spacecraft review meeting held with Shea on August 19, 1966 (a week before delivery), the crew expressed concern about the amount of flammable material (mainly nylon netting and Velcro) in the cabin, which both astronauts and technicians found convenient for holding tools and equipment in place. Although Shea gave the spacecraft a passing grade, after the meeting they gave him a crew portrait they had posed with heads bowed and hands clasped in prayer, with the inscription: It isn’t that we don’t trust you, Joe, but this time we’ve decided to go over your head (184).”

The Apollo crew (left to right, White, Grissom, Chaffee) expressed their concerns about their spacecraft’s problems by presenting this parody to ASPO manager Joseph Shea on August 19, 1966. (NASA)

The engineering challenges of Apollo 1 were vast. According to the Report of Apollo 204 Review Board the Apollo 1 spacecraft was shipped to Kennedy Space Center (KSC) in Florida on August 26, 1966, only five months prior to the fatal fire. There were “113 significant incomplete engineering changes” that had to be completed at KSC and an additional “623 engineering change orders” that had to be made and completed after delivery.

According to Mary C. White’s biography of Gus Grissom, there were continuous problems with the engineering on Spacecraft 012 (Apollo 1), “The arrival of Spacecraft 012 to the Cape only brought more problems. It soon became obvious that many designated engineering changes were incomplete. The environmental control unit leaked like a sieve and needed to be removed from the module. As a result, the launch schedule was delayed by several weeks.”

The flight crew was at the mercy of a tight time schedule to get Apollo 1 in the air. According to White, five days before the fatal fire, Grissom went home to see his family, including Betty, his wife, “Grissom made a brief stop at home before returning to the Cape. A citrus tree grew in their backyard with lemons on it as big as grapefruits. Gus yanked the largest lemon he could find off of the tree. Betty had no idea what he was up to and asked what he planned to do with the lemon. ‘I’m going to hang it on that spacecraft,’ Gus said grimly and kissed her goodbye. Betty knew that Gus would be unable to return home before the crew conducted the plugs-out test on January 27, 1967. What she did not know was that January 22 would be ‘the last time he was here at the house.’”

Human Factors Analysis (HFA)

Background

Germanwings Flight 9525

On March 24, 2015, the co-pilot* of Germanwings Flight 9525 murdered 149 passengers and crew when he crashed an Airbus A320-211 into the mountains in France in the remote commune of Prads-Haute-Bléone, 100 kilometers northwest of Nice. The co-pilot was alone in the cockpit after locking out the aircraft’s pilot, Captain Patrick Sondenheimer.

*For the purposes of this paper I will not be referring to the individual who perpetrated the murder-suicide by name. The motivations behind these senseless massacres are very often fuelled by a desperate desire for notoriety. Prime Minister Jacinda Ardern of New Zealand recently made it a point of emphasis when she refused to use the name of the individual responsible for the Christchurch mosque shootings.

The circumstances that allowed the co-pilot to be alone in the cockpit of a civilian passenger plane behind a locked, impenetrable cockpit door are varied.

The co-pilot suffered from severe depression and his condition, according to Lufthansa (the company that owned Germanwings), was not known to them. Many aspects of this information sharing, or the lack thereof, regarding the co-pilot’s mental state have been explored as contributing factors to this mass murder, but I will not be detailing those in this paper. It is a simple fact that the co-pilot should not have been working as an airline pilot. But the questions of predicting psychotic, criminal behaviour can best be explored in other research.

For this paper I am more focused on the technology that allowed the co-pilot to be alone in the cockpit. Predicting psychotic, criminal behaviour is challenging. Gun-rights groups such as the National Rifle Association in the United States lean on this difficulty, arguing for better prevention aimed at individuals who might be capable of mass murder while arguing against common-sense gun control. Obviously society would benefit from a reliable method of identifying the individuals who will commit mass murder, but no such methodology is currently in place or reliably predictive.

The question of whether a psychiatrist should report a pilot’s mental state to his/her airline is a very large question with many implications. Is it fair for someone seeking help with a mental health issue to have that issue brought to the attention of his/her employer, even when public safety is considered?

The German Depression Foundation conducted a study and concluded that 5.3 million Germans suffer from depression each year and “around 17 percent of German adults will experience a persistent disorder in their lifetime.” What the co-pilot did is extremely rare and it would be unfair to stigmatize other individuals suffering from depression. Employer engagement with a psychiatrist might cause individuals to avoid treatment for fear that they will lose their jobs.

As mentioned earlier, the mental state of the co-pilot is a very complicated narrative. What is not complicated is the technology pathway that allowed him to commit mass murder.

A relevant question about the Germanwings murder-suicide is related to the population in the cockpit. How did it happen that the co-pilot was alone in the cockpit? Automation played a role in this result.

Prior to the expanding automation of passenger aircraft, the cockpit was traditionally populated by a pilot, co-pilot, and either a flight engineer or a flight navigator. Through the early 1950s, a typical long-range transport crew had four people in the cockpit – a pilot, co-pilot, flight engineer, and a navigator. Three-person crews for large civilian passenger aircraft were common until the 1980s when the role of the flight engineer became redundant.

The flight engineer (foreground) was a part of the cockpit flight crew until the 1980s. Photo by Rainer Spoddig.

I should mention that I do recall a flight from Calgary, Canada to Los Angeles, California, in the early 1990s. I was allowed into the cockpit to “have a look” and there were three crew members in the cockpit, pilot, co-pilot, and flight engineer.

The consequences of increased automation making the flight engineer’s role redundant are obvious. It would have been much more difficult for the co-pilot to lock two members of his flight crew out of the cockpit or to overpower one of them and seize control of the aircraft.

The most significant event that impacted the circumstance of Germanwings Flight 9525 was the terrorist attack in the United States on September 11, 2001. On this day, the terrorist group al-Qaeda hijacked four passenger airliners by storming the cockpits and seizing control of them. The airliners were then intentionally crashed, resulting in the deaths of 2,977 people on the ground and aboard the airliners. This casualty total does not include the 19 hijackers.

Following 9/11, cockpit doors on passenger planes were redesigned. They were reinforced and bulletproofed to prevent unauthorized access. Most aircraft were also equipped with CCTV cameras so the pilots can monitor cabin activity, including the activity of anyone standing directly outside the cockpit doors.

The U.S. Federal Aviation Administration (FAA) was responsible for setting the rigorous security standards to protect cockpits from intrusion and attacks. The fortifying of the cockpit doors was deemed to be essential to the safety and security of the U.S. aviation system. Other countries followed this lead and many of them didn’t have a choice because their aircraft would not be allowed into U.S. airspace without the design changes.

Max Kutner of Newsweek provides a brief history of the security requirements for cockpit doors after 9/11. According to Kutner, “It wasn’t always so difficult for pilots to move in and out of the cockpit during flight; airlines began adding extra security measures to cockpit doors in the immediate aftermath of 9/11. ‘The policy before 9/11 was if you got hijacked on the plane, you accommodate them, you do what they want,’ says Peter Goelz, who served as managing director of the National Transportation Safety Board from 1995 to January 2001. After 9/11, he says, ‘there was a major reassessment of policy.’”

Kutner goes on to describe what happened within weeks of 9/11, “That reassessment came on September 28, 2001, when President George W. Bush announced that the government would award $100 million in government grants to airlines to help fund upgrades to cockpit doors. Manufacturers such as Triad International Maintenance and Advance Composite Technologies raced to provide stronger doors and reportedly experienced a bump in sales. Between October 2001 and January 2002, airlines completed upgrades on 4,000 planes, according to the Federal Aviation Administration (FAA).”

Kutner explains that even more stringent security measures were taken, “In January 2002, the FAA announced that it would require higher ‘standards to protect cockpits from intrusion and small arms or fragmentation devices.’ Airlines would need to install reinforced doors on more than 6,000 airplanes by April 2003, and within 45 days of the announcement, they would need to put ‘temporary internal locking devices’ in place on all passenger and cargo planes with cockpit doors.”

Image from aero-news.net. (2003). Retrieved March 21, 2019. Click to enlarge.

Kutner highlights the fact that total control of cockpit security is confined within the cockpit itself, “Reinforced cockpit doors are ‘designed to resist intrusion by a person who attempts to enter using physical force’ and ‘minimize penetration of shrapnel,’ the 2002 memo states. ‘The door will be designed to prevent passengers from opening it without the pilot’s permission. An internal locking device will be designed so that it can only be unlocked from inside the cockpit.’”

The internal locking device can only be unlocked inside the cockpit. This is a feature of the security system that will be looked at more thoroughly. John Magaw was the Transportation Security Administration undersecretary, and according to Kutner, Magaw described what he learned after 9/11: “Don’t lock those doors so that you can’t get in from the outside if something happens.” Kutner is referring to an interview Magaw gave to CNN in 2014 in response to the disappearance of Malaysia Airlines Flight 370.

Image source: New York Times and Airbus. Click to enlarge.

The cockpit door system on the Airbus A320 that the co-pilot was flying is shown in the infographic (above). The infographic shows that if there is one person in the cockpit and that person does not want anyone else to enter the cockpit then that person in the cockpit has complete control over the situation. This cockpit security system contributed to the tragedy of Germanwings Flight 9525.
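The locking logic described by the infographic can be sketched as a small state machine. This is a hypothetical illustration, not Airbus documentation: the switch positions (UNLOCK/NORM/LOCK), the timed emergency-code entry, and the keypad inhibit follow public descriptions of the system, and all names and timings here are assumptions for illustration only.

```python
# Hypothetical sketch of an A320-style cockpit door locking logic.
# All class, method, and state names are illustrative; the timings in
# comments (delay before emergency entry, keypad inhibit) are drawn from
# public reporting on the system and are approximate.

class CockpitDoor:
    def __init__(self):
        self.switch = "NORM"          # cockpit toggle: UNLOCK / NORM / LOCK
        self.keypad_inhibited = False

    def set_switch(self, position):
        assert position in ("UNLOCK", "NORM", "LOCK")
        self.switch = position
        if position == "LOCK":
            # Selecting LOCK denies entry and inhibits the keypad
            # (for roughly five minutes on the real system).
            self.keypad_inhibited = True

    def emergency_code_entered(self):
        """Cabin crew enters the emergency code on the outside keypad."""
        if self.keypad_inhibited or self.switch == "LOCK":
            return "ACCESS DENIED"
        # Otherwise a timed countdown begins; if the cockpit occupant
        # does not respond, the door unlocks briefly after the delay.
        return "DOOR UNLOCKS AFTER DELAY"

door = CockpitDoor()
print(door.emergency_code_entered())  # DOOR UNLOCKS AFTER DELAY
door.set_switch("LOCK")               # a sole occupant can always deny entry
print(door.emergency_code_entered())  # ACCESS DENIED
```

The Human Factors point is visible in the sketch: every transition that grants entry can be vetoed from inside the cockpit, and no action available outside the cockpit can override a LOCK selection.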

Unfortunately, there was a lengthy history of pilot suicides that preceded Germanwings Flight 9525. The following is a list of these suicides provided by Wikipedia in a section titled “Suicide by Pilot”, retrieved March 21, 2019. This list is provided to show that pilot suicide was an existing risk prior to the Germanwings crash:

  • September 26, 1976: Pilot crashes his plane into an apartment in an attempt to kill his wife and two-year-old son in the apartment.

5 fatalities: (Pilot, 4 on ground)

  • January 5, 1977: A disgruntled former employee of Connellan Airways (Connair) crashed his Beechcraft Baron into the Connair complex at the Alice Springs Airport in Australia.

5 fatalities: (Pilot, 4 on the ground)

  • August 22, 1979: Aircraft mechanic stole a plane and crashed it into a Bogota suburb.

4 fatalities: (Pilot, 3 on ground)

  • June 1, 1980: Pilot crashes his plane after an argument with his wife and mother-in-law.

7 fatalities: (Pilot, 4 passengers, 2 on ground)

  • February 9, 1982: Japan Airlines Flight 350. Pilot engaged number 2 and 3 engines’ thrust-reversers in flight. First officer and flight engineer partially regained control but plane landed in water.

24 fatalities, 150 survivors

  • September 15, 1982: Pilot crashes his plane at Bankstown Airport in Australia.

1 fatality

  • July 13, 1994: Russian air force engineer steals plane in Moscow, circles until he runs out of fuel, and crashes.

1 fatality

  • August 21, 1994: Royal Air Maroc plane crashed intentionally by pilot.

44 fatalities

  • September 12, 1994: Pilot crashes his plane into White House lawn in Washington, D.C.

1 fatality

  • April 2, 1997: Military pilot crashes plane on purpose in Colorado.

1 fatality

  • December 19, 1997: SilkAir plane crashed intentionally by pilot.

104 fatalities

  • October 11, 1999: Pilot crashes aircraft in Botswana.

1 fatality

  • October 31, 1999: Relief first officer crashes EgyptAir Flight 990.

217 fatalities

  • January 5, 2002: Teenager crashes plane into plaza in Tampa, Florida.

1 fatality

  • July 22, 2005: Pilot crashes his plane in Berlin.

1 fatality

  • February 18, 2010: Pilot crashes plane in Austin, Texas.

2 fatalities: (Pilot, 1 on the ground)

  • November 29, 2013: LAM Mozambique Airlines Flight 470 was crashed by pilot with co-pilot locked out of the cockpit.

33 fatalities

  • March 8, 2014: Malaysia Airlines Flight 370 (probable suicide)

239 fatalities (presumed)

If we focus exclusively on commercial aircraft crashes that resulted from flight crew murder-suicide we can narrow this list to the following aircraft and resulting fatalities:

  • Royal Air Maroc Flight 630: 44
  • SilkAir Flight 185: 104
  • EgyptAir Flight 990: 217
  • LAM Mozambique Airlines Flight 470: 33
  • Malaysia Airlines Flight 370: 239
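The combined toll from this narrowed list can be tallied directly (a trivial check, using only the figures listed above):

```python
# Fatalities from the commercial-aircraft flight-crew murder-suicides above.
fatalities = {
    "Royal Air Maroc Flight 630": 44,
    "SilkAir Flight 185": 104,
    "EgyptAir Flight 990": 217,
    "LAM Mozambique Airlines Flight 470": 33,
    "Malaysia Airlines Flight 370": 239,  # presumed
}
print(sum(fatalities.values()))  # 637
```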

As of Tuesday, March 24, 2015, there had been 637 fatalities involving large commercial aircraft that were the result of flight crew murder-suicide. The combination of increased cockpit security and this history of flight crew murder-suicide would have devastating results for the passengers and crew of Germanwings Flight 9525, which lifted off from Barcelona-El Prat Airport in Spain at 10:01 a.m.

The Events

Apollo 1

Friday, January 27, 1967

Cockpit – Emergency Egress

The first manned Apollo mission was scheduled for launch on February 21, 1967 at Cape Kennedy Launch Complex 34.

On January 27, 1967, Command Pilot Gus Grissom, Senior Pilot Ed White, and Pilot Roger Chaffee, arrived at Cape Kennedy’s Launch Complex 34 in Florida to participate in a Plugs Out Integrated Test of Apollo 1. According to NASA, “The purpose of this test was to demonstrate all space vehicle systems and operational procedures in as near a flight configuration as practical and to verify systems capability in a simulated launch.” This was a full dress rehearsal for launch.

Passing this test was essential to meeting the February 21 launch date. The test was considered non-hazardous because neither the launch vehicle nor the spacecraft was loaded with fuel or cryogenics. The umbilical power cords that usually supplied power were removed, i.e., the plugs were out, and the launch vehicle and command module were no longer power-harnessed to the umbilical tower.

In a gripping series of videos, astronaut Donald K. Slayton (Deke), narrates what happened the day of the fatal fire. Slayton, along with Grissom, was one of the original Mercury 7 astronauts. Slayton’s experience of that day provides a deeper understanding of what happened and equally important, what was being felt. It is appropriate that a fellow astronaut narrate the final hours of the Apollo 1 crew. Emotion is a critical factor when trying to understand how Human Factors fits into the complex schemes of engineering, especially when engineering gaps result in tragedy.

All that remains of Pad 34, from where Apollo 1 was supposed to lift off. New York Times

Slayton calls the launch area for Apollo 1 Pad 34. The memories of January 27, 1967, are seared into his memory. “Pad 34,” says Slayton. “Man, if this place could talk.”

In order to better understand the emotions that were present on the day of the Apollo 1 fire, it is relevant to look back at the emotions of Gus Grissom following his near-fatal drowning in the Atlantic Ocean. Slayton’s video provides audio of Grissom being questioned by the press after he nearly drowned.

Grissom admits to the world that he was scared when he was in the ocean. He nearly died so he should have been scared. But the way the press gallery laughs when he admits to being scared is revealing. Test pilots and astronauts are not supposed to be afraid, according to a skewed perception. You can hear Grissom’s discomfort with the laughter when he says, “Okay?”

From 1:47 – 2:20:

Why is this relevant? The mood on the day of the plugs-out test was grim. All three astronauts, but especially Grissom, had hundreds of concerns about the engineering gaps that existed on Apollo 1. But after Grissom’s experience of losing Liberty Bell 7 and admitting fear he would not allow anything to stand in his way. He had what the test pilots called Go Fever.

In Part 2 of the “Story of Apollo 1,” Slayton called Apollo “a moon ship for a half-million mile journey … the most impressive flying machine ever built.” He goes on to describe the complexity of the machine, “This was probably the most complex thing put together by humans. As much technology as a nuclear submarine crammed into a package the size of a minivan.”

Slayton comments on the tragic day, “It was a Friday at the end of an awfully long week and we were tired. We all had our eyes on the weekend.”

Lola Morrow, the astronauts’ secretary, vividly recalled the mood of the three astronauts on that fateful Friday. Morrow had worked very closely with Grissom, White, and Chaffee, and knew them, and knew their emotions, very well. “In the morning when the crew came in to the office, you know, I sensed something … I don’t know what it was that I sensed. But I picked up something from all three of them. There was a quietness about them. Instead of being ready for a test when they usually just get up and bounce out the door, it was, it was something they didn’t want to do. Their attitude was 180 from anything I’d ever seen before.”

There is video of the three astronauts dressing for the plugs-out test. It is haunting to watch these three men in such a pensive state. There is almost palpable fear in their expressions. One thing to keep in mind: This was a test they were dressing for. Think about that. Gus Grissom had pushed all possible envelopes of danger as a fighter pilot and astronaut, including his near-death experience in the ocean. Ed White had been the first American to walk in space during the Gemini 4 mission.

Ed White during his spacewalk on June 3, 1965. (NASA)

Roger Chaffee had flown a U-2 spy plane over Cuba during the missile crisis. He took the pictures of the Russian rockets that President Kennedy showed on TV.

And yet … look at them.

From: :42 to 1:00:

According to Slayton, “It would be actual flight conditions. Crew in full suits. Capsule under its own power. 100% oxygen inside and the hatch … sealed.” The hatch consisted of three parts: a removable inner hatch, which stayed inside the cockpit, a hinged outer hatch, which was part of the spacecraft’s heat shield, and an outer hatch cover, which was part of the boost protective cover enveloping the entire command module to protect it from aerodynamic heating during launch, and from launch escape rocket exhaust in the event of a launch abort. The hatch with explosive bolts that Grissom had used on Liberty Bell 7 was no more.

In this Jan. 27, 1967 photo, astronauts Virgil Grissom, right, and Roger Chaffee walk across the ramp leading from the gantry elevator to the Apollo I spaceship in Cape Kennedy, Fla., before the plugs-out test. This is the last photograph taken of them. (NASA)

Communication problems plagued the test all afternoon. The hours dragged on. At one point in the Slayton film, a frustrated Grissom remarks, “Hey, guys, how are we going to get to the Moon if we can’t talk between two buildings?”

According to Slayton, “The test was dragging on one glitch after another. Most of us had already called to say we wouldn’t be home for dinner. And then it happened.”

Image from “The Story of Apollo 1, Part 3.” (YouTube)

“Down under Gus’ seat,” according to Slayton. “Somewhere in 30 miles of wire. There was a short circuit. In the block house all we saw was another glitch on the meter.”

Image from “The Story of Apollo 1, Part 3.” (YouTube)

“Nobody knew it then,” explains Slayton. “But a spark had jumped out. It landed and sat there. In the pure oxygen. At 15 pounds pressure. It glowed. Brighter. And brighter. And then it went. Like a blow torch.” The time was 6:31 p.m.

According to the Report of Apollo 204 Review Board, somebody (some listeners and laboratory analysis indicate Grissom) exclaimed, “Hey!”, “Fire!” or “Flame!” – this was followed by two seconds of scuffling sounds through Grissom’s open microphone. This was immediately followed by someone (believed by most listeners, and supported by laboratory analysis, to be Chaffee) saying, “[I’ve, or We’ve] got a fire in the cockpit.” After 6.8 seconds of silence, a second, badly garbled transmission occurred, interpreted by various listeners as:

  • “They’re fighting a bad fire – Let’s get out … Open ’er up”
  • “We’ve got a bad fire – Let’s get out … We’re burning up,” or
  • “I’m reporting a bad fire … I’m getting out …”

This transmission lasted 5.0 seconds and ended with a cry of pain. The garbled transmission communicates vividly the priority of the moment: A desperate need to escape from the command module as quickly as possible through the hatch.

According to Report of Apollo 204 Review Board, “The intensity of the fire fed by pure oxygen caused the pressure to rise to 29 psi (200 kPa), which ruptured the command module’s inner wall. Flames and gases then rushed outside the command module through open access panels to two levels of the pad service structure. As the pressure was released by the cabin rupture, the convective rush of air caused the flames to spread across the cabin, beginning the second phase. The third phase began when most of the oxygen was consumed and was replaced with atmospheric air, essentially quenching the fire, but causing high concentrations of carbon monoxide and heavy smoke to fill the cabin, and large amounts of soot to be deposited on surfaces as they cooled. It took five minutes for the pad workers to open all three hatch layers.”

Five minutes. Five minutes to open a hatch that, in an emergency, should have been opened in seconds … from inside the cockpit.

“I called for medics and raced over to the pad,” says Slayton. “The radio was dead but I hadn’t given up on the crew. In their suits I figured maybe they still had a chance. The pad guys were burning their hands trying to get the damn hatch off, choking on toxic smoke. Almost as fast as it started, the fire was out. Finally they got the hatch off. And then we knew.”

According to astronaut Stu Roosa in the Slayton film, “Most of their suits were still white. You did not look in and see charred bodies.”

Richard Orloff has written that, “Some of the NASA control room witnesses said they saw Ed White on the television monitors, reaching for the inner hatch release handle.” Obviously as the fire spread through the cockpit the first choice for survival was the hatch, to open the hatch, and escape the fire. There were several contributing factors to the deaths of the astronauts but the hatch design had made quick escape impossible.

The film First Man (2018) depicted the journey of astronaut Neil Armstrong as he became the first human to set foot on the Moon. The film includes the friendship of Armstrong and Gus Grissom. It also contains a scene that depicts the Apollo 1 fire.

https://youtu.be/i_YrGKVGgwA

Slayton sums up the mood after that terrible night, “We always expected to lose someone. Some day. But not on the ground. That was not a way to die. Not for a test pilot. The Moon that had seemed so close now had vanished from sight.”

Germanwings Flight 9525

Tuesday, March 24, 2015

Cockpit – Emergency Ingress

Germanwings Flight 9525 was a regularly scheduled flight from Barcelona, Spain to Düsseldorf, Germany, a short, two-hour flight. Germanwings was a low-cost carrier owned by the German airline Lufthansa. The aircraft involved was an Airbus A320-211.

The cockpit crew on that morning consisted of Captain Patrick Sondenheimer and his co-pilot. Sondenheimer had 10 years of flying experience (6,000 flight hours) flying A320s for Germanwings, Lufthansa, and Condor. The co-pilot had joined Germanwings in September 2013 and had 630 flight hours of experience.

Captain Patrick Sondenheimer, in a photograph placed on a table at a shrine at the Germanwings headquarters in Cologne, Germany, commemorating the plane’s crew.

I have included this grainy, out-of-focus image of Sondenheimer for a reason. I had hoped to find a good-quality photograph of Sondenheimer, but a lengthy Google image search turned up the photograph (right) a few times and no other images of him.

Patrick Sondenheimer. Image retrieved from the Patrick Sondenheimer Foundation Fund page.

I did eventually find a better-quality photograph of Sondenheimer from a Foundation Fund page in his honor. Sondenheimer is survived by his wife, Annika, and a son and daughter.

Contrast these results with an image search of his co-pilot. There are many, many photographs of the co-pilot when a Google image search is conducted. Pictures of the co-pilot jogging, traveling the world, even pictures of him as a toddler, a baby.

It is a sad reflection of societal priorities that Sondenheimer, a pilot who responded heroically to a catastrophic situation, remains anonymous to the world. And the man who perpetrated a senseless, brutal mass murder has his image history splashed all over the Internet. It also speaks to one of the motivations behind the co-pilot’s murderous intent – he wanted to be remembered.

In any tragedy there are small details and circumstances that, had they gone another way, had a small choice been made differently, might have averted disaster. So it was on the morning of Flight 9525. Captain Sondenheimer needed to use a washroom. Instead of going at the airport, he pushed ahead and boarded his flight. This seemingly insignificant human moment loomed large as the journey unfolded.

According to Joshua Hammer, “The Germanwings gate staff at Terminal 2 in Barcelona’s El Prat Airport began the boarding process for Flight 9525. Martyn Matthews, a 50-year-old engineer for the German auto-parts giant Huf, was among the first of the 144 passengers to board, taking a seat at the front of the plane. Matthews, a soccer fan, hiker, and father of two grown children, was heading home via Düsseldorf to his wife of 25 years in Wolverhampton, a city in the British Midlands. Maria Radner, a prominent opera singer who had just finished a gig performing Richard Wagner’s Siegfried in Barcelona, sat in row 19, along with her partner, Sascha Schenk, an insurance broker, and their toddler son, Felix. Sixteen high school students and two teachers from the German town of Haltern am See, exhausted after a weeklong exchange program, filled up the rear rows of the full flight. The students included Lea Drüppel, a gregarious 15-year-old with dreams of being a professional musician and stage actress, and her best friend and next-door neighbor, Caja Westermann, also 15.”

Flight 9525 was scheduled to depart at 9:35 a.m. It was delayed 26 minutes and finally lifted off at 10:01 a.m.

Hammer summarizes the next 29 minutes. He describes how Sondenheimer, “apologized for the delay and promised to try to make up the lost time en route. At one point, Sondenheimer mentioned to his co-pilot that he forgot to go to the bathroom before they boarded. ‘Go any time,’ the co-pilot told him.”

Hammer then describes the critical moment when Sondenheimer left the cockpit, “At 10:27, after the Airbus had reached its cruising altitude of 38,000 feet, Sondenheimer told the co-pilot to begin preparing for landing (it was only a two-hour flight), a routine that included gauging the fuel levels, ensuring that the flaps and landing gear were working, and checking the latest airport and weather information. The co-pilot’s response was cryptic. ‘Hopefully,’ he said. ‘We’ll see.’ It’s unclear if Sondenheimer noted his co-pilot’s odd language, but he said nothing in response. A minute later, Sondenheimer pushed his seat back, opened the cockpit door, closed it behind him, and ducked into the lavatory. It was 10:30 a.m.”

It is important to note that all descriptions of actions taken inside the aircraft from the moment Sondenheimer left the cockpit until the aircraft crashed 11 minutes later are reconstructions. There were no survivors and therefore no eyewitnesses. Actions and movements have been inferred from sounds on the cockpit voice recorder.

At 10:31 a.m., one minute after Sondenheimer left the cockpit, the aircraft left its assigned cruising altitude of 38,000 feet (12,000 m) and without approval began to descend rapidly. The air traffic controller declared the aircraft in distress after its descent and loss of radio contact. The French national civil aviation bureau, the Bureau of Enquiry and Analysis for Civil Aviation Safety (BEA) analyzed the aircraft’s flight data recorder and concluded that the co-pilot had set the autopilot to descend to 100 feet (30 m) and accelerated the speed of the descending aircraft several times thereafter.

Sondenheimer would have realized immediately that something was wrong. Why was the aircraft descending? He returned to the cockpit door and tried to get back in. It was locked.

Image from TomoNews US. YouTube. Published on March 26, 2015. Retrieved March 23, 2019 from https://www.youtube.com/watch?v=9sRxVdDYCMg.

For the next 9-10 minutes Sondenheimer tried to get back into the cockpit. It can be reasonably assumed that Sondenheimer would have tried to gain entry back into the cockpit by using the keypad pathway. Sondenheimer would have entered a numeric code that would trigger an override of the locking system.

Image from TomoNews US. Published on March 26, 2015. Retrieved March 23, 2019.

An alert would have sounded for 30 seconds before the door would unlock. Once the door unlocked Sondenheimer would have five seconds to enter. But this emergency unlock process can be overridden inside the cockpit by manually switching the control to the lock position, something the co-pilot must have done or else Sondenheimer would have been able to access the cockpit.

Control of cockpit door from inside the cockpit. Image from TomoNews US. Published on March 26, 2015. Retrieved March 23, 2019.

Once the co-pilot switched the system to LOCK, the keypad and buzzer were deactivated and the cockpit door would remain locked for another five minutes unless unlocked from inside the cockpit. There was no way for Sondenheimer to override the cockpit door security system because the co-pilot, inside the cockpit, had complete control of it.

Even if Sondenheimer attempted several times to use the keypad located just outside the cockpit door, the co-pilot would have received this alert. The co-pilot would then have toggled the “LOCK” mechanism each time he heard the alert.
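The lock, unlock, and lockout behavior described above amounts to a small state machine. The sketch below models it in Python, using only the timings reported in this paper (a 30-second cockpit alert, a 5-second unlock window, and a 5-minute keypad lockout after the crew toggles LOCK); the class and method names are illustrative and are not taken from any Airbus documentation.

```python
from enum import Enum

class DoorState(Enum):
    LOCKED = "locked"       # normal state; keypad accepts an emergency code
    BUZZING = "buzzing"     # emergency code entered; 30 s alert in cockpit
    UNLOCKED = "unlocked"   # 5 s window to open the door
    LOCKOUT = "lockout"     # crew toggled LOCK; keypad dead for 5 min

BUZZER_SECONDS = 30
UNLOCK_WINDOW_SECONDS = 5
LOCKOUT_SECONDS = 5 * 60

class CockpitDoor:
    """Illustrative model of the door logic described in the text."""

    def __init__(self):
        self.state = DoorState.LOCKED
        self.timer = 0  # seconds remaining in the current timed state

    def enter_emergency_code(self):
        """Keypad outside the door; ignored during a lockout."""
        if self.state == DoorState.LOCKED:
            self.state = DoorState.BUZZING
            self.timer = BUZZER_SECONDS

    def crew_toggle_lock(self):
        """Inside the cockpit: overrides any pending emergency access."""
        self.state = DoorState.LOCKOUT
        self.timer = LOCKOUT_SECONDS

    def tick(self, seconds=1):
        """Advance time and handle timed state transitions."""
        if self.timer > 0:
            self.timer = max(0, self.timer - seconds)
            if self.timer == 0:
                if self.state == DoorState.BUZZING:
                    # Alert expired with no crew override: door unlocks briefly.
                    self.state = DoorState.UNLOCKED
                    self.timer = UNLOCK_WINDOW_SECONDS
                else:
                    # Unlock window or lockout expired: back to locked.
                    self.state = DoorState.LOCKED
```

In this model, replaying the Flight 9525 sequence shows the design's single point of failure: `enter_emergency_code()` followed by `crew_toggle_lock()` leaves the door in `LOCKOUT`, and further keypad entries are ignored until the five minutes elapse. Every pathway back into the cockpit runs through the person already inside it.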

Based on the cockpit voice recorder, Hammer recreates the moments occurring once Sondenheimer, aware that the aircraft was descending, returned from his bathroom break, “Sondenheimer returned three minutes later, at 10:34. On a keypad outside the cockpit, he punched in his access code, then hit the pound sign. Access denied. ‘It’s me!’ he exclaimed, rapping on the door. Flight attendants—preparing to wheel their snack-and-beverage carts down the aisle now that the plane had reached cruising altitude—looked toward the commotion. A closed-circuit camera transmitted the captain’s image to a small television screen inside the cockpit; the co-pilot didn’t react. Alarmed, Sondenheimer started hammering on the door. Still, the co-pilot didn’t respond. ‘For the love of God,’ the pilot yelled. ‘Open this door!’ The plane was at about 25,000 feet. Passengers, feeling the steep decline now and gripped by the first wave of panic, began leaving their seats and moving through the aisles.”

Daily Mail. May 12, 2015. Retrieved March 18, 2019.

There is some speculative rendering here on Hammer’s part. Sondenheimer’s voice was loud enough to be picked up by the cockpit communication system, so his words were transcribed. The reactions of the flight attendants, though not recorded, are what would reasonably be expected. Sondenheimer was also heard banging on the cockpit door; he could have been using a fire extinguisher or an oxygen tank.

Hammer continues his speculated narrative at 10:39 a.m., two minutes before crashing, “Sondenheimer called for a flight attendant to bring him a crowbar hidden in the back of the plane. Grabbing the steel rod, the pilot began smashing the door, then trying to pry and bend it open. The plane had dropped to below 10,000 feet, the snow-encrusted Alps looming closer. Inside the cockpit, the co-pilot placed an oxygen mask over his face. ‘Open this fucking door!’ Sondenheimer screamed as passengers stared in bewilderment and mounting terror.

Sources: Flightradar24 and Aviation Herald.

The co-pilot breathed calmly. At 10:40, an alarm went off: ‘TERRAIN, TERRAIN! PULL UP, PULL UP!’ The plane dipped to 7,000 feet. The alarm signaled a shrill ‘ping-ping-ping,’ a warning of approaching ground. Sixty seconds later, the Airbus’s right wing clipped the mountainside at 5,000 feet. The only further sounds picked up by the voice recorder were alarms and screams.”

The aircraft was traveling at 700 kilometers per hour (430 mph) when it crashed into the mountain.

There is something disturbingly primitive about a man, a pilot, Captain Sondenheimer … reduced to kicking at a cockpit door, smashing at a cockpit door with a fire extinguisher or an oxygen tank, desperately trying to pry open a cockpit door with a crowbar. Where was the human-machine interface during this critical moment of need? Where was the technology pathway that could have resolved this desperate, catastrophic situation?

One hundred and forty-nine people were murdered on Flight 9525. Each victim had family and friends. The grief borne by those relatives and friends is incalculable.

Martyn Matthews, 50, died in the Germanwings Airbus crash, pictured with his wife Sharon, 48, (centre right) and children, Jade, 20, and Nathan, 23. Martyn Matthews sat up front, steps from where the captain tried to crowbar the cockpit open. Image retrieved from the Daily Mail, March 23, 2019.

Martyn Matthews was one of the victims. Hammer of GQ reached out to his wife who, after the crash, had been escorted into a conference room at the French Ministry of Foreign Affairs in Paris, “With Lufthansa officials and other family members of the victims, she listened in silence to an electronically enhanced cockpit recording of the final eight minutes, accompanied by a video showing the flight path of the doomed jet. ‘We heard the door of the cockpit open and close while the pilot went for a toilet break … the rapid banging and shouting … the plane [alarm] saying ‘lift up, lift up,’ she recounted. ‘All I could imagine was my Martyn sitting there [just a few feet from the cockpit], watching, listening to what was going on, seeing the mountains at the side of the plane.’ Matthews ran out of the room, unable to bear it, seconds before the officials stopped the recording.”

Opera singer Maria Radner, her partner, Sascha Schenk, and their son, Felix, were in row 19. Courtesy of The Radner Family. GQ.

Hammer also reached out to Klaus Radner. He is the father of Maria Radner, who was killed in the crash. Radner died alongside her partner, Sascha Schenk, and their toddler son, Felix. Families and friends of victims suffer unspeakable torment, “Klaus Radner, a trim, powerfully built figure in his early sixties, has also been tormented by such thoughts. He pictures Maria busily entertaining her restless son at first, not focusing on the ruckus up front. He envisions Sascha scrambling out of his seat and running down the aisle, desperately trying to intervene. ‘Sascha was an impulsive, strong guy, and he would never have just sat there,’ Radner tells me as we sit at a corner table at the Maritim Hotel bar in Düsseldorf Airport—steps away from the arrivals hall Radner rushed into that early spring morning after hearing about the crash. ‘He would have wanted to do something. He would have taken action.’ Every night, he told me, his mind leads him back to the same image—the passengers’ final screams, and the moment of impact. ‘I have a picture in my head of Maria, Sascha, and Felix exploding,’ Radner told me, sucking in a quick breath to tamp down his emotions. ‘Of their bodies, exploding.’”

Klaus Radner’s descriptions are graphic, gut-wrenching, and hard to read. And at first glance they might seem out of place in a technology paper. But this outcome, this perpetual nightmare for human beings like Klaus Radner is precisely the type of impact that Human Factors Engineering is supposed to prevent. And you can absorb Klaus Radner’s words and know that they are applicable to the families and friends of all the victims of Germanwings Flight 9525 and Apollo 1.

The stakes are high when engineering for mission-critical and safety-critical systems.

Human Factors Analysis

Areas of Concern

Representation of Human Factors Analysis (HFA) outputs as “pre-requirements” and not “preliminary” system requirements.

Apollo 1

It is disappointing to look back and see that “getting the astronauts out of the cockpit in an emergency” was clearly not a priority for NASA engineers – not an area of concern. Given the engineering challenges of putting a “man on the moon” it is perhaps too easy to forgive the gaps that existed in the engineering process.

According to Colin Burgess et al., “Following the funerals of the Apollo 1 crew, the investigation into the fire began in earnest … Dr. Floyd Thompson set up twenty-one panels to assist the review board in investigating every aspect of the fire (151).” This was to be a comprehensive accident investigation. Burgess et al. describe the scope of the review process, “Dr. Thompson informed Seamans that fifteen hundred people were supporting the investigations – six hundred from government, and another nine hundred from industry and various universities (152).”

At the end of February 1967, Seamans put together a memorandum for Jim Webb. In this memorandum was a list of early recommendations that the administrator could present to Congress.

According to Burgess et al., some of these recommendations were:

  • That combustible materials now used be replaced wherever possible with non-flammable materials, that non-metallic materials that are used be arranged to maintain fire breaks, that systems for oxygen or liquid combustibles be made fire resistant and that full flammability tests be conducted with a mock-up of the new configuration.
  • That a more rapidly and more easily opened hatch be designed and installed.
  • That on-the-pad emergency procedures be revised to recognize the possibility of cabin fire.

It is clear that from the very beginning the hatch was determined to be a significant contributing factor in the deaths of Grissom, White, and Chaffee.

Burgess et al. explain that the review board “recognized that there had been sloth, ignorance, and carelessness, associated with the Apollo 1 craft (152).” Furthermore, the Summary Report of the Board (released April 5, 1967) stated, “it seemed no one had realized the extent of fire hazards in the overpressurized, oxygen-filled spacecraft cabin on the ground (152).”

It is interesting to note that the Board was not able to conclusively determine the exact cause of the Apollo 1 fire. Indeed, as of the writing of this paper, no specific initiator of the fire has ever been identified. The Board did, however, identify the five conditions that led to the disaster:

  1. A sealed cabin, pressurized with an oxygen atmosphere.
  2. An extensive distribution of combustible materials in the cabin.
  3. Vulnerable wiring carrying spacecraft power.
  4. Vulnerable plumbing carrying a combustible and corrosive coolant.
  5. Inadequate provisions for the crew to escape.

According to Burgess et al., “Having identified the conditions that led to the disaster, the Board addressed itself to the question of how these conditions came to exist. Careful consideration of this question leads the Board to the conclusion that in its devotion to the many difficult problems of space travel, the Apollo team failed to give adequate attention to certain mundane but equally vital questions of crew safety (152).”

Emergency exit from the Apollo 1 capsule had been deemed a mundane issue. If the question had indeed come up during the engineering process it was obviously not addressed. If it had become a Human Factors Analysis output it did not advance to the point of becoming a pre-requirement and certainly not a preliminary system requirement.

It wasn’t just that the hatch took a long time to open; it also opened inward, into the cockpit. This was not recognized as a problem. In an emergency, in the cramped quarters of the cockpit, an inward-opening hatch presented one more obstacle to a quick and safe exit.

Human Factors Analysis

Areas of Concern

Representation of Human Factors Analysis (HFA) outputs as “pre-requirements” and not “preliminary” system requirements.

Germanwings Flight 9525

In the rush to fortify cockpit doors of passenger aircraft after 9/11 it is not known how much Human Factors Analysis outputs factored into the new system requirements. It is therefore impossible to gauge these outputs as pre-requirements or preliminary system requirements. One pre-requirement they did not implement was emergency access to the cockpit that would bypass cockpit control.

The priority in the new cockpit design was focused on keeping intruders from gaining access to the cockpit. This meant giving all authority and control to the cockpit crew inside the cockpit. It is hard to find fault with the engineering design when the system requirements were motivated by the shocking loss of life in the 9/11 attacks.

There were provisions for emergency access to the cockpit. One of these provisions is shown in this Airbus A320 Cockpit Doors training video. In this example the purser has been unable to get a response from the cockpit. She is able to enter an emergency code on the code pad. This action triggers the timer for 30 seconds. In the cockpit the buzzer sounds continuously. This is what happened when Sondenheimer was trying to get into the cockpit using the emergency code on the code pad.

This video shows that Sondenheimer’s co-pilot would have been alerted for 30 seconds that the cockpit door was about to open. From the time the emergency code is entered it takes 30 seconds for the door to unlock. During this time Sondenheimer’s co-pilot simply overrode the emergency unlock procedure by switching his toggle to the LOCK position.

In this video the purser gains access because the crew is incapacitated and there is nobody to override the door-unlocking mechanism. From 3:59-5:26:

It’s not clear from the Airbus training video how the purser was planning to land the plane with both pilots incapacitated.

One revealing aspect of the aftermath of the Germanwings crash has been the preponderance of focus on the co-pilot’s mental health and on whether Lufthansa should have been aware of his condition. There are, in fact, lawsuits pending on this matter. Questions about pilot mental health and information sharing are important and obviously need to be explored thoroughly. But there has been very little rigorous inquiry into emergency access to the cockpit when the cockpit is under the control of somebody intending to crash the aircraft.

This is a problem. And it remains a problem that is unresolved. How long before the next murder-suicide involving a pilot of a large passenger aircraft?

I am including a photograph of the Germanwings co-pilot on a trip he made to San Francisco. I have cropped the photograph to remove his face.

Image of the co-pilot in front of the Golden Gate Bridge, San Francisco, from the co-pilot’s Facebook page. (BBC)

Copycat suicide is a documented, recurring reality. The reason I am showing this photograph is because of the background – the Golden Gate Bridge. According to John Bateson there have been an estimated 1,600 suicides at the Golden Gate Bridge between 1937 and 2013.

Certain locations attract a larger number of suicides. The Nanjing Yangtze River Bridge in China had more than 2,000 suicides from 1968 until 2006. The Prince Edward Viaduct in Toronto, Ontario, had 492 suicides before a barrier was constructed. The infamous Aokigahara Forest in Japan has up to 105 suicides a year. There are many more locations worldwide. Copycat suicide is a documented fact and murder-suicide involving pilots remains a clear and present danger.

It is understandable, to some extent, that such provisions were not built into cockpit door security engineering soon after 9/11. At this point, in 2019, it is negligent not to have, as a preliminary system requirement, emergency access technology that does not rely on the authority pathway of the cockpit.

Human Factors Analysis

Inputs of Requirements Engineering (RE)

Apollo 1

Types of Requirements

The original design for the Apollo 1 hatch was directly related to some of the problems that Gus Grissom had encountered after landing the Liberty Bell 7 in the ocean.

Original hatch design of Grissom’s Liberty Bell 7 (with explosive charges). (NASA). Click to enlarge.

The new three-door hatch design was adopted for many reasons, but one of the main requirements was to ensure the hatch could not blow open on its own. All Apollo flights were planned to land in water, so fixing this issue was a priority.

According to an Apollo Information Sheet, “The original hatch consisted of three doors: an inner structure (main) hatch; a middle heat shield hatch; and a lightweight outer hatch hinged to the Boost Protective Cover, which was jettisoned with the escape system shortly after launch. The inner and middle hatches had to be manually unlocked and removed to egress. The hinged outer hatch was unlocked by striking a plunger through the middle hatch that unlocked the outer hatch latches. Under good conditions the crew could unlock the doors, remove them, and egress in 60 to 90 seconds.”

Obviously, and tragically, Grissom, White, and Chaffee, did not have 60 to 90 seconds to egress the command module when the fire broke out. Cockpit crew evacuation in the event of an emergency had not been a requirement in the engineering process for the command module of Apollo 1. And keep in mind that this emergency could have been caused by fire, water, or some other factor. There were many engineering mistakes made on Apollo 1 but this hatch design flaw was a significant error.

This diagram shows the main hatch exterior of Apollo 1:

Image from http://www.space1.com/pdf/news1296.pdf.

This diagram shows the main hatch interior of Apollo 1:

Image from http://www.space1.com/pdf/news1296.pdf.

New requirements emerged after the Apollo 1 fire. Gus Grissom almost lost his life because of the design problems with the Liberty Bell 7 hatch and he, along with Ed White, and Roger Chaffee, did lose their lives because of the design problems with the Apollo 1 hatch. It is obviously tragic that these men had to die to get a hatch design done correctly.

Mapping Human Factors Analysis outputs to System Requirements

The tragedy of Apollo 1 prompted an urgent redesign of the Apollo command module side hatch. According to the Apollo Information Sheet, “Pacing the redesign effort was the need to complete the modifications, test the hardware, and fly the preliminary missions for a lunar landing before the end of the decade.”

Image from http://www.space1.com/pdf/news1296.pdf.

According to the Apollo Information Sheet, “After the accident the crew egress requirements were drastically changed. The crew had to be able to open the hatch in 3 seconds and egress within 30 seconds.”

The Apollo Information Sheet describes the engineering of the new hatch that was built after Apollo 1, “The selected design combined the inner and middle hatches into a unified hatch. The outer hatch, part of the Boost Protective Cover, was only slightly modified. The unified hatch mounted 15 latches linked together around the hatch perimeter. The latches applied enough force from inside the hatchway to seal the hatch. A ratchet handle allowed the crew to open or close the latches in five strokes of the handle. The handle also triggered a striker plunger to unlock the outer hatch latches (while the Boost Protective Cover was still attached). A counterbalance improved the opening time in emergency situations. Once the latches were unlocked a cylinder pressurized with gaseous nitrogen would operate a piston to force the combined 350 pound hatch open and lock it in position. (The total weight added by the new design was 253 pounds.)”

It is not known whether the new hatch design would have saved the three Apollo 1 astronauts, but it would have given them, at the very least, a fighting chance.

According to Allan Needell, NASA set about addressing the Human Factors Analysis outputs after the Apollo 1 fire and incorporated them into the System Requirements of a new hatch system for all future Apollo flights. According to Needell, “To permit more rapid escape in an emergency, NASA and contractor North American Rockwell engineers developed the Block II ‘Unified Hatch.’ In this version, the pressure vessel and heatshield hatches are combined (hence the ‘unified’ label). While on the launch pad, astronauts could unlatch the entire assembly by activating a pump handle and then pushing open the carefully counterbalanced combined unit. Alternatively, launch crews could insert a tool to open the hatch from the outside.”

Unified Hatch, Apollo 11. (NASA)

According to Needell, “All the Apollo crews flew in redesigned spacecraft equipped with the new hatch and other safety features. This one is from the Apollo 11 command module Columbia. The hinges are on the right and the pump handle is on the left. The handle is connected by a ratchet mechanism to all of the latches. Five pumps was all it took to free the hatch, which could then be swung open with a moderate push from inside.”

Human Factors Analysis

Inputs of Requirements Engineering (RE)

Germanwings Flight 9525

Types of Requirements.

The types of requirements for better cockpit security were implemented and described earlier in this paper. These requirements placed all authority control within the cockpit. What they did not do, however, was provide flexibility in authority control when the threat was located inside the cockpit. Given the circumstances following the 9/11 attacks, it is hard to fault the Requirements Engineering of the time. But it needs to be said, again, that this engineering pathway gave little consideration to the Human Factors Analysis outputs of a suicidal pilot or an equivalent threat originating inside the cockpit.

Mapping Human Factors Analysis Outputs to System Requirements.

To date, the cockpit security architecture has not changed since the crash of Germanwings Flight 9525. The authority pathway still runs through the cockpit. It is therefore relevant to look at possible engineering solutions that might prevent a repeat of the Germanwings tragedy. Of course, no system is 100% reliable or safe, but this should not stop further research into this critical area of airline travel safety.

It should be noted that not all cockpit-threat scenarios would involve a pilot. There could be circumstances in which an armed intruder gains access to the cockpit, removes both pilots, and locks them out. What then? That armed intruder now has command and control of the aircraft. And given the publicity surrounding the mechanisms of cockpit security after the Germanwings crash, that intruder would know how to keep the cockpit door locked.

How can a pilot get back into the cockpit during an emergency situation like the one that unfolded on the Germanwings flight?

In a paper published in 2017, Ming Hou, Derek McColl, Kevin Heffner, Simon Banbury, Mario Charron, and Robert Arrabito, outlined the parameters of authority pathways as they related to intelligent adaptive automation for an unmanned aerial system ground control station. According to Hou et al, “The Authority Pathway uses intelligent adaptive automation technology to adapt to dynamically changing mission goals, provide a variety of views for different users, and allow for varying degrees of automation technology to be consistent with future requirements.”

The requirements for the Authority Pathway described in the paper by Hou et al are not specifically related to cockpit door security but some of the philosophies used can be very useful.

The Authority Pathway paper refers to the use of technology to adapt to “dynamically changing mission goals.” There were no mission goals on the Germanwings flight but the technology surrounding cockpit security should have been able to adapt to dynamically changing flight circumstances.

The Authority Pathway paper also refers to provisions that account for a “variety of views for different users,” a philosophy that will be explored shortly.

The Authority Pathway paper also argues for the allowance for “varying degrees of automation technology to be consistent with future requirements.” If the requirement demands that emergency access to the cockpit be achieved without cockpit control or authority approval then the technology must be developed to make this happen.

“Authority Pathway: Intelligent Adaptive Automation for a UAS Ground Control Station.” Engineering Psychology and Cognitive Ergonomics – Performance, Emotion and Situation Awareness. 14th International Conference, EPCE 2017.
This chart has been adapted from Taylor, R.M.: Capability, cognition and autonomy. In: NATO RTO-HFM Symposium on the Role of Humans in Intelligent and Automated Systems, pp 1-27. Defence Technical Information Centre Report (2002). ADA42249

The chart (above) from the Authority Pathway paper represents “the six PACT [Pilot Authority and Control of Tasks] levels of human/machine control in terms of human-machine responsibility sharing. The third column indicates a level of authority that includes the notion of automation management strategies.” The chart shows the varying degrees of Computer Authority versus Pilot Authority. Level 0 represents Full Pilot Authority and Level 5 represents Full Computer Authority.
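As a rough illustration (not taken from the Authority Pathway paper), the PACT scale can be expressed as a simple lookup. Only the two endpoint labels come from the chart described above; the wording for the intermediate levels is a placeholder, not the published taxonomy.

```python
# Illustrative sketch of the six PACT (Pilot Authority and Control of
# Tasks) levels. Only the Level 0 and Level 5 labels come from the
# chart; the intermediate description is a placeholder.

PACT_MIN, PACT_MAX = 0, 5

def describe_pact_level(level: int) -> str:
    """Describe who holds authority at a given PACT level."""
    if not PACT_MIN <= level <= PACT_MAX:
        raise ValueError(f"PACT level must be in [{PACT_MIN}, {PACT_MAX}]")
    if level == PACT_MIN:
        return "Full Pilot Authority"
    if level == PACT_MAX:
        return "Full Computer Authority"
    return "Shared authority (computer authority increases with level)"
```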

The cockpit security architecture on the Germanwings flight had Full Pilot Authority, or to be more precise – Full Cockpit Authority. And it led to an airliner crash. How can this be fixed? With respect to the security of the cockpit door, there must be a transfer of control, of power, from the cockpit to somewhere else. But where?

The first place to look is the keypad outside the cockpit door. There is already a provision for emergency access (shown before in the Airbus A320 video) in the event of crew incapacitation. But there remains Full Cockpit Authority to override this emergency access. Should this override option be removed?

It seems highly unlikely that pilots in the cockpit would abdicate their authority to someone standing outside the cockpit using the keypad.

There are other options for cockpit access, including door locks activated by fingerprints, handprints, facial recognition, or iris scans. There are probably many other options as well. This technology could be designed so that only a pilot (outside the door, perhaps on a bathroom break like Captain Sondenheimer) could open the cockpit door with facial recognition or some other means of activation. This would keep cockpit security within the pilot spectrum.

One immediate concern has to do with a situation in which a pilot has left the cockpit and is confronted by someone intending to do harm, who forces the pilot to place his hand or face on the recognition sensor. It might also be possible for such an attacker to kill the pilot and still use the pilot’s hand or face to activate the cockpit door mechanism.

There is also another possibility. In an emergency like the one that unfolded on the Germanwings flight, perhaps technology could be designed to open the cockpit door remotely. The system and its operator could be employed by the airline or be part of air traffic control.

In this configuration, Captain Sondenheimer would have had the option to contact this emergency system and be identified in some reliable way, by video or some other means. Once this communication occurred (it would need to happen within seconds), his identity was verified, and the circumstances were confirmed (the co-pilot intending to crash the plane), the remote operator could have opened the cockpit door for Sondenheimer. Once the door was open, Sondenheimer, with the assistance of other passengers, could have removed the co-pilot from the controls and resumed command of the aircraft. This Authority Pathway might work for all stakeholders, including pilots.
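The sequence just described can be sketched as a small state machine. Everything here is hypothetical: the step names, the checks, and indeed the existence of any such remote-unlock system are assumptions made purely for illustration.

```python
# Hypothetical sketch of a remote emergency-unlock pathway. No such
# system exists on current aircraft; the steps and names below are
# assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class RemoteUnlockRequest:
    requester_verified: bool = False   # e.g., identity confirmed via video link
    emergency_confirmed: bool = False  # e.g., threat inside the cockpit confirmed
    door_open: bool = False

    def verify_identity(self) -> None:
        self.requester_verified = True

    def confirm_emergency(self) -> None:
        self.emergency_confirmed = True

    def open_door(self) -> bool:
        # The door opens only after both checks pass, so a single
        # spoofed step cannot trick the remote operator.
        if self.requester_verified and self.emergency_confirmed:
            self.door_open = True
        return self.door_open
```

The design choice worth noting is that neither check alone unlocks the door; authority is transferred only when identity and circumstances are independently confirmed.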

As with all things, cost would be a consideration. Would airlines invest in this life-saving technology when instances of pilot suicide are comparatively rare? And it should be mentioned that the circumstances of the Germanwings co-pilot’s actions are themselves rare. He set the autopilot to descend to 100 feet, which provided plenty of time for an emergency intervention. Would any system work if a pilot pushed the aircraft into a steep dive? Perhaps not, but the technology should be developed so that a pilot at least has a “fighting chance” to recover the flight.

One of the hallmarks of aircraft engineering is that after every major plane crash or incident there is a full and thorough investigation. Whatever was broken or not working or could have helped – gets fixed. It’s what makes airline travel so safe. So why not do this after the Germanwings disaster?

Human Factors Analysis

How to go from Human Factors Analysis Outputs to System Requirements

One aspect of Human Factors Analysis that looms large when examining Apollo 1 and Germanwings Flight 9525 is the lack of outputs from Scenario Requirements. Had Scenario Requirements played an active part in the analysis of the door systems for Apollo 1 and the Germanwings aircraft, then perhaps more attention would have been paid to solutions in the System Requirements.

Annie I. Antón et al describe some of the management challenges associated with Scenario Requirements, “Scenarios are valuable for eliciting information about system requirements, communicating with stakeholders and providing context for requirements. Although valuable, scenarios together with their associated use cases can be difficult to manage. This explains why scenario management is receiving increased attention among researchers in the software engineering community (71).”

Iain S. MacLeod was a Royal Air Force Officer/Air Navigator for over 26 years. He has had a multi-disciplinary background as an aviator, engineer, scientist, engineering/applied/occupational psychologist, and human systems integrator. His 2008 paper, “Scenario-based Requirements Capture for Human Factors Integration,” offers insights into how Scenario Requirements could have helped the engineering pathways of the door designs for Apollo 1 and the Germanwings aircraft.

MacLeod quotes Ian Alexander and Neil Maiden who argue that Scenario Requirements can be a powerful tool to see the obvious when it is hidden under layers of complexity, “Scenarios are a powerful antidote to the complexity of systems and analysis. Telling stories about systems helps ensure that people – stakeholders – share a sufficiently wide view to avoid missing vital aspects of problems.”

MacLeod acknowledges that scenarios have limitations, “Like all methods or hypothetical constructs, scenarios are not on their own currently capable of satisfying every purpose that people might place on them.” But he goes on to reference Roger Schank when he notes that there is a school of thought that all human knowledge is encapsulated through stories. MacLeod concludes by arguing, “that the derivation of human system related requirements should be assisted through a more rigorous understanding of stories through the use of scenarios.”

Scenarios can play an important role in mission-critical and safety-critical systems like those found on Apollo 1 and the aircraft involved in the Germanwings crash. MacLeod states, “All human and system work is mediated by influences from the system operating environment; work is both constrained and directed by the nature of that environment. It is thus argued as sensible to assist the definition of the requirements for work and systems from a basis of chosen scenarios representing some of the anticipated operations, operating environments, actors, and hazards influencing system usage.”

A thorough investigation of all possible scenarios is required and systems can be updated as new scenarios are discovered or explored. MacLeod explains, “Furthermore, scenarios can be used and updated throughout a system life-cycle whether they be used as a basis for the formulation of a system acceptance test schedule or as a basis for considering the forms of incorporation of ‘through life’ system enhancements. However, as inherent with any form of analysis, coming up with complete and sufficient scenario specification is difficult. Basic scenarios are easy to write down, but more complex or less common scenarios are more difficult.”

Apollo 1

MacLeod has already stated that, “Scenarios are a powerful antidote to the complexity of systems and analysis.” It would be hard to find a system more complex than Apollo 1. And yet, you have to wonder … was there any scenario-based requirements work done for the event of a cockpit fire in the command module? Fire drills are run in elementary schools, businesses, and nearly everywhere else.

How did it happen that the engineers and astronauts of Apollo 1 never ran a fire-drill test to see how quickly the crew could get out of the command module and what challenges they would face in getting out? They would have discovered that under good conditions it took 60-90 seconds to open the three-hatch system and escape the cockpit, much too long in the event of a fire. They would have discovered that an inward-opening hatch was not what was needed when the cockpit quarters were so tight and cramped, and that in a critical situation like a fire, more space was needed to escape the cockpit.

This is an area that should be explored in future research. What types of Scenario Requirements work was done during the engineering process of Apollo 1? And what explanations are there for the lack of Scenario Requirements for obvious and critical scenarios like a fire in the cockpit?

How should the hatch problem have been handled? The following is a fictional exchange at an Apollo 1 engineering meeting, which includes a Human Factors Scenario Requirements specialist (HF), a systems engineer (SE), and Gus Grissom (GRISSOM):

HF: So Gus, can you talk about what you just told me?

GRISSOM: It’s a lemon.

SE: What is?

GRISSOM: The module. It’s a death-trap. There’s been too many engineering trade-offs. We’ve got a pure-oxygen atmosphere in the cockpit. It’s a tinder-box. We’ve got faulty and exposed wiring. A single spark? That’s it. We got a fire in the cockpit.

HF: Gus asked me about what happens in the event of a cockpit fire.

SE: Emergency procedures call for the senior pilot …

GRISSOM: Ed.

SE: In this case, Ed White, occupying the center couch; he is to unlatch and remove the hatch while retaining his harness buckled.

HF: How long does it take to open the hatch?

GRISSOM: There’s three.

HF: Three hatches?

SE: Correct.

HF: How long does it take?

SE: During our testing we’re not latching the outer hatch, the BPC hatch.

HF: What’s BPC?

SE: It’s the boost protective cover hatch. It’s the outer hatch.

GRISSOM: It’s part of the cover that shields the command module during launch. It gets jettisoned prior to orbital operation. The middle hatch is called the ablative hatch. It becomes the outer hatch when the BPC gets jettisoned after launch.

SE: When we do the plugs out test the BPC hatch will be in place but it’s not gonna be fully latched because of distortion in the BPC caused by wire bundles temporarily installed for the test.

HF: Alright, so let’s assume the crew is in the module, the two hatches are closed. And there’s a fire.

SE: Where?

HF: Inside the cockpit, inside the command module.

SE: The senior pilot unlatches and removes the interior hatch. And then the second hatch.

HF: How long does that take?

SE: It could take up to ninety seconds.

GRISSOM: Holy shit.

HF: That’s too long.

GRISSOM: We won’t survive ninety seconds in a fire.

HF: Can you run a fire test? See how long it takes the crew to get out?

SE: We’ve got a launch date. February 21.

GRISSOM: We need to run a fire test. It’s our asses in there. If it takes too long to get out we need a new hatch design.

SE: February 21. That’s our launch date.

It’s painful to write a scenario requirements scene that should have happened. And this was such a basic scenario: crew egress in the event of a cockpit fire.

Another scenario that should have been considered with the three-hatch design of Apollo 1 was the potential for problems once the command module landed in the ocean after a successful mission. What if the craft began taking on water at a high rate and the cockpit was quickly filling up? Would the crew have survived if it had taken 90 seconds to get out?

There is an abundance of tragedy surrounding Apollo 1 and first and foremost is the loss of three astronauts. There are additional outcomes, which are smaller in nature but have historical value. There is persuasive evidence that had Gus Grissom not died in the Apollo 1 fire he would have been the first human being to set foot on the Moon. Not Neil Armstrong. Gus Grissom.

First step on the Moon. (NASA)

Deke Slayton was the person responsible for NASA crew assignments. He determined which astronauts would fly on the Gemini and Apollo missions. In his autobiography, Deke!, it is very clear who the first human being on the Moon was meant to be. Eric Berger, Senior Space Editor at Ars Technica, references Slayton’s book, “One of the Mercury 7 astronauts who would become chief astronaut, Deke Slayton, later wrote in his autobiography Deke! that he wanted one of the original seven to take the first step on the Moon. His first choice was Grissom, which both Chris Kraft and Bob Gilruth agreed upon.” Kraft was NASA’s first flight director. Gilruth was head of the NASA center running the Apollo program.

Slayton writes in his autobiography, “I felt pretty strongly that the ones who had been with the program the longest deserved first crack at the goodies. Had Gus been alive, as a Mercury astronaut he would have taken the step.”

Germanwings Flight 9525

The primary scenario requirement for cockpit doors after 9/11 was based on an event occurring in which terrorists (or others intending harm) attempt to get into the cockpit in order to take control of the aircraft. All engineering specifications were designed to prevent this from happening.

It is not known if other scenarios were investigated at the time of the redesign efforts but this is another area in need of future research.

Max Kutner of Newsweek points out that some safety experts have doubts about the cockpit security systems, “As new, tragic details emerge about the Germanwings plane crash, aviation safety experts are questioning whether post-9/11 security measures render airplane cockpits too inaccessible.”

Kutner quotes Jeff Price, co-author of Practical Aviation Security: Predicting and Preventing Threats, who says, “The big question that comes up is how can you get back in the door if the pilot decides to lock you out.”

Yes, indeed. This is the big question: How do you get inside the cockpit if the pilot or co-pilot or someone else locks you out?

Approximately two years before the 9/11 attacks, on October 31, 1999, an EgyptAir Boeing 767 bound for Cairo from New York crashed with 217 people on board. Although disputed by Egyptian investigators, the U.S. National Transportation Safety Board found that the relief first officer had deliberately plunged the jet into the ocean while the relief captain was out of the cockpit on a toilet break.

Post 9/11 it is reasonable to ask whether EgyptAir Flight 990 was mentioned during the design phase of the new cockpit doors. As a side note, it is also reasonable to wonder whether or not Osama Bin Laden and Al-Qaeda took note of what happened on Flight 990, i.e., taking control of a passenger aircraft and intentionally crashing it.

It is difficult to speculate whether any scenario requirements discussions for cockpit security in the weeks following 9/11 involved “getting into the cockpit” during an emergency. If such discussions happened, it is unlikely they gained much traction. The focus, emotional and otherwise, was laser-focused on keeping “bad guys” out of the cockpit, not rescuing a flight from a “bad guy” inside the cockpit.

As MacLeod pointed out, “Basic scenarios are easy to write down, but more complex or less common scenarios are more difficult.” Suicide by pilot is a less common scenario. It still remains, however, a devastating scenario.

So what would an engineering meeting convened to discuss cockpit security in the weeks after 9/11 have looked like? This fictional exchange includes a Systems Engineer (SE), a Human Factors Engineer (HFE), the FAA, a Pilot (P), the FBI, and the NTSB.

HFE: What about pilot suicide?

FAA: What about it?

HFE: Imagine a scenario in which a pilot, left alone in the cockpit, decides to commit suicide. The pilot can lock everyone out of the cockpit and crash the plane.

FBI: Too rare.

HFE: It happened less than two years ago. EgyptAir Flight 990. Suicide by pilot. 217 dead.

P: It’s way too rare. You can’t make everything perfect. I don’t want people getting into the cockpit. End of story.

HFE: Why can’t we look at ways to get back into the cockpit in the event of an emergency?

SE: We could look at that. We could try and come up with something that works.

FBI: We’re trying to keep people out of the cockpit. Not give them a loophole to get in.

FAA: You can’t build a secure cockpit and then make it weaker.

P: I’m very happy with the plans for this. All control must remain in the cockpit. Any deviation from that is asking for trouble. And it’s my ass in the cockpit.

HFE: If we can find a solution it might save lives. There might be a way to design the system so that emergency access to the cockpit can be implemented without compromising cockpit security.

NTSB: We have a solution already. And it will save lives. Nobody gets in the cockpit unless the pilots let them.

P: I decide who gets in the cockpit. End of story.

HFE: Why can’t we do both? Why can’t we design a system that is reliably secure but has a mechanism in place in the event access to the cockpit is needed? How do you get inside the cockpit if the pilot locks you out?

P: You don’t.

HFE: You said earlier it was your ass in the cockpit. I would like to remind you that on an average commercial flight there are over 100 passengers. They have as much at stake during the flight as you do.

It has been four years since the crash of Germanwings Flight 9525. In that time there has been no information to indicate that any changes have been made to the cockpit security systems on passenger aircraft. The scenario that played out aboard that flight should be fresh in the minds of engineers. It does not have to be imagined.

About nine months before the Germanwings crash there was an incident that occurred on an Air New Zealand flight that deserves scrutiny. According to Michael Koziol of the Sydney Morning Herald, “A 13-minute takeoff delay caused so much tension between two Air New Zealand pilots that the first officer was locked out of the cockpit on a flight between Perth and Auckland. On May 21, flight NZ176 was delayed after the first officer was asked to undertake a random drug and alcohol test. This was enough to enrage the captain, for whom timeliness was a matter of great pride.”

The captain locked the first officer out of the cockpit for two minutes once the plane lifted off. According to Koziol, “An unnamed expert told the NZ Herald two minutes is ‘an eternity’ on board a flight.”

What is fascinating about Koziol’s reporting is the following description, “Crew on board the packed Boeing 777 became concerned when the captain did not respond to three requests to open the cockpit door. The first officer then entered the cockpit by an alternative method, which was not disclosed for security reasons.”

The first officer was able to enter the cockpit by an alternative method. What was this alternative method? This incident predated the Germanwings crash; could Captain Sondenheimer have used “an alternative method”? The method was not disclosed to the media for security reasons, so is there a method in place? It is possible the alternative method was simply the keypad, but we do not know. It should also be pointed out that the cockpit security system on the Air New Zealand flight (a Boeing 777) might be very different from that on the Germanwings flight (an Airbus A320).

Cockpit security designers should look no further than Germanwings Flight 9525 for a scenario that should provide a requirements capture for system integration.

And finally, there could be a scenario in which terrorists gain control of the cockpit while the pilots are still alive. Shouldn’t the pilots be able to access the cockpit? Terrorists could see what the co-pilot of the Germanwings flight did and replicate it, knowing that once they have control of the cockpit, nobody can get in. If I could watch the Airbus cockpit door training video referenced earlier, anyone can. There needs to be more urgency on this issue.

Human Factors Analysis

Validation

George M. Samaras describes the challenges relating to validation of complex systems when human factors are included, “Humans increase system complexity. Properly validating the Human Factors issues of the system reduces the degree of uncertainty in system behavior. Systems engineering (SE) employs validation to demonstrate that the proper system was constructed. Validation is based on requirements; faulty requirements result in faulty validations. Complete and correct requirements satisfice all stakeholders, inform system designers, and provide a basis for quantitative validation studies. Validation of hardware and software is well developed; validation of human factors is not!”

Samaras describes the challenges that some designers have when it comes to Human Factors, “Even formulation of human factors-related system-focused requirements remains problematic. Designers continue to have difficulty integrating human factors engineering requirements into system designs.”

One of the explanations for the problem of Human Factors integration, according to Samaras, is that, “Introducing human actors into any system significantly increases system complexity. Unvalidated or improperly validated, systems have a high degree of uncertainty (complexity) in their behavior.”

Samaras explains the process of systems engineering as a voyage of discovery, describing it as a “learning process [Eisner, 1997 (157)]. In each iteration, some needs, wants, and desires (NWDs) are identified or discovered. Following a hazard analysis (HA), some or all of the NWDs are selected and formulated as requirements for the product or process under development. The evolving requirements (some more final) define the stakeholders’ evolving understanding of the ‘correct design project.’”

Samaras compares the validation process to a theoretical one from the field of science, “Validation is not essentially different from the general scientific procedures for developing and supporting theories (Cronbach & Meehl, 1955).”

The validation of systems engineering, as Samaras defines it, is based upon “proper, operationally defined formulation of system-focused requirements; it consists of empirical measurements to corroborate compliance with these requirements. By ‘operationally defined,’ we mean you must be able to design a test for it. As the development process progresses, the set of requirements evolve with each iteration (and intermediate validations occur) until the final set of requirements are developed (and validated) and the product or process is deployed.”

It is doubtful that any testing was done for the Apollo 1 hatch to see how quickly astronauts could escape the cockpit in the event of a fire. If this testing had occurred the engineers and designers would have realized immediately that the hatch took far too long to open in an emergency situation. Likewise, it is doubtful that any testing was done to see how fast an airline crewmember, i.e., pilot or co-pilot, could gain access to the cockpit of a passenger plane in an emergency situation where the person in the cockpit is denying access.

Samaras explains the function of requirements in the validation process and describes what is not a requirement, “Consider a frequently stated ‘requirement’ imposed upon a design team: The system must be easy to use! This statement is not a ‘requirement’; it may be an NWD, but absent operational definitions, it is NOT a requirement.” He then explains that a systems engineering requirement is “a natural language statement that operationally defines the validation measurement(s).”
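Samaras’s distinction can be made concrete: to turn “the system must be easy to use” into a requirement, one must state a measurement and a test for it. The following sketch is purely illustrative; the task, time limit, and pass fraction are invented, not drawn from Samaras.

```python
# Sketch: operationalizing the NWD "the system must be easy to use"
# as a measurable requirement. The thresholds are invented for
# illustration, e.g., "at least 90% of trial users complete the task
# within 10 seconds."

def usability_requirement_met(task_times_s: list[float],
                              limit_s: float = 10.0,
                              required_fraction: float = 0.9) -> bool:
    """Pass if enough measured completion times fall under the limit."""
    if not task_times_s:
        return False  # no measurements means nothing was validated
    passed = sum(1 for t in task_times_s if t <= limit_s)
    return passed / len(task_times_s) >= required_fraction
```

The point is not the particular numbers but that the statement now defines its own validation measurement, exactly what Samaras asks of a requirement.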

Samaras also provides a succinct definition of Requirements Engineering, calling it, “that engineering activity of discovering stakeholder NWDs, selecting those NWDs that will be translated into requirements, and formulating the requirements, so that they satisfice stakeholders, inform designers, and provide a complete and correct basis for validation.”

Samaras dismisses the idea that consequences such as the slow-hatch opening of Apollo 1 or the inability for the Germanwings pilot to access his cockpit are not avoidable, stating “There are no ‘unintended’ consequences, only unanticipated consequences that are usually unwelcome! A cardinal rule of system development has to be ruthless enforcement of requirements engineering. Complete and correct requirements satisfice all stakeholders, inform designers, and provide a basis for validation measurements. This has been traditionally difficult in human-centered design.”

Olivier L. de Weck provides some basic background for the verification and validation process. de Weck is a Professor of Aeronautics and Astronautics and Engineering at the Massachusetts Institute of Technology (MIT) and an Adjunct Professor at École polytechnique fédérale de Lausanne (EPFL).

Validation and verification are often difficult to distinguish. de Weck references the NASA Systems Engineering Handbook (hereafter referred to as the NASA handbook) for a definition of verification testing, which “relates back to the approved requirements set (such as an SRD [Systems Requirement Document]) and can be performed at different stages in the product life cycle. Verification testing includes: (1) any testing used to assist in the development and maturation of products, product elements, or manufacturing or support processes; and/or (2) any engineering-type test used to verify the status of technical progress, verify that design risks are minimized, substantiate achievement of contract technical performance, and certify readiness for initial validation testing. Verification tests use instrumentation and measurements and are generally accomplished by engineers, technicians, or operator-maintainer test personnel in a controlled environment to facilitate failure analysis.”

In short, verification, according to de Weck, occurs “during development,” is used to “check if requirements are met,” is conducted “typically in the laboratory,” and is “component/subsystem centric.”

Conversely, the NASA handbook defines validation testing in the following way, “Validation relates back to the ConOps document. Validation testing is conducted under realistic conditions (or simulated conditions) on any end product to determine the effectiveness and suitability of the product for use in mission operations by typical users and to evaluate the results of such tests. Testing is the detailed quantifying method of both verification and validation. However, testing is required to validate final end products to be produced and deployed.”

The NASA handbook points out that it is the question of objectives that distinguishes verification from validation, “It is essential to confirm that the realized product is in conformance with its specifications and design description documentation (i.e., verification). Such specifications and documents will establish the configuration baseline of that product, which may have to be modified at a later time. Without a verified baseline and appropriate configuration controls, such later modifications could be costly or cause major performance problems. However, from a customer point of view, the interest is in whether the end product provided will do what the customer intended within the environment of use (i.e., validation). When cost effective and warranted by analysis, the expense of validation testing alone can be mitigated by combining tests to perform verification and validation simultaneously.”

de Weck provides the following chart to explain the verification process, which includes validation:

Slide 6. Fundamentals of System Engineering. Professor de Weck. 2015.

The following paraphrases de Weck’s lecture comments on this section, “The inner loop is the verification loop. You verify whether your design satisfied the requirements as written. That’s what the verification is. And then there’s an outer loop where you take your implemented design solution and you essentially take it all the way back to the stakeholders and you are deploying it in a realistic environment. In the environment in which the stakeholders will actually use the system. Not in a pristine lab environment. The stakeholders try out the system and you get to see if they are satisfied. This is called validation. A lot of people who don’t know system engineering and haven’t been exposed to it and they hear verification and validation and they think it is two different words for the same thing. It is different. It is not the same thing. If you successfully verify and validate you end the SE process and you deliver.”

In his lecture de Weck goes on to explain more about the contrasts between verification and validation, “the real distinguishing factor is whether this activity happens in a lab, in a very controlled environment under stylized conditions or whether you are actually going out in the field in a realistic mission environment with real users or real potential users who [are] not especially knowledgeable and not especially trained about the system.”

The plugs-out test on Apollo 1 could be argued to be verification because of the controlled nature of the test. Then again, the real users, in this case the astronauts, were involved, although they were highly knowledgeable and highly trained. On that basis the plugs-out test was probably a verification step, not a validation step. It is very difficult to “validate” a spacecraft until the craft is actually launched into space. And yet much of the testing of the Apollo 1 command module had occurred without the astronauts’ participation. The fact that the command module was occupied by the three astronauts, and the fact that it was running on its own power, would suggest a validation process.

Indeed, the definition of the validation process in the NASA handbook would seem to support this, “The Product Validation Process is the second of the verification and validation processes conducted on a realized end product. While verification proves whether “the system was done right,” validation proves whether “the right system was done.” In other words, verification provides objective evidence that every “shall” statement was met, whereas validation is performed for the benefit of the customers and users to ensure that the system functions in the expected manner when placed in the intended environment. This is achieved by examining the products of the system at every level of the structure.”
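The handbook’s contrast, “the system was done right” versus “the right system was done,” can be sketched in miniature. The following is only an illustrative toy, not a real systems-engineering tool; the function names, dictionaries, and property names are invented for the example:

```python
def verify(design: dict, requirements: dict) -> bool:
    """Inner loop: does the implemented design satisfy every written
    'shall' statement? (a controlled, lab-style check)"""
    return all(design.get(key) == value for key, value in requirements.items())

def validate(design: dict, stakeholder_needs: dict) -> bool:
    """Outer loop: does the system do what the customer intended in the
    environment of use? Needs may exceed the written requirements."""
    return all(design.get(key) == value for key, value in stakeholder_needs.items())

# Apollo 1 in miniature: a hatch that met its written requirement
# (sealing under cabin pressure) but not the unstated stakeholder need
# (rapid egress in an emergency).
requirements = {"hatch_seals_under_pressure": True}
stakeholder_needs = {"hatch_seals_under_pressure": True,
                     "egress_under_10_seconds": True}
design = {"hatch_seals_under_pressure": True,
          "egress_under_10_seconds": False}

print(verify(design, requirements))         # the system was done right...
print(validate(design, stakeholder_needs))  # ...but not the right system
```

A design can pass verification while failing validation; that asymmetry is exactly the gap between written requirements and intended use that this paper is concerned with.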

It would seem that the Apollo 1 plugs-out test was performed for the benefit of the astronauts to ensure that the “system functions in the expected manner when placed in the intended environment.” As it happened, the system did not function in “the expected manner.”

Post 9/11, did the designers of the reinforced cockpit doors fail to anticipate a situation in which a pilot or co-pilot might need to gain access to the cockpit in an emergency? Did the designers of the Apollo 1 hatch fail to anticipate a situation in which the flight crew might need to escape the cockpit quickly?

Samaras concludes one of his discussion sections with a question that has resounding relevance to the door-design failures of Apollo 1 and Germanwings Flight 9525, “This discussion focuses on the HFE aspects of one sub-process of the SE process, validation and its pre-requisite – properly formulated requirements. What we mean by validation is the empirical comparison of the implementation against the properly formulated requirements. The question we are asking, ‘Did we build the right system?’”

Apollo 1

The hatch design for Apollo 1 should never have been validated. It opened inward, into the command module, and it should have opened outward. It took between 60 and 90 seconds for the crew to open it. This is far too long in the event of a cabin fire. And a cabin fire was always a concern with the command module. It just wasn’t predicted to happen on the ground during testing.

Joseph Shea, who had received the photograph prayer parody from Grissom, White, and Chaffee, referenced earlier in the paper, was a brilliant engineer who would suffer greatly after the Apollo 1 fire. He was, in many ways, the man in charge. Shea had worked as a systems engineer on the radio guidance system of the Titan I intercontinental ballistic missile (ICBM) and was also the program and development manager on the inertial guidance system of the Titan II ICBM. Shea’s area of expertise was systems engineering, a relatively new discipline in the 1950s that focused on the management and integration of large-scale projects, turning the work of engineers and contractors into a unified, functional whole. There would be no bigger large-scale project than Apollo.

Joseph Shea with models of the command module and lunar module. (NASA)

North American Aviation was the primary contractor that NASA used for the building of the command module. Shea was interviewed twice in 1998 by Michelle Kelly as part of NASA’s Oral History Project. During these interviews the normally reticent Shea revealed some of the issues he had, as Apollo Program Manager, with North American. Shea passed away less than three months after the conclusion of his November 1998 interview.

SHEA: North American was a very difficult company to work with. The night they won the contract for the [unclear] service module, they had a party. They gave out hats, [unclear] hats. Do you know what was on the hat?

KELLY: What’s that?

SHEA: Here’s what the hat looked like. [Drawing: NA$A]

KELLY: Oh, no.

SHEA: Yes, ma’am. That’s right. Honest to God, that’s how –

KELLY: Just for the tape I’m going to say that it says NASA with a dollar sign as the S (NA$A).

SHEA: Dollar sign in the middle. And they acted that way most of the time.

According to Ben Evans, North American was not happy with the pure-oxygen environment of the command module, “To be fair, North American had faced their own technical challenges. NASA had mandated that the Apollo command module should operate a pure oxygen atmosphere – an extreme fire hazard, admittedly, but infinitely less complex than trying to implement an oxygen-nitrogen mix, which, if misjudged, could suffocate the men before they even knew about it. In space, the cabin would be kept at a pressure of about a fifth of an atmosphere, but from ground tests would be pressurised to slightly above one atmosphere. This would eliminate the risk of the spacecraft imploding, but at such high pressures there remained the danger that anything which caught fire would burn almost explosively. At an early stage, North American objected to the use of pure oxygen, but NASA, which had employed it without incident on Mercury and Gemini, overruled them.”

This photograph is from an earlier test inside the command module. It shows the cramped quarters of the command module. The hatch is behind Ed White (center). (NASA)

The Apollo hatch was just one of many issues Shea was juggling. According to Evans, “Other worries surrounded Apollo’s hatch: a complex device which actually came in two cumbersome pieces – an inner section, which opened into the command module’s cabin, overlaid by an outer section. North American wanted to build a single-piece hatch, fitted with explosive bolts, but NASA felt that this might increase the risk of it misfiring on the way to the Moon. By adopting an inward-opening hatch, cabin pressure would keep it tightly sealed in flight…but notoriously difficult to open on the ground.” It should be mentioned that the hatch with explosive bolts is what got Grissom into so much trouble after the Liberty Bell 7 landed in the ocean.

Shea expanded on the possibility of a cockpit fire during his November 23, 1998 interview with Michelle Kelly:

SHEA: Well, fire was always a concern. At the acceptance test for the spacecraft, we had a discussion—[Virgil I. “Gus”] Grissom brought it up initially—about there being too much Velcro and too much other stuff around. The fire rule was that anything that might respond to a spark and start a fire should be—it was four inches in Mercury, I think, and it was like ten or eleven—ten and a half inches in Apollo. The crew liked to customize the spacecraft, and they would put Velcro wherever they wanted. Nobody was checking on that. They had this other thing called Rocelle [phonetic] netting, where they’d put their books and so on and so forth. And so the issue was brought up at the acceptance of the spacecraft, a long drawn-out discussion. I got a little annoyed, and I said, “Look, there’s no way there’s going to be a fire in that spacecraft unless there’s a spark or the astronauts bring cigarettes aboard. We’re not going to let them smoke.” Well, I then issued orders at that meeting, “Go clean up the spacecraft. Be sure that all the fire rules are obeyed.” That was in like October. The fire was, what, January something. North American was a slow contractor. Their response to that direction which we gave them the Monday after the spacecraft was delivered, their response in that direction got to the Cape the day of the fire, and, of course, they never had time to work on it. They never worked on it. So, the fire happened.

The burden of system engineering for a project as ambitious as Apollo 1 is enormous. It’s not just engineering so that machines do what they are supposed to do. It is engineering to ensure as much risk reduction as possible. And it’s not just about the users of the machines. In high-risk engineering domains such as the Apollo 1 project, each user, each astronaut, has a family.

From rear to front, Ed White, Patricia (wife), Edward (son), and Bonnie (daughter). (NASA)

The validation process doesn’t necessarily take family members into account. But Betty Grissom (wife), Mark Grissom (son), Scott Grissom (son), Patricia White (wife), Bonnie White (daughter), Edward White (son), Martha Chaffee (wife), Stephen Chaffee (son), and Sheryl Lynn Chaffee (daughter), all had a vested interest in a good outcome. And these numbers multiply exponentially when you add the fathers, mothers, brothers, sisters, aunts, uncles, and friends of Gus Grissom, Ed White, and Roger Chaffee. Engineers are not just bridging design gaps to ensure the survival of the human in the machine. In the case of Apollo 1 they were supposed to be bridging gaps so that sons and daughters had a father who came home that January weekend in 1967.

Andrew Chaikin of Air & Space Magazine recounts the pressures of time in the months leading up to the Apollo 1 fire, “Meanwhile, throughout the fall of 1966 Joe Shea and his staff battled a formidable array of problems with Apollo 1, everything from an environmental control system that had burst into flames during a test to indications that when the service module’s propellant tanks were pressurized, they might suddenly explode. Eclipsed by such threats, the situation with flammable materials was rarely on Shea’s radar.” And the situation with the hatch door was not on anyone’s radar.

In his August 26, 1998 interview with Kelly, Shea explains the differences between incremental testing and all-up testing, different methodologies to achieve system validation. Shea explains, “Well, obviously we had to be safe, and obviously we had to put a man on the moon and come back. So, safety had to be, we thought, or I thought, [balanced] against mission success, but they both had to be important. So the first specs I wrote said that safety had to be .999; in other words, one chance in a thousand of losing the astronauts.”
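Shea’s 0.999 figure is more demanding than it sounds. As an illustration only (this is not Shea’s actual allocation method), if the spacecraft is treated as a chain of independent subsystems that must all work, the reliability each subsystem must achieve grows severe as the subsystem count rises:

```python
def per_subsystem_reliability(overall_target: float, n_subsystems: int) -> float:
    """Reliability each of n independent, series-connected subsystems
    must achieve so that their product meets the overall target."""
    return overall_target ** (1.0 / n_subsystems)

# With an overall crew-safety target of 0.999, each subsystem's
# allowable failure budget shrinks rapidly with system complexity.
for n in (10, 100, 1000):
    r = per_subsystem_reliability(0.999, n)
    print(f"{n:4d} subsystems -> each must achieve {r:.6f}")
```

For a craft with a thousand critical elements, each one would need roughly a one-in-a-million failure probability, which gives a sense of the engineering burden behind a single spec line.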

Shea shifted away from traditional testing, explaining that, “we went away from what had been the more traditional kind of testing, which I would call incremental testing, to what was called all-up testing.”

Shea explains the origins of the program philosophy, “That [is] very important for a program to have a philosophy. The program philosophy came out of three sources: the von Braun German team, the NACA aerodynamicists, and the few of us like George [E.] Mueller and myself, who had worked on the Air Force ballistic missile program. So it was the synthesis of those three philosophies. And we decided this incremental approach didn’t make any sense at all, that what we would do would be to go to ‘all up.’”

It should be mentioned that Shea was testing rockets, where incremental testing was impractical because of the time and financial commitment dedicated to each launch. Shea talks about the various launch strategies, “In other words, [on the first launch, all stages] would be a real stage… If [the first] one works, by God, you’re now ready to [ignite and] test the second stage. You’re ahead of the game already. [If second stage] one works, you can test the third stage and you’re ahead of the game again. There’s no need to do it incrementally. So that was really the origin of the ‘all up’ philosophy.”

The all-up philosophy dovetailed with the Go Fever that fuelled the NASA program so the United States could get a human being on the Moon before the Russians. Shea describes the all-up philosophy, “The second part of it was, it saves a lot of schedule time, because every launch we took out was saving three months, and the rest of the schedule was slipping anyhow, and we could still hold in the first manned flight date, because we could sort of adjust the schedule.”

Caption from LIFE: In seclusion at a private home high in the Hollywood Hills, the prime crews for the first and second flights spent two days earlier this winter at an intensive but informal review of flight plans. Grouped around the table are Gus Grissom (far left), Ed White, Roger Chaffee (back showing). Photograph by Ralph Morse – The LIFE Picture Collection/Getty Images.

Shea remained adamant that the philosophy was a thrusting force that enabled NASA to get to the Moon, “So ‘all up’ was the decision that enabled Apollo to get to the moon on time. The first schedules had about ten [incremental vehicle tests] in them. We finally cut it down to two. The philosophy we used—this was Mueller and myself mostly—‘We’ll shoot one all up, put a lot of heavy telemetry on. If everything works, we’re ahead of the game. We’re going to shoot a second one to make sure the first one was not a random success.’ You talk about random failures. You’ve got to have random successes. ‘So you shoot a second one, and again, if everything works all right, then we will put men or a spacecraft aboard a third.’ That’s the whole genesis of the so-called ‘all up’ philosophy … ”

In spite of the random successes, nagging problems were still being overlooked because of the breakneck speed of the program. The issues in the command module were so bad that Shea almost ended up taking a seat inside for the plugs-out test. According to Chaikin, “Shortly before the plugs-out test, Gus Grissom had asked him to join the astronauts in the spacecraft, to see for himself how “messy” the procedures were. Shea had considered it and decided that since there was no way to provide a communications line for him inside the command module, it wasn’t a workable idea. What if he had made a different decision? Sitting on the cabin floor during the test, would he have noticed the first spark before it became a blaze, and been able to smother it?”

Did the all-up testing methodology contribute to the fire in the command module? Kelly asked Shea about this in her November 23, 1998 interview with him.

KELLY: I understand that you were, I guess, in the midst of doing your all-up testing at that point, which really saved the Apollo Program itself, and they were doing one of the all-up tests during the time of the fire. Would you like to talk at all about how that played a part in what happened or how it didn’t play a part, to set the record straight?

SHEA: There must have been a door, probably the door that opened to the canisters that scrubbed the carbon dioxide out of the—that door had been opened many times and probably had scraped the insulation from the wire and caused a spark. I’d always said we’d find all problems on the ground. We found that problem on the ground. It is a part of the program I am particularly bitter about because of typical North American slow response. Then I don’t understand why, after everything I had done for the program, why I was [the] only one that was removed. That’s the end of the program for me.

Shea does not explain whether the all-up testing philosophy contributed to the fire inside Apollo 1. He does, however, provide his own explanation for the cause of the fire. This is, in fact, the only time that I have seen a credible explanation for the origin of the spark that ignited the fire. The fire’s official cause, as of the writing of this paper, remains undetermined.

This is the only plausible cause of the fire I have read, and it comes from the man who was in charge of everything. In spite of this, he was not called to testify before Congress in the aftermath of the tragedy. In Shea’s November 23, 1998 interview with Kelly he says, “It was as if NASA was trying to hide me from the Congress for what I might have said.”

Joseph Shea (left) with Gus Grissom. (NASA)

The fire inside Apollo 1 took a heavy toll on many people. The most devastating effects were felt by the families of Chaffee, Grissom, and White. But people like Shea were also devastated. Chaikin describes that, “After the tragedy, Joe Shea fell into a deep depression, suffering what some have called a breakdown. In the spring of 1967 he was transferred to NASA headquarters but found himself, as he later wrote, wandering the gardens at Washington’s Dumbarton Oaks, ‘alone with a life I wished had ended with the three [astronauts].’ Even after he left NASA to return to private industry, the accident tormented him. He would sit in his den at night, going over the events in his mind again and again.”

At the conclusion of Shea’s November 23, 1998 interview with Kelly she asked him, essentially, if there was anything he wanted to get off his chest.

KELLY: Would you like to talk about anything that maybe I haven’t brought up or haven’t talked about?

SHEA: No, I don’t think so, my dear…

[End of Interview]

There’s something so heartbreaking about Shea’s answer, “No, I don’t think so, my dear.” There was so much, I’m sure, he wanted to say, words he had told himself privately, the anguish, the guilt. But that was neither the time nor the place. Shea, already sick, would pass away within three months of this interview. What a terrible burden he carried for over thirty years.

Chaikin offers his explanation for the major contributing factor of the fire, “… when I talked to Gemini and Apollo astronaut Michael Collins about the fire in 1988, he spoke not of incompetence or negligence but rather a kind of blindness. ‘Given the sophistication of NASA, given the intelligence of its engineers, given the keen, in-depth analysis that they applied to various problems, it’s just amazing that the most simple, elementary things in the world are what bit them,’ he said. ‘Putting a hatch on with about 28 goddamn [latches], where you couldn’t get it off! … And all of this just, somehow, I don’t know why, we’re blind to them. I mean, it makes us think that the quality of our engineering across the board was juvenile, yet it wasn’t! It was very good engineering.’ Collins was right: The fire’s root cause lay in what cognitive scientists call perceptual blindness, in which even very smart people, sure that they are paying attention, can miss what is right in front of them.”

The burial services of Roger Chaffee at Arlington National Cemetery with son, Stephen, daughter, Sheryl Lynn, and wife, Martha. President Johnson sits beside Chaffee’s father and mother. Photo credit: The Grand Rapids Public Museum and City Archives, Roger B. Chaffee Collection.

Germanwings Flight 9525

It is understandable that the cockpit security systems installed immediately after 9/11 were validated as they were. There was a well-placed urgency to secure cockpits worldwide. According to Steve Raabe of the Denver Post, “The virtual inability to breach a cockpit door from the passenger compartment resulted from the Sept. 11, 2001, terrorist hijackings and federal regulations for reinforcing doors, door frames and locks. The rules from the Federal Aviation Administration and international regulators are designed to prevent unauthorized access to cockpits and protect from penetration by small-arms fire and fragmentation devices such as grenades.”

The cockpit security systems that operated on Germanwings 9525 remain validated and verified. There is, at present, no provision for emergency access to the cockpit if the occupant inside the cockpit refuses entry. In the wake of Germanwings 9525, there needs to be an urgent effort by engineers to redesign the cockpit security system so that cockpit ingress is possible in an emergency.

Regrettably, as the Germanwings crash fades in time, so does the urgency to fix the problem. In fact, conditions appear to be getting less safe. One of the measures some countries took after the Germanwings crash was to order all carriers to have two people in the cockpit at all times. This didn’t last in Canada.

As of June 2017, according to Ashley Burke of CBC, “Canadian airlines are no longer required to have two crew in the cockpit at all times, following the expiration today of an order issued by the government after a co-pilot deliberately downed a Germanwings jet in 2015.” Transport Canada’s explanation to CBC was that having two crew members in the cockpit at all times could “reduce the number of flight attendants in the cabin, which could potentially have an impact on passenger safety, especially in an emergency.”

It’s not clear why Transport Canada emphasizes “at all times” as if this cockpit requirement means that flight attendants are in the cockpit for the duration of the flight. Surely, in the wake of the Germanwings crash it is prudent, at the very least, that a member of the flight crew be in the cockpit for a pilot bathroom break that might last two minutes.

According to Burke, Leon Cygman, chair of the Mount Royal University aviation program, said he’s a “little surprised” by the decision: “I would support a two-crew environment at all times.”

Canadian pilots have enthusiastically supported the decision to lift the “two crew members in the cockpit” order. Burke reveals that, “Two major pilot associations said they are satisfied that Transport Canada is dropping the two-crew cockpit rule. ‘The Air Canada Pilots Association is pleased,’ it wrote in a statement to CBC. ‘We think that this strikes the right balance of aircraft safety while also ensuring adequate supervision in the passenger cabin.’ Canada’s president of the Air Line Pilots Association, Dan Adamus, said the pilot community has confidence in the current system. ‘The procedures in place will continue to be safe and secure,’ said Adamus.”

This is an unusual move by Transport Canada. Isn’t this extra layer of security a sensible measure for risk reduction? According to Steve Raabe of the Denver Post, the U.S. requires two crew members in the cockpit at all times, “Domestic carriers have rules that require a crew member, such as a flight attendant, to enter the cockpit when a pilot or co-pilot temporarily leaves, said Greg Fieth, a Golden-based former senior investigator for the National Transportation Safety Board. That procedure is designed primarily as an additional layer of cockpit security and as a precautionary measure if a pilot becomes incapacitated.”

It should be noted that many airlines, including Lufthansa (parent company of Germanwings), also lifted the two-person cockpit rule in 2017. This came about after the European Cockpit Association (ECA) lobbied the European Aviation Safety Agency (EASA) to rescind the two-person-in-the-cockpit rule, which had come in the form of a Safety Information Bulletin (SIB). The ECA represents the collective interests of professional pilots at the European level.

According to the ECA, all of this came about after “a stakeholder consultation where a majority of aviation stakeholders, including more than 3,000 pilots, spoke out against the measure.” The survey received 3783 responses from 56 countries. The majority of the respondents were from the following countries: Germany with 1461 answers, the UK with 506, the Netherlands with 498, France with 338, Spain with 127, and Belgium with 100.

Source: https://www.eurocockpit.be/news/end-2-persons-cockpit-rule-sight.

There were several questions asked but the one that is relevant to this paper was:

Do you agree that the measures proposed by the EASA SIB are effective and appropriate to mitigate the risks associated with flight crew members leaving the cockpit due to operational or physiological needs?

Source: https://www.eurocockpit.be/news/end-2-persons-cockpit-rule-sight.

The chart is clear: 54% of respondents strongly disagreed with the EASA SIB and another 28% disagreed, for a combined 82% opposed to the policy of always having two people in the cockpit. Given that 86% of the respondents were pilots, one can reasonably conclude that pilots in Europe hated this policy.
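The survey arithmetic cited above is easy to check. A small sketch, using the figures as reported by the ECA (the category percentages are rounded, so totals are approximate):

```python
# Responses from the six most-represented countries, as reported above.
answers_by_country = {
    "Germany": 1461, "UK": 506, "Netherlands": 498,
    "France": 338, "Spain": 127, "Belgium": 100,
}
total_responses = 3783

listed = sum(answers_by_country.values())
print(f"Six listed countries: {listed} of {total_responses} responses "
      f"({listed / total_responses:.0%})")

# Combined disagreement with the EASA SIB measures, in percent.
disagree = 54 + 28
print(f"Strongly Disagree + Disagree = {disagree}%")
```

The six named countries account for roughly four-fifths of all responses, so the consultation was heavily weighted toward a handful of Western European pilot communities.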

According to the ECA, “There are a number of arguments that weigh against the rule. First, inferring that pilots require monitoring when they are on their own on the flight deck has a potential to reduce passenger confidence. Second, it is highly doubtful that the presence in the cockpit of a person with no operational knowledge will actually improve security and safety. In fact, as most stakeholders agree, it might actually create new safety and operational concerns. And third, the more people get in and out the cockpit, the higher the risk to seriously compromise in-flight security.”

The ECA does make some valid points but I would argue that the co-pilot of the Germanwings flight wanted an empty cockpit. Another person in the cockpit after the pilot left might have been enough to deter his mass murder. That is reason enough to have two people in the cockpit. Also, it should be noted that two people in the cockpit is still required in the United States, the country that was hit hardest on 9/11.

The efforts by the ECA highlight the challenges that exist when stakeholders resist change. In this case they managed to overturn policy that was geared toward safer air travel. What about other stakeholders? What about passengers? They too are stakeholders. Without passengers there is no airline industry and pilots have no jobs. I wonder what the results of a similar consultation would be with a different group of stakeholders – passengers. I wonder what percentage of them would prefer two people in the cockpit.

Perhaps there are other Human Factors in play with respect to pilot resistance to some forms of cockpit security. Pilots have already lost, and continue to lose, some measures of control when it comes to flying an aircraft. Automation has decreased the aircraft’s reliance on their inputs. Do they now resist any additional measures that reduce their control, even measures as seemingly innocuous as having two people in the cockpit at all times?

It is incumbent upon the airline industry to invest in research and development of a revised cockpit security apparatus that allows for access to the cockpit in an emergency situation. Perhaps pilots and engineers can put their heads together and come up with a workable solution.

Did they build the right cockpit security system? Most pilots think so. But I can’t shake the image of Captain Sondenheimer attacking the cockpit door with an axe and a crowbar while his plane was rapidly descending to a crash. That’s all he had to work with. Human vs. machine. And he was reduced to using the most primitive tools to avert a disaster. The airline industry can do better.

Conclusion

In the aftermath of the Germanwings crash, system engineers of airline cockpit security should take their cue from the aftermath of the Apollo 1 fire. NASA had a problem. Their door didn’t open fast enough and three astronauts died. They fixed it.

Caption from James E. Webb’s History of NASA: Inspecting the new hatch, Wally Schirra makes sure his crew cannot be trapped as was the crew that died in the terrible Apollo spacecraft fire. Opening outward (to swing freely if pressure built up inside), the new hatch had to be much sturdier than the old inward-opening one. The complicated latch sealed against tiny leaks but allowed very rapid release. (NASA)

NASA implemented changes to the hatch, “As a result of the investigation, major modifications in design, materials, and procedures were implemented. The two-piece hatch was replaced by a single quick-operating, outward opening crew hatch made of aluminum and fiberglass. The new hatch could be opened from inside in seven seconds and by a pad safety crew in 10 seconds. Ease of opening was enhanced by a gas-powered counterbalance mechanism.”

A revealing aspect of human nature has surfaced while writing this paper. When I have mentioned to people what my paper is about and when I describe the details of the Germanwings crash there has been a prevailing response along the lines of … “You can’t prevent everything, you can’t account for every possibility.” A more basic rendering would show someone shrugging, hands to the sky, saying, “Waddya gonna do?”

There is an almost fatalistic belief that there are some bad things that can’t be prevented. But they can. And they should. It requires meaningful response to events like Germanwings Flight 9525. A response like the NASA engineers had after Apollo 1. To date, nothing has changed in cockpit security on major passenger aircraft. How long until the next pilot suicide? Will things change after that?

This paper has been very difficult to write. Real people died. Real people who, through research, I felt I got to know a little bit. I care about them. In this spirit I will conclude my thoughts by imagining.

I imagine, in some alternate universe, the Apollo 1 engineers redesigning the hatch before January 27, 1967. There is a fire in the command module. Ed White does what he was trained to do. He opens the hatch in under seven seconds and scrambles out. Roger Chaffee and Gus Grissom quickly follow. They suffer minor injuries but continue in the Apollo program. Shortly thereafter Roger Chaffee pilots the command module around the Moon while Gus Grissom and Ed White descend to the Moon’s surface in the Lunar Module. Gus Grissom climbs out from the Lunar Module and is the first human being to step on the Moon’s surface.

I imagine, in some alternate universe, that prior to March 24, 2015, systems engineers designed an emergency provision whereby a cockpit door could be opened remotely. Germanwings Flight 9525 is rescued by Captain Patrick Sondenheimer after he gets the cockpit door open with the help of air traffic control in France. He is assisted by passengers Martyn Matthews and Sascha Schenk, who pull the co-pilot from the cockpit. Maria Radner, her son Felix, and the other Germanwings passengers are rattled but unharmed.

The commemorative sculpture “Sonnenkugel” (meaning solar sphere), placed at the crash site of Germanwings Flight 9525, is in the form of a gold-plated sphere with a diameter of five meters, made up of 149 different elements. The interior of the sphere contains a crystal-shaped cylinder which in turn contains wooden spheres where the relatives of the victims can place their own personal mementos. ©Boris Horvat / AFP. Retrieved from Francetv, March 18, 2019.
Fourteen astronauts and cosmonauts who died (including the crew of Apollo 1) are listed on a plaque behind the sculpture Fallen Astronaut (8.5 cm), located on the Moon at Hadley Rille, 26.13222ºN 3.63386ºE. Commissioned and placed on the Moon by the crew of Apollo 15 on August 1, 1971. (NASA)
References

Agle, D.C. (1998) “Flying the Gusmobile.” Air & Space Smithsonian. September 1998. Air & Space Magazine. Retrieved March 20, 2019 from https://www.airspacemag.com/flight-today/flying-the-gusmobile-218187/.

Alexander, I., Maiden, N. (2004). Scenarios, Stories, Use Cases Through the Systems Development Life-cycle. Wiley, New York.

Antón, A.I., Carter, R.A., Dagnino, A., Dempster, J.H., Siege, D.F. (2001). “Deriving Goals from a Use-Case Based Requirements Specification.” Requirements Engineering, College of Engineering, North Carolina State University (63-73).

Apollo Information Sheet. (1996). “Apollo Hatch Redesign – A Matter of Urgency.” December 1996, Issue 2. Retrieved March 25, 2019 from http://www.space1.com/pdf/news1296.pdf.

Bateson, J. (2012). “The Golden Gate Bridge’s Fatal Flaw.” Los Angeles Times. May 25, 2012.

Berger, E. (2016). “Gus Grissom Taught NASA a Hard Lesson: ‘You Can Hurt Yourself in the Ocean.’” Ars Technica. November 8, 2016. Retrieved March 19, 2019 from https://arstechnica.com/science/2016/11/with-every-splashdown-nasa-embraces-the-legacy-of-gus-grissom/.

Burgess, C., Doolan, K., Vis, B. (2003). Fallen Astronauts – Heroes Who Died Reaching for the Moon. University of Nebraska Press, Lincoln, Nebraska.

Burke, A. (2017). “Rule Requiring Airlines to Keep 2 Crew in Cockpit at All Times Lifted by Transport Canada.” CBC News. June 16, 2017. Retrieved March 28, 2019 from https://www.cbc.ca/news/canada/ottawa/transport-canada-two-flight-crew-cockpit-1.4164592.

Chaikin, A. (2016) “Apollo’s Worst Day.” Air & Space Magazine. November 2016. Retrieved March 29, 2019 from https://www.airspacemag.com/history-of-flight/apollo-fire-50-years-180960972/.

Cronbach, L.J., Meehl, P.E. (1955). “Construct Validity in Psychological Tests.” Psychological Bulletin, 52:281-302.

de Weck, O.L. “Verification and Validation.” Fundamentals of System Engineering. Massachusetts Institute of Technology. Fall 2015. Retrieved April 6, 2019 from https://ocw.mit.edu/courses/aeronautics-and-astronautics/16-842-fundamentals-of-systems-engineering-fall-2015/lecture-notes/MIT16_842F15_Ses9_Ver.pdf.

de Weck, O.L. “Verification and Validation.” MIT OpenCourseWare. Published on August 10, 2017. Retrieved April 6, 2019 from https://www.youtube.com/watch?v=-63JXElqPaY.

ECA. (2016). “The End of the 2-Person Rule in Sight.” May 31, 2016. Retrieved March 28, 2019 from https://www.eurocockpit.be/news/end-2-persons-cockpit-rule-sight.

Eisner, H. (1997). Essentials of Project and Systems Engineering Management. New York: Wiley-Interscience.

Evans, B. (2012). “Phoenix from the Ashes: The Fall and Rise of Pad 34.” AmericaSpace. July 12, 2012. Retrieved March 29, 2019 from https://www.americaspace.com/2012/07/07/phoenix-from-the-ashes-the-fall-and-rise-of-pad-34/.

German Depression Foundation. (2018). “5.3 Million Germans Suffer From Depression Each Year.” November 29, 2018. Retrieved March 20, 2019 from https://www.dw.com/en/53-million-germans-suffer-from-depression-each-year/a-46506088.

Hammer, J. (2016). “The Real Story of Germanwings Flight 9525.” GQ. February 22, 2016. Retrieved March 23, 2019 from https://www.gq.com/story/germanwings-flight-9525-final-moments.

Hou, M., McColl, D., Heffner, K., Banbury, S., Charron, M., Arrabito, R. (2017). “Authority Pathway: Intelligent Adaptive Automation for a UAS Ground Control Station.” Engineering Psychology and Cognitive Ergonomics – Performance, Emotion and Situation Awareness. 14th International Conference, EPCE 2017. Vancouver, B.C., Canada, July 9-14, 2017, Proceedings Part I.

Hou, M., Banbury, S., Burns, C. (2015). Intelligent Adaptive Systems. Boca Raton, FL: CRC Press, Taylor & Francis Group.

Koziol, M. (2014). “Pilot Locked Out of Air New Zealand Cockpit after Mid-Air Dispute.” The Sydney Morning Herald. July 6, 2014. Retrieved March 28, 2019 from https://www.smh.com.au/lifestyle/pilot-locked-out-of-air-new-zealand-cockpit-after-midair-dispute-20140706-zsxra.html.

Kutner, M. (2015). “The Germanwings Crash Raises Questions About Reinforced Cockpit Doors.” Newsweek. March 26, 2015. Retrieved March 21, 2019 from https://www.newsweek.com/germanwings-crash-raises-questions-about-reinforced-cockpit-doors-317029.

Larimer, S. (2017). “‘We Have a Fire in the Cockpit!’ The Apollo 1 Disaster 50 Years Later.” The Washington Post. January 26, 2017. Retrieved March 15, 2019 from https://www.washingtonpost.com/news/speaking-of-science/wp/2017/01/26/50-years-ago-three-astronauts-died-in-the-apollo-1-fire/?noredirect=on&utm_term=.dd33f537b064.

MacLeod, I.S. (2008). “Scenario-based Requirements Capture for Human Factors Integration.” Cognitive Technical Work. 10:191-198. Springer-Verlag, London Limited.

Murray, C., Cox, C.B. (1990). Apollo: The Race to the Moon. New York: Simon & Schuster.

NASA. “Apollo 1 – The Fire – 27 January 1967.” Retrieved March 21, 2019 from https://history.nasa.gov/SP-4029/Apollo_01a_Summary.htm.

NASA. (2007). Systems Engineering Handbook. NASA/SP-2007-6105 Rev1. Retrieved April 7, 2019 from https://www.nasa.gov/sites/default/files/atoms/files/nasa_systems_engineering_handbook.pdf.

Needell, A. (2017). “Learning from Tragedy: Apollo 1 Fire.” January 27, 2017. Retrieved March 25, 2019 from https://airandspace.si.edu/stories/editorial/learning-tragedy-apollo-1-fire.

Orloff, R.W. (2004). “Apollo 1 – The Fire: 27 January 1967.” Apollo by the Numbers: A Statistical Reference. NASA History Division. Office of Policy and Plans. NASA History Series. Washington, D.C.

Raabe, S. (2015). “U.S. Airlines Use ‘Two-Crew’ Cockpit Rule to Stop Renegade Pilots.” The Denver Post. March 26, 2015. Retrieved March 28, 2019 from https://www.denverpost.com/2015/03/26/u-s-airlines-use-two-crew-cockpit-rule-to-stop-renegade-pilots/.

Samaras, G.M. (2005). “Engineering Complex Systems: Validating the Human Factors.” Proc. 7th Annual Symposium on Human Interactions with Complex Systems. Greenbelt, MD., Nov. 17-18, 2005. Retrieved March 18, 2019 from https://pdfs.semanticscholar.org/6e21/5c36941aa34e8696d833b31d2e2a28ed4c06.pdf.

Schank, R. (1995). Tell Me a Story: Narrative and Intelligence. Northwestern University Press, Evanston.

Shea, J. Interview. Kelly, M. NASA Johnson Space Center Oral History Project – Oral History Transcript. August 26, 1998. Retrieved March 29, 2019 from https://historycollection.jsc.nasa.gov/JSCHistoryPortal/history/oral_histories/SheaJF/SheaJF_8-26-98.pdf.

Shea, J. Interview. Kelly, M. NASA Johnson Space Center Oral History Project – Oral History Transcript. November 23, 1998. Retrieved March 29, 2019 from https://historycollection.jsc.nasa.gov/JSCHistoryPortal/history/oral_histories/SheaJF/SheaJF_11-23-98.pdf.

Slayton, D.K. (1994). Deke! – U.S. Manned Space from Mercury to the Shuttle. Forge Paperback, New York.

Slayton, D.K. “The Story of Apollo 1 – Parts 1, 2, 3.” YouTube. Retrieved March 22, 2019 from:

https://www.youtube.com/watch?v=eUleWlFkZHg

https://www.youtube.com/watch?v=OPh_f2mhWe4

https://www.youtube.com/watch?v=GErfc7IlWE4

Thompson, F., Borman, F., Faget, M., White, G., Geer, B. (1967). Report of Apollo 204 Review Board (PDF). NASA. Archived (PDF) from the original on May 14, 2016. Retrieved on March 15, 2019 from https://history.nasa.gov/Apollo204/appendices/AppendixD12-17.pdf.

Weigel, G. (2017). “Grit, Gus, and Glory.” The New Atlantis. Number 52, Spring 2017, pp. 128-131. Retrieved March 19, 2019 from https://www.thenewatlantis.com/publications/grit-gus-and-glory.

White, M. “Detailed Biographies of Apollo 1 Crew – Gus Grissom.” NASA History. Retrieved on March 15, 2019 from https://history.nasa.gov/Apollo204/zorn/grissom.htm.