
With the launch of the 737 MAX, Boeing promised a high-tech update to the world’s most popular plane. The model contained a software feature new to commercial aviation: the Maneuvering Characteristics Augmentation System (MCAS), designed to automatically resolve emergencies caused by too much lift without any help from a human pilot. It seemed a sound enough program, that is, until October 2018, when a faulty sensor on Lion Air Flight 610 mistakenly activated MCAS and initiated a dive, overriding the pilots and causing the plane to crash, killing all 189 passengers and crew. Only five months later, the same software malfunction on Ethiopian Airlines Flight 302 led to the deaths of another 157 people.
What happened? In one sense, what happened was a terrible design process. Given the software’s complexity, which would have cost pilots precious time to master, Boeing’s designers had opted not to tell pilots or airlines about the program, wiping all references to it from training manuals. MCAS activation was unlikely enough, they reasoned, that there was no need for pilots to know about it, much less to spend time getting trained in its operation. Having been kept deliberately ignorant of the program, the pilots could not understand the problem when it struck.
But in a deeper sense, the question remains a mystery. How is it possible for a technological system to come into existence—one as huge and complicated as the world’s largest aerospace company—in which agents utterly lack the information they need to perform their roles, no one can understand why things happen as they do, and it is unknown where blame should rest when disaster strikes? (Only one employee, a technical pilot, faced criminal charges, and he was acquitted. Boeing recently reached a deal with the Justice Department to avoid criminal prosecution.)
Continue reading in American Affairs.