In March 2000, a fire struck a semiconductor plant in New Mexico, leaving Ericsson Inc. short of millions of chips that the Swedish telecom giant was counting on to launch a new mobile phone product. As a result, Ericsson was ultimately driven from the market (it would later re-enter through a joint venture with Sony Corp.) while its rival Nokia Corp. flourished. Ericsson had failed to recognize the New Mexico plant as a bottleneck in a complex, interconnected global supply chain.
Ericsson is not the only company to suffer a catastrophe due, in part, to the complexity of its own systems. In February 1995, Barings Bank, Britain’s oldest merchant bank (it had financed the Napoleonic wars, the Louisiana Purchase and the Erie Canal) went from strength and prestige to bankruptcy over the course of days. The failure was caused by the actions of a single trader — Nick Leeson — who was based in a small office in Singapore. Soon after Leeson’s appointment as general manager of Barings Securities Singapore, he used a secret account to hide losses he sustained engaging in the unauthorized trading of futures and options. The complexity of the Barings systems enabled Leeson to fool others into thinking that he was making money when in fact he was losing millions. But after the January 1995 Kobe, Japan, earthquake had rocked the Asian financial markets, Leeson’s accumulated losses — some $1.4 billion — became too enormous to hide, eventually leading to Barings’ collapse.
In the past, companies have tried to manage risks by focusing on potential threats outside the organization: competitors, shifts in the strategic landscape, natural disasters or geopolitical events. They are generally less adept at detecting internal vulnerabilities that make breakdowns not just likely but, in many cases, inevitable. Vulnerabilities enter organizations and other human-designed systems as they grow more complex. Indeed, some systems are so complex that they defy a thorough understanding. In August 2006, a defective software program aboard a Malaysia Airlines jetliner flying from Perth, Australia, to Kuala Lumpur, Malaysia, supplied incorrect data about the aircraft’s speed and acceleration. This confused the flight computers, which sent the Boeing 777 on a 3,000-foot roller-coaster ride. With more than five million lines of code, aircraft software programs have become too large and complex to be tested thoroughly and are fielded without any guarantee that they will always work.