To Forgive Design: Understanding Failure
Author: Henry Petroski
Overview
In “To Forgive Design: Understanding Failure,” I delve into the multifaceted nature of engineering failures, moving beyond the technical aspects to explore the human, organizational, and historical factors that contribute to catastrophic events. My target audience is anyone interested in understanding why things break, from engineers and designers to policymakers and the general public. The book is especially relevant in today’s world, where increasingly complex technological systems and persistent calls for innovation and rapid development heighten the risks of unintended consequences. While failure is an inherent part of the human condition and the engineering process, I argue that we can learn from our mistakes and use them as stepping stones to create more robust and reliable designs. The book is a case study-driven exploration of landmark accidents and near-failures, offering readers a wide-angle lens through which to understand failure in context and its profound impact on how we interact with technology. I revisit classic cases like the Tacoma Narrows Bridge and introduce newer examples like the Deepwater Horizon oil spill, highlighting the timeless nature of engineering challenges and the persistence of human error in even the most advanced systems. I also address the social, economic, and political dimensions of failure, exploring the ethical responsibilities of engineers, the often contentious aftermath of accidents, and the importance of effective communication between technical experts and decision-makers. The overarching message is that failure, while inevitable, is not insurmountable. By forgiving design—that is, by acknowledging our limitations and embracing a failure-averse mindset—we can build a safer and more sustainable future.
Book Outline
1. By Way of Concrete Examples
I begin by examining several high-profile failures in different engineered systems, from airplane crashes to leaking tunnels to collapsing buildings, emphasizing that failures often stem not from faulty designs themselves, but from a combination of factors, including human error, material deficiencies, inadequate maintenance, and unforeseen circumstances during construction, operation, or use.
Key concept: A design is a manifestation of a technological concept, but a designed thing or system can also be neglected, misused, and mishandled by its owners, managers, operators, and users. Getting to the root cause of a failure can sometimes take years, because it can be so subtle and counterintuitive.
2. Things Happen
Prolonged success in engineering can lead to complacency and a lack of attention to potential failure modes. Using recent and historical examples, I highlight how systemic failures can occur when organizations become overconfident in their established practices and fail to adapt to changing conditions or new information.
Key concept: Every success sows the seeds of failure. Success makes you overconfident.
3. Designed to Fail
Sometimes, components are designed to fail to protect a larger system. This concept, known as “managed failure,” is illustrated with examples ranging from the weak points in outdoor stage roofs to the intentional breaking points in eggshells. I also discuss how even minor changes in a design can introduce new, unforeseen failure modes, as seen in the cases of the Citicorp Center and the Apple iPhone 4.
Key concept: The design is us.
4. Mechanics of Failure
Drawing on my experiences at the University of Illinois’ Talbot Laboratory, home to a massive testing machine, I illustrate how failure analysis is central to engineering. From Galileo’s analysis of scaling to my personal introduction to “real-world” failure at Argonne National Laboratory, I underscore the importance of testing, both for understanding material properties and for validating theoretical predictions.
Key concept: Theoretical mechanics seeks exact answers to approximate problems, while applied mechanics seeks approximate answers to exact problems.
5. A Repeating Problem
I explore how seemingly unrelated fields, such as dentistry and engineering, share common ground when it comes to understanding and dealing with failure. I use the analogy of a cracked tooth to explain the insidious nature of fatigue-crack initiation, growth, and fracture, a process relevant to many engineering structures, and emphasize the importance of regular inspections.
Key concept: Just as a medical doctor sends patients’ blood samples to a laboratory […], so do construction engineers send concrete samples to a lab […]. The integrity of this routine work should be taken for granted, but some years ago incidents in the New York City area revealed that it was not necessarily wise to do so.
6. The Old and the New
The longevity of bridges depends on many factors, and their failure can range from rapid collapses to slow deterioration. I discuss the trade-offs between different bridge materials and designs, the crucial role of ongoing maintenance, and the importance of learning from past failures, using examples like the Waldo-Hancock Bridge.
Key concept: Postponing scheduled inspections or maintenance work, such as bridge painting, can result in disastrous consequences.
7. Searching for a Cause
I discuss how eyewitness accounts and physical evidence are pieced together in failure investigations, but can often be conflicting, incomplete, or misinterpreted. I use the collapse of the Silver Bridge as an example, where the ultimate cause was traced to a small flaw and a changed design.
Key concept: A forensic investigation has to sort through all of this—and more—to try to come up with what really happened and why.
8. The Obligation of an Engineer
The importance of learning from past failures is paramount, especially as new technologies are developed. I explore how seemingly minor changes to designs can introduce new failure modes and how a catalog of past failures and case studies can be invaluable in preventing similar mistakes, using the collapse of the Quebec Bridge and its impact on Canadian engineering culture.
Key concept: Fail me once, shame on you; fail me twice, shame on me.
9. Before, during, and after the Fall
I explore the importance of context in analyzing failures, using examples such as the collapse of the Tacoma Narrows Bridge and the sinking of the Titanic. I emphasize that learning from failure is crucial for successful engineering design, and highlight how a focus solely on past successes can lead to overconfidence and future failures.
Key concept: Success is for the moment, and only that moment.
10. Legal Matters
The legal implications of engineering failures are complex and often involve competing theories and lengthy court battles. I use the I-35W bridge collapse in Minneapolis as a case study to discuss the challenges of assigning blame and the importance of comprehensive failure investigations, emphasizing that legal arguments are not necessarily scientific or engineering proofs.
Key concept: Legal arguments about causes of failures rely largely on expert testimony about hypothetical failure scenarios.
11. Back-Seat Designers
Even simple design decisions, like abbreviating years in computer code, can have unforeseen and far-reaching consequences, as seen in the Y2K problem. I discuss the challenges of designing complex systems, the need for a holistic view, and how conflicting constraints can lead to design compromises, using examples ranging from car headrests to software algorithms.
Key concept: Back-seat designers.
12. Houston, You Have a Problem
Human factors and organizational culture play a crucial role in engineering failures, as highlighted by the Deepwater Horizon oil spill. I explore how complex systems involving human-machine interactions can be vulnerable to failure when communication breakdowns and competing priorities between engineers and managers occur.
Key concept: We are designing every option to be successful, and we are planning for it failing.
13. Without a Leg to Stand On
From childhood experiences of “fishing” for lost treasures under subway grates to the complex design of modern tower cranes, I show how engineering principles and an understanding of failure are relevant at all scales. I discuss several tower crane accidents and the regulatory changes they spurred, emphasizing the importance of following good practice and the potentially devastating consequences of cutting corners.
Key concept: Children learn a lot from play.
14. History and Failure
I conclude by reiterating the importance of learning from past failures, not just in engineering but in all aspects of life. Using the Tacoma Narrows Bridge collapse and the 2008 financial crisis as examples, I argue that successful change comes from anticipating and adapting to potential failure modes, and emphasize that history, when viewed through the lens of engineering, offers invaluable lessons for building a more resilient future.
Key concept: The history of engineering, as that of civilization itself, is clearly one of both successes and failures, and paradoxically the failures are the more useful component of the mix.
Essential Questions
1. What are the most common causes of engineering failures, and how can we better understand their complex nature?
Engineering failures rarely stem from a single, isolated cause. They often result from a complex interplay of technical flaws, human error, organizational culture, economic pressures, and even seemingly innocuous design choices. The book explores multiple case studies across various domains, including bridge collapses, airplane crashes, and software glitches, demonstrating how seemingly minor oversights or deviations from best practices can cascade into catastrophic events. Understanding these complex interactions is crucial for effective failure analysis and for developing strategies to mitigate future risks. Often, the root cause lies not in the design itself, but in the human systems surrounding the design, highlighting the need for a holistic approach to failure analysis.
2. Why is learning from past failures essential for successful engineering design, and how does it contribute to a deeper understanding of engineering principles?
Learning from failure is not simply about avoiding past mistakes, but also about gaining a deeper understanding of engineering principles and their applications. Failure analysis provides valuable insights into how and why things break, revealing hidden weaknesses in designs, materials, or processes that may not have been apparent during the design phase. By studying past failures, engineers can refine their understanding of the underlying scientific and mathematical principles governing engineered systems and use this knowledge to improve future designs and prevent similar accidents. Moreover, a historical perspective on engineering failures offers invaluable lessons about the importance of adapting to changing conditions, incorporating new knowledge, and embracing a failure-averse mindset.
3. What is the obligation of an engineer, and how can the lessons of past failures inform ethical decision-making in the profession?
Engineers have a profound responsibility to prioritize public safety, health, and welfare in their designs and practices. This responsibility extends beyond simply meeting technical specifications and requires a thorough understanding of potential risks and consequences, as well as a commitment to ethical decision-making. The book explores several cases where ethical lapses or a lack of due diligence on the part of engineers and managers contributed to catastrophic failures, highlighting the devastating impact such failures can have on human lives and the environment. The Iron Ring tradition among Canadian engineers, with its emphasis on professional obligation and accountability, serves as a powerful reminder of this crucial ethical dimension of engineering.
4. How can success itself contribute to future failures, and how can engineers cultivate a failure-averse mindset to prevent such outcomes?
While successes can be inspirational and boost confidence, an overreliance on past achievements can lead to complacency, overconfidence, and a lack of attention to potential failure modes. This is particularly true when engineers and organizations become overly reliant on established practices and fail to adapt to changing conditions, incorporate new knowledge, or anticipate unforeseen circumstances. The book offers numerous examples of how prolonged success can breed a culture of denial and create blind spots that ultimately lead to failures. Successful design requires not only an understanding of what has worked in the past, but also a keen awareness of what has failed and why, fostering a failure-averse mindset that anticipates and mitigates potential risks.
5. How do non-technical factors influence design decisions, and how can we better balance competing constraints and priorities in engineering projects?
Design is not solely a technical endeavor, but also a human-centered process that involves making choices and compromises in the face of competing constraints and objectives. The book explores the complex interplay of technical, social, economic, political, and aesthetic factors that influence design decisions, demonstrating how value judgments and subjective preferences can sometimes overshadow crucial safety considerations. From the aesthetic flourish that contributed to the Dee Bridge disaster to the sleek design that proved to be the Achilles’ heel of the Tacoma Narrows Bridge, I illustrate how seemingly minor design choices can have unintended and catastrophic consequences. The book emphasizes the need for a holistic approach to design that considers the broader context in which engineered systems operate and prioritizes human safety and well-being.
Key Takeaways
1. Anticipate long-term consequences of design decisions.
The Y2K problem, stemming from the use of two-digit years in legacy code, perfectly illustrates how seemingly innocuous design decisions can have unforeseen and far-reaching consequences. Even simple shortcuts, taken in the name of efficiency or to conserve resources, can create vulnerabilities that only become apparent years later. The scramble to fix the Y2K bug highlights the importance of anticipating the long-term implications of design choices, especially in rapidly evolving technological landscapes.
Practical Application:
In AI system development, rigorous testing and validation are crucial. After each iteration or code update, comprehensive testing should be performed, including unit tests, integration tests, and system-level tests, to ensure that new bugs haven’t been introduced and that the system still functions as intended under all expected conditions.
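As a minimal sketch of what such a regression test can look like (the `expand_year` helper and its pivot rule are hypothetical, chosen to echo the Y2K example above):

```python
# test_dates.py -- a hypothetical regression test illustrating both the
# Y2K pitfall and the habit of re-running tests after every change.

def expand_year(two_digit_year: int) -> int:
    """Expand a two-digit year using a fixed pivot window.

    Years 00-49 are read as 2000-2049; years 50-99 as 1950-1999.
    The pivot (50) is an assumption, and exactly the kind of silent
    design decision the Y2K episode warns about.
    """
    if not 0 <= two_digit_year <= 99:
        raise ValueError("expected a two-digit year")
    return 2000 + two_digit_year if two_digit_year < 50 else 1900 + two_digit_year


def test_expand_year_window():
    # Edge cases around the pivot are where this design silently breaks
    # once real data crosses the window boundary.
    assert expand_year(0) == 2000
    assert expand_year(49) == 2049
    assert expand_year(50) == 1950
    assert expand_year(99) == 1999
```

Run with `pytest test_dates.py`; the point is less the specific assertions than the discipline of exercising the boundary cases on every iteration.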
2. Consider context of use and misuse.
As seen in the analysis of bridge collapses, failures often result not from faulty designs alone, but from how structures are used, misused, and maintained. A seemingly robust design can fail catastrophically if not properly maintained or if subjected to loads or stresses beyond those anticipated in the design phase. Similarly, AI systems, even if designed with good intentions, can fail or be misused in ways that have negative consequences if not carefully considered.
Practical Application:
When developing AI algorithms, it’s crucial to consider not only the ideal operating conditions but also the potential for misuse, abuse, and unforeseen interactions with users. Designing robust systems requires anticipating edge cases, adversarial attacks, and the possibility of unintended biases emerging from real-world data. Regular audits and ongoing monitoring are essential to detect and address emerging issues.
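A hedged sketch of what anticipating misuse can look like at the code level (the feature names and bounds are hypothetical; in practice they would come from the training-data distribution and domain experts):

```python
# Validate untrusted input before it reaches a model, rather than
# assuming callers behave as the designers intended.

from dataclasses import dataclass


@dataclass
class LoanRequest:
    age: int
    income: float


def validate(req: LoanRequest) -> LoanRequest:
    """Reject inputs outside the ranges the model is known to handle."""
    if not 18 <= req.age <= 120:
        raise ValueError(f"age {req.age} outside supported range")
    if not 0 <= req.income <= 10_000_000:
        raise ValueError(f"income {req.income} outside supported range")
    return req
```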
3. Avoid insularity in teams and organizations.
The Deepwater Horizon oil spill serves as a stark reminder of the role human factors and organizational culture play in engineering disasters. Competing priorities between engineers and managers, communication breakdowns, and a culture that prioritized cost-cutting over safety created a perfect storm for failure. In the high-stakes world of AI, similar organizational dysfunctions can have far-reaching consequences.
Practical Application:
In AI product development, having a diverse team with varying backgrounds and perspectives can help identify potential blind spots and challenge assumptions. Open communication and respectful debate between engineers, managers, and other stakeholders are crucial for making informed decisions and avoiding groupthink. Regularly incorporating external expert reviews can also help uncover hidden biases or vulnerabilities.
4. Embrace cross-disciplinary learning.
Studying seemingly unrelated fields like dentistry and engineering, or civil engineering and computer science, can provide fresh perspectives and reveal common principles underlying failures across different domains. Just as a cracked tooth can lead to a complete fracture over time, so too can a small flaw in a bridge or a software bug escalate into a catastrophic event. The cross-disciplinary study of failures can help identify patterns and anticipate potential issues.
Practical Application:
In the context of AI safety, it’s crucial to not only learn from past failures in AI systems, but also from analogous failures in other domains. Studying bridge collapses, airplane crashes, or software glitches can offer valuable insights into the underlying principles of failure and inform the design of more robust and resilient AI systems. A cross-disciplinary approach to failure analysis can help identify common patterns and develop effective mitigation strategies.
5. Incorporate design for failure.
Sometimes, components are designed to fail in a controlled manner to protect a larger system or prevent a more catastrophic failure. From the canvas roofs of outdoor stages designed to give way in strong winds to fuses that prevent electrical circuits from overloading, engineered systems often incorporate elements intended to fail under specific conditions. AI systems, too, can be designed to fail gracefully, limiting the impact of inevitable errors or malfunctions.
Practical Application:
In AI, incorporating design for failure requires careful consideration of how systems should behave under various failure scenarios. This includes implementing robust error handling, redundancy, fallback mechanisms, and clear communication protocols to ensure that failures are gracefully handled and do not escalate into catastrophic events. It’s crucial to not only design for failures but also to test these failure modes rigorously to validate their effectiveness.
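One common software analogue of the fuse is a fallback wrapper: if the primary model call fails, the system degrades to a simpler, safer default instead of crashing. A minimal sketch, with both model functions as hypothetical placeholders:

```python
import logging

logger = logging.getLogger(__name__)


def primary_model(text: str) -> str:
    # Placeholder for a call to a large, potentially flaky model service.
    raise TimeoutError("upstream model timed out")


def fallback_model(text: str) -> str:
    # Placeholder for a small, well-tested local model or rule.
    return "NEUTRAL"


def classify(text: str) -> str:
    """Fail gracefully: log the primary failure, then serve the fallback.

    Like a fuse, the fallback limits the blast radius of an upstream
    fault; and like a fuse, it must itself be tested under the failure
    scenarios it is meant to absorb.
    """
    try:
        return primary_model(text)
    except Exception:
        logger.exception("primary model failed; using fallback")
        return fallback_model(text)
```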
Suggested Deep Dive
Chapter: Houston, You Have a Problem
This chapter, focusing on the Deepwater Horizon oil spill, is particularly relevant to AI product engineers, as it highlights the complex interplay of technical and human factors in system failures. It offers valuable lessons on the importance of communication, risk management, and organizational culture in ensuring the safety and reliability of complex systems, all of which are directly applicable to the development and deployment of AI systems.
Memorable Quotes
By Way of Concrete Examples, p. 3
Accidents may occur quickly, but they often follow long periods of normal or near-normal behavior.
By Way of Concrete Examples, p. 8
It finally became clear what the real problem was.
Things Happen, p. 31
Every success sows the seeds of failure. Success makes you overconfident.
Designed to Fail, p. 42
The design is us.
Things Happen, p. 46
Fail me once, shame on you; fail me twice, shame on me.
Comparative Analysis
To Forgive Design complements my earlier work, To Engineer Is Human, by broadening the scope of failure analysis beyond mechanical and structural failures to encompass systemic failures and human-machine interactions. It also builds upon the work of other scholars in the field, such as Charles Perrow’s Normal Accidents, which explores the complexities of tightly coupled systems, and Diane Vaughan’s The Challenger Launch Decision, which examines the role of organizational culture in the space shuttle disaster. While To Engineer Is Human focused on the technical aspects of failure, To Forgive Design delves into the human element, aligning with works like Eugene Ferguson’s Engineering and the Mind’s Eye, which highlights the role of nonverbal thinking and intuition in design. The book also distinguishes itself by focusing on the legal and social implications of failure, a dimension often overlooked in technical analyses. Unlike some historical accounts that focus primarily on technological advancements, To Forgive Design integrates the history of engineering with its social and political context, drawing inspiration from works that emphasize the interplay of these factors. Finally, the book’s unique contribution lies in its emphasis on learning from failures, not just to prevent their recurrence but also to foster a deeper understanding of engineering principles and their applications.
Reflection
To Forgive Design offers a valuable perspective on the multifaceted nature of engineering failures, urging readers to move beyond simplistic explanations and consider the interplay of technical, human, and organizational factors. The book’s strength lies in its rich case studies and the author’s ability to draw insightful lessons from them, bridging the gap between technical analysis and real-world implications. A skeptical reader might question the extent to which certain historical failures, especially those involving older technologies, are truly relevant to modern engineering practice. However, Petroski effectively argues that while technologies evolve, the underlying principles of design, and the potential for human error, remain timeless. The book’s focus on American and European examples could also be seen as a limitation, as it neglects failures in other parts of the world that might offer valuable lessons. Despite these minor shortcomings, To Forgive Design stands as a timely reminder of the importance of humility, foresight, and continuous learning in the face of ever-increasing technological complexity. Its insights are particularly pertinent to the field of AI, where the rapid pace of development and the potential for unforeseen consequences make failure analysis and a failure-averse mindset more crucial than ever.
Flashcards
What is the ‘knee-jerk reaction’ often seen in the media following a highly visible failure?
The tendency to attribute failure to the design and its designers, often overlooking other contributing factors.
What is a ‘paradigm of failure analysis’?
A catastrophic event that led to significant changes in how engineers deal with technology.
What atmosphere can prevail in an organization following prolonged success?
Complacency, overconfidence, laxity, and hubris.
What is ‘managed failure’?
A design detail intended to fail under certain conditions to protect a larger system.
According to the author, where is the blame for engineering failures often falsely placed?
On the design itself.
What is the key distinction between theoretical and applied mechanics?
Theoretical mechanics seeks exact answers to approximate problems, while applied mechanics seeks approximate answers to exact problems.
What is ‘factor of safety’?
The ratio of the ultimate load a structure can carry to the actual load it is designed to carry.
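A worked illustration with hypothetical loads: a structure designed to carry 100 kN that can actually sustain 250 kN before failing has

\[
\text{factor of safety} = \frac{P_{\text{ultimate}}}{P_{\text{design}}} = \frac{250\ \text{kN}}{100\ \text{kN}} = 2.5
\]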
What is the timeless aspect of engineering?
The creative and inherently human process of design.
What central principle is illustrated by Galileo's marble column story?
Making any change in a design can introduce a new way for it to fail.
What is metal fatigue?
The phenomenon of a material’s strength decreasing under repeated loading.