Fluke: Chance, Chaos, and Why Everything We Do Matters
Authors: Brian Klaas
Tags: philosophy, complexity, chaos theory, decision-making, science
Publication Year: 2024
Overview
In this book, I dismantle the comforting storybook version of reality we’ve been taught—a world of neat, linear cause-and-effect—and replace it with a more bewildering but ultimately more truthful picture. Our lives and our societies are not governed by grand plans or predictable forces, but are instead shaped by an endless cascade of flukes: small, contingent, and often random events that have monumental consequences. I argue that we live in a deeply [[intertwined world]], where every action, no matter how seemingly insignificant, ripples across the system in unpredictable ways. This is a world of chaos and complexity, where a vacation taken decades ago can save a city from nuclear annihilation, and a single genetic mutation in a pet shop crayfish can alter the ecology of an island nation.

My aim is to challenge the damaging myths of individualism and control that dominate modern thought. For those of you in technology and AI, this is a crucial warning against the hubris of prediction. Our world is not a dataset to be mastered but a [[complex adaptive system]] teetering on the edge of chaos. By understanding the deep-seated roles of contingency, chance, and chaos, we can move beyond the futile quest for certainty and control. Instead, we can embrace a more profound and empowering truth: in a world where everything is connected, we may control nothing, but we influence everything. This perspective doesn’t lead to nihilism, but to a richer, more meaningful existence where every single one of us, and everything we do, truly matters.
Book Distillation
1. Introduction
Our world is not a simple story where X causes Y. Seemingly trivial events—a tourist’s fond memory of a city, a passing cloud—can have life-or-death consequences for hundreds of thousands. This happens because our lives are not linear paths but a constant branching in a ‘Garden of Forking Paths,’ where every small step alters the future for everyone. Our existence is a perpetual seesaw between [[contingency]], where tiny changes produce enormous effects, and [[convergence]], where outcomes are largely inevitable. We are seduced by the comforting logic of convergence, but the truth is that our world is far more contingent, and therefore more interesting, than we dare to imagine.
Key Quote/Concept:
[[Contingency vs. Convergence]]. These are two opposing ways of viewing the world. Contingency is the ‘stuff happens’ theory, where small, random events can drastically alter outcomes (like an asteroid wiping out the dinosaurs). Convergence is the ‘everything happens for a reason’ idea, where outcomes are inevitable because nature finds the same effective solutions to the same problems (like the independent evolution of complex eyes in humans and squids).
2. Changing Anything Changes Everything
The idea that we are independent individuals in full control of our destinies is the defining lie of our time: the [[delusion of individualism]]. In reality, we are all caught in an ‘inescapable network of mutuality,’ where the smallest actions of people we will never meet can determine our fates. This profound [[interconnectedness]] means our world is a chaotic system, sensitive to initial conditions and therefore fundamentally unpredictable. This reality flips our individualist worldview on its head, revealing a potent, astonishing fact: we control nothing, but influence everything.
Key Quote/Concept:
[[The Overview Effect]]. This is the cognitive shift reported by astronauts who see Earth from space. The view shatters the illusion of borders and separateness, revealing a single, interconnected system. It’s a visceral experience of the book’s central message that individualism is a mirage and connection defines us.
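The claim above that chaotic systems are ‘sensitive to initial conditions’ can be made concrete with a classic toy model. The sketch below is my own illustration (it does not appear in the book): it iterates the logistic map in its chaotic regime and shows two trajectories that begin one part in a billion apart rapidly becoming completely different.

```python
# Sensitive dependence on initial conditions (the "butterfly effect"),
# illustrated with the logistic map x -> r*x*(1 - x) at r = 4, where
# the map is chaotic. Illustrative toy model, not from the book.

def trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from initial condition x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-9)  # a one-in-a-billion perturbation

# The two trajectories start indistinguishable, then decorrelate:
# the gap roughly doubles each step until it saturates.
divergence = [abs(x - y) for x, y in zip(a, b)]
print(f"gap at step 5:  {divergence[5]:.2e}")
print(f"gap at step 40: {divergence[40]:.2e}")
```

Within roughly thirty iterations the two futures share nothing, which is why long-range prediction in such systems fails no matter how precise the measurement.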
3. Everything Doesn’t Happen for a Reason
Major events in history and our own lives are not driven by a grand plan, but by contingency. The very existence of complex life is owed to a one-in-a-billion microbial merger two billion years ago. We systematically downplay the role of luck, preferring comforting myths of meritocracy, yet extreme success is often the product of a random lightning bolt of luck striking average talent. Our world operates under a principle of [[contingent convergence]]: it appears stable and orderly until a small, random event creates a jolt, revealing its true chaotic nature. Randomness isn’t just ‘noise’ to be ignored; it’s a fundamental engine of change.
Key Quote/Concept:
[[Contingent Convergence]]. This is the principle, demonstrated by the Long-Term Evolution Experiment with E. coli, that change is broadly convergent until it’s not. For over a decade, twelve genetically identical E. coli populations evolved in broadly similar ways; then one population, through a series of improbable contingent mutations, suddenly developed the ability to metabolize a new food source (citrate), radically diverging from the others. Our lives and societies follow this same pattern.
4. Why Our Brains Distort Reality
Our brains are not designed to perceive objective reality. They evolved for survival, making us ‘Shortcut Creatures,’ not ‘Truth Creatures.’ Our perception of the world is a simplified, useful illusion—like a computer’s desktop interface—that hides the staggering complexity underneath. This evolutionary design hardwires us with cognitive biases. We are pattern-detection machines that are allergic to randomness, so we invent causes, narratives, and superstitions to create order from chaos. This [[teleological bias]]—the Cult of Because—makes us wrongly dismiss flukes as unimportant.
Key Quote/Concept:
[[Fitness Beats Truth Theorem]]. This theorem posits that natural selection does not favor organisms that see the world as it truly is, but rather those that see it in a way that is most useful for survival and reproduction. Our minds evolved to create a simplified, distorted ‘map’ of reality because the full ‘territory’ would be computationally overwhelming and paralyzing.
5. The Human Swarm
Modern human society functions as a [[complex adaptive system]], much like a locust swarm. This creates the [[paradox of the swarm]]: on a micro level, our lives are more ordered and predictable than ever (local stability), yet on a macro level, our global system is more fragile and prone to sudden, unpredictable shocks than ever before (global instability). Our hyper-connected, obsessively optimized world exists in a state of [[self-organized criticality]], constantly teetering on the ‘edge of chaos.’ Like a sandpile where a single grain can trigger a massive avalanche, our society is primed for tiny flukes to cause catastrophic cascades, or Black Swans.
Key Quote/Concept:
[[Self-Organized Criticality]]. This is a property of complex systems to naturally evolve toward a critical state where a minor disturbance can lead to a chain reaction of any size. It explains why our seemingly stable, ordered world is so frequently blindsided by massive, unpredictable events like financial crises, pandemics, and social revolutions. These aren’t external shocks; they are the inevitable outcome of the system’s design.
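The sandpile metaphor comes from a real, easily simulated model: the Bak–Tang–Wiesenfeld sandpile. The sketch below is an illustrative toy of my own, not code from the book. Grains drop one at a time on a small grid; any cell holding four grains topples, shedding one grain to each neighbour and sometimes triggering a cascade. Identical perturbations produce wildly unequal avalanches.

```python
import random

# Minimal Bak-Tang-Wiesenfeld sandpile. Each cell holds grains; a cell
# with 4+ grains topples, sending one grain to each of its four
# neighbours (grains falling off the edge dissipate). One toppling can
# destabilize neighbours, producing an avalanche of any size.

N = 20  # grid side length

def drop_grain(grid):
    """Add one grain at a random site; return the avalanche size."""
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1
    topples = 0
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        while grid[x][y] >= 4:
            grid[x][y] -= 4
            topples += 1
            for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                if 0 <= nx < N and 0 <= ny < N:  # edge grains dissipate
                    grid[nx][ny] += 1
                    stack.append((nx, ny))
    return topples

random.seed(0)
grid = [[0] * N for _ in range(N)]
sizes = [drop_grain(grid) for _ in range(10000)]

# Identical grains, wildly unequal consequences: most drops do nothing,
# a few trigger grid-spanning cascades.
print("largest avalanche:", max(sizes))
print("share of drops causing no topple:", sizes.count(0) / len(sizes))
```

No parameter is tuned to a critical value; the pile drives itself to the critical state, which is exactly the ‘self-organized’ part of the concept.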
6. Heraclitus Rules
We misuse probability by failing to distinguish between two fundamentally different concepts. [[Risk]] applies to closed systems with stable, knowable odds, like a dice roll. [[Uncertainty]], however, applies to open, complex systems where the underlying rules are themselves changing. Most of the important questions we face—in politics, economics, and our own lives—exist in the ‘Land of Heraclitean Uncertainty,’ where the past is not a reliable guide to the future. Applying probabilistic models here creates a dangerous [[illusion of control]], as we mistake untamable chaos for tamable chance.
Key Quote/Concept:
[[Heraclitean Uncertainty]]. Named for the philosopher who said we never step in the same river twice, this refers to uncertainty that arises because the system itself is non-stationary and constantly changing. Probabilistic forecasts based on past data fail here because the fundamental patterns of cause and effect are morphing over time.
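The difference between tamable risk and Heraclitean uncertainty can be sketched in a few lines. In the toy model below (my illustration, not the book’s), a forecaster predicts each observation from the historical average; it works well while the generating process is stationary and degrades badly once the process’s own rules shift.

```python
import random

# Risk vs. uncertainty, sketched. A forecaster that predicts tomorrow
# from the historical average does fine when the process is stationary
# ("risk"), and fails when the underlying rules change mid-stream
# ("Heraclitean uncertainty"). Illustrative toy, not from the book.

random.seed(1)

def observe(t):
    """A process whose own rules change at t = 500 (a regime shift)."""
    mean = 0.0 if t < 500 else 5.0
    return random.gauss(mean, 1.0)

history, errors_before, errors_after = [], [], []
for t in range(1000):
    forecast = sum(history) / len(history) if history else 0.0
    actual = observe(t)
    (errors_before if t < 500 else errors_after).append(abs(forecast - actual))
    history.append(actual)

print("mean error, stable regime:  %.2f" % (sum(errors_before) / 500))
print("mean error, shifted regime: %.2f" % (sum(errors_after) / 500))
```

The forecaster’s very reliance on accumulated history is what makes it slow to notice that the river it is standing in is no longer the same river.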
7. The Storytelling Animal
Humans are not the rational actors of economic models; we are storytelling animals. Our brains are hardwired with a [[narrative bias]] that instinctively connects dots to form a coherent story, even when none exists. These stories are not mere interpretations of reality; they are powerful causal forces that shape our beliefs and drive our actions, capable of becoming self-fulfilling prophecies that trigger wars or recessions. Our models of the world systematically ignore this, pretending our actions are dictated by rationality, not narratives, thereby missing the primary engine of human behavior.
Key Quote/Concept:
[[Narrative Economics]]. An idea championed by Robert Shiller, it posits that major economic events are driven not by objective data alone, but by the viral spread of popular stories and narratives. A story about a coming recession can cause people and businesses to cut spending, thereby causing the very recession the story predicted.
8. The Lottery of Earth
Where something happens is as important as what happens. Human history is profoundly shaped by the [[lottery of earth]]—the arbitrary distribution of continents, mountains, rivers, and resources. These geographical facts create [[path dependency]], where early human choices about how to interact with the landscape constrain future possibilities for millennia. The most powerful force is [[human space-time contingency]], where inert geological facts become historically consequential only through their interaction with human civilization at a specific moment in time, linking ancient phytoplankton to modern election results.
Key Quote/Concept:
[[Human Space-Time Contingency]]. This concept describes how inert geographical facts become drivers of change only when they interact with human civilization at a specific time. For example, the rich soil left by an ancient inland sea in the American South became a major historical force only when the Industrial Revolution created demand for cotton; cotton plantations concentrated slavery along that fertile band, producing a demographic legacy that still influences modern voting patterns in those same areas.
9. Everyone’s a Butterfly
In a chaotic, intertwined world, every single person is constantly changing history. The ‘Great Man’ and ‘social forces’ views of history are both wrong. Because of the [[non-identity problem]]—the fact that any tiny change in behavior can alter which people are born—who does something is as important as what they do. The same idea, proposed by a different person, can have a radically different trajectory. Each of us, through our unique existence and actions, creates our own butterfly effect. You matter. That’s not self-help advice. It’s scientific truth.
Key Quote/Concept:
[[The Cassandra Problem]]. Named for the Greek myth, this is the idea that the messenger can be as important as the message. An idea’s validity is not enough; its reception depends on the credibility and identity of the person who proposes it. The theory of evolution, for example, was accepted far more readily coming from the well-respected Charles Darwin than it would have been from his lesser-known contemporary Alfred Russel Wallace, an adherent of spiritualism.
10. Of Clocks and Calendars
When something happens is a primary source of contingency. Our lives are a ‘Garden of Forking Paths,’ where every moment is a branching point, and unrelated causal chains can converge with life-altering consequences ([[Cournot contingency]]). Our very measurement of time—the seven-day week, the names of the months—is an arbitrary product of historical accidents. Sometimes, these accidents become permanent through [[lock-in]], where an early, random choice (like the VHS format or a specific dog breed) becomes standardized and resistant to change, demonstrating that some flukes have staying power.
Key Quote/Concept:
[[Lock-in]]. A concept from complexity economics where an early, often arbitrary, contingent event becomes difficult or impossible to reverse due to increasing returns. The QWERTY keyboard layout is a classic example. It was not designed for efficiency, but once it became the standard, the costs of switching to a better layout became too high, locking us into a suboptimal path.
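Lock-in through increasing returns has a standard toy model from complexity economics: the Pólya urn. In the sketch below (my own illustration, not the book’s example), each new adopter chooses between two rival standards with probability proportional to current adoption, so an essentially random early lead tends to become permanent.

```python
import random

# Lock-in via increasing returns, sketched as a Polya urn: every new
# adopter picks a standard with probability proportional to how many
# have already adopted it, so early random luck compounds into a
# durable standard. Illustrative toy model, not from the book.

def run(rng, adopters=1000):
    """Simulate one adoption race; return (early leader, final leader)."""
    counts = [1, 1]  # two rival standards, one seed adopter each
    early_leader = None
    for n in range(adopters):
        pick = 0 if rng.random() < counts[0] / (counts[0] + counts[1]) else 1
        counts[pick] += 1
        if n == 9:  # note who leads after only 10 adoptions
            early_leader = 0 if counts[0] > counts[1] else 1
    return early_leader, 0 if counts[0] > counts[1] else 1

rng = random.Random(7)
results = [run(rng) for _ in range(1000)]
persisted = sum(1 for early, final in results if early == final)
print("early leader still dominant at the end: %.0f%%"
      % (100 * persisted / len(results)))
```

The leader after ten adoptions is usually still the leader after a thousand, even though nothing about either standard is intrinsically better: the fluke is in the first few draws, and increasing returns do the rest.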
11. The Emperor’s New Equations
Understanding human society is harder than rocket science because it has a ‘Hard Problem’ that is likely unsolvable. Social science research, our modern oracle, often creates a fun-house mirror of reality. It suffers from an ‘Easy Problem’ of flawed methods (like P-hacking and publication bias) and a ‘Hard Problem’ of inherent uncertainty. Our obsession with quantification leads to [[mathiness]]—using complex equations to create a veneer of certainty that obscures flawed assumptions and ignores the outliers and flukes that are often the most important drivers of change.
Key Quote/Concept:
[[The Hard Problem of Social Research]]. This is the argument that even with perfect methods, human society is fundamentally unpredictable because 1) different researchers will always make different choices and get different results from the same data; 2) the world itself is constantly changing, so theories become obsolete; and 3) we only have one history to observe, making it impossible to distinguish a freak event from an inevitable one.
12. Could It Be Otherwise?
The question of whether we can alter our life’s script confronts the debate between determinism and free will. The scientific consensus suggests that either the universe is deterministic (every event is caused by prior events) or it is indeterministic only due to true quantum randomness. Neither view supports [[libertarian free will]]—the intuitive feeling that a ‘ghost in the machine’ makes choices independent of the physical brain. While this is unsettling, it leads to an awe-inspiring conclusion: we are the contingent culmination of 13.7 billion years of flukes. Every action we take is another thread in the deterministic tapestry, shaping what is to come.
Key Quote/Concept:
[[Libertarian Free Will]]. This is the common-sense, intuitive belief that we are the independent authors of our thoughts and can freely choose to ‘do otherwise’ at any given moment, unconstrained by the causal chain of physics. This idea, while deeply felt, is incompatible with our scientific understanding of the universe, which suggests our choices are the product of the physical state of our brains.
13. Why Everything We Do Matters
The modern despair of meaninglessness comes from a futile obsession with controlling an uncontrollable world. The path to a better life is to abandon this quest for certainty and instead live ‘wonder-smitten with reality.’ We must learn to distinguish between ‘rubber problems’ (stable, optimizable systems) and ‘rice problems’ (complex, uncertain systems). For the rice problems that define most of our lives, the best strategy is not ruthless efficiency but embracing uncertainty through [[exploration and experimentation]]. By letting go of control, we find a more profound truth: we may control nothing, but we influence everything, and that makes our existence deeply meaningful.
Key Quote/Concept:
[[Explore vs. Exploit]]. This is a fundamental trade-off. ‘Exploiting’ means using a known, reliable strategy to achieve a predictable outcome. ‘Exploring’ means trying something new with an uncertain outcome. In a complex, ever-changing world (‘rice problems’), over-reliance on exploitation is dangerous. We must constantly explore, embrace randomness, and build in slack to foster resilience and discover novel solutions.
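The explore/exploit trade-off is usually formalized as a multi-armed bandit. The sketch below is a standard illustration of that framing, not code from the book: it compares a purely greedy agent, which locks onto whichever arm pays off first, against an epsilon-greedy agent that keeps experimenting.

```python
import random

# Two-armed bandit. A purely greedy agent (epsilon = 0) commits to the
# first arm that ever pays out and often gets stuck on the worse one.
# An epsilon-greedy agent keeps exploring and converges on the better
# arm. Illustrative sketch of the explore/exploit trade-off.

ARMS = [0.3, 0.7]  # true payout probabilities; arm 1 is better

def play(epsilon, rng, pulls=2000):
    """Return the average reward per pull for one agent."""
    wins, tries = [0, 0], [1, 1]
    total = 0
    for _ in range(pulls):
        est = [wins[i] / tries[i] for i in range(2)]
        if rng.random() < epsilon or est[0] == est[1]:
            arm = rng.randrange(2)                 # explore: random arm
        else:
            arm = 0 if est[0] > est[1] else 1      # exploit: best so far
        reward = 1 if rng.random() < ARMS[arm] else 0
        wins[arm] += reward
        tries[arm] += 1
        total += reward
    return total / pulls

rng = random.Random(3)
greedy = sum(play(0.0, rng) for _ in range(200)) / 200
curious = sum(play(0.1, rng) for _ in range(200)) / 200
print("pure exploitation: %.3f" % greedy)
print("10%% exploration:   %.3f" % curious)
```

Averaged over many runs, the agent that ‘wastes’ 10% of its pulls on random arms earns more than the pure optimizer, because exploration is what prevents lock-in on an early, lucky, but inferior choice.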
Generated using Google GenAI
Essential Questions
1. What is the central argument of ‘Fluke’ against the conventional ‘storybook’ understanding of reality?
My central argument is that the simple, linear cause-and-effect stories we tell ourselves are comforting lies. Reality is not a neat narrative but a chaotic and deeply intertwined system governed by ‘flukes’—small, contingent events with massive consequences. I challenge the dominant worldview of [[convergence]], the idea that outcomes are inevitable and everything happens for a reason. Instead, I argue that our world is overwhelmingly shaped by [[contingency]], where tiny, random perturbations can radically alter the future. This is not a world of independent individuals controlling their destinies, but an ‘inescapable network of mutuality’ where every action ripples across the system in unpredictable ways. For an AI engineer, this is a fundamental critique of predictive hubris. The world is not a stable dataset to be mastered but a [[complex adaptive system]] teetering on the edge of chaos. Understanding this moves us from a futile quest for control to a more profound realization: we control nothing, but we influence everything, which makes our actions deeply meaningful.
2. How do human cognitive biases and modern societal structures conspire to create a misleading sense of order and predictability?
Our brains are not ‘Truth Creatures’; they are ‘Shortcut Creatures.’ According to the [[Fitness Beats Truth Theorem]], we evolved to perceive a simplified, useful illusion of reality, not objective truth. This hardwires us with cognitive biases, particularly a [[teleological bias]]—the ‘Cult of Because’—that makes us invent reasons and narratives to explain away randomness. We are storytelling animals who instinctively connect dots, even when no connection exists. Concurrently, modern society has become a ‘human swarm,’ a [[complex adaptive system]] that creates the [[paradox of the swarm]]: unprecedented order and predictability on a micro-level (local stability) masks extreme fragility and unpredictability on a macro-level (global instability). We mistake the illusion of control derived from our ordered daily lives for mastery over the entire system. This system, however, exists in a state of [[self-organized criticality]], like a sandpile where a single grain can trigger a massive avalanche. Our cognitive need for order and the apparent stability of our hyper-optimized world create a dangerous mirage of regularity, blinding us to the chaotic reality that tiny flukes constantly shape our world.
3. What are the practical and philosophical implications of embracing a world driven by contingency and chaos?
Embracing this worldview is not a call to nihilism but a path to a more meaningful existence and more resilient systems. Philosophically, it means abandoning the damaging [[delusion of individualism]] and the futile obsession with control. When we accept that we are part of an intertwined causal web, the fact that we ‘control nothing, but influence everything’ becomes empowering. Every action matters. Practically, especially for those in technology, this demands intellectual humility. We must distinguish between predictable ‘rubber problems’ (closed, optimizable systems) and ‘rice problems’ (complex, uncertain systems). For the latter, which constitute most of the important challenges we face, probabilistic models based on past data are dangerously misleading due to [[Heraclitean Uncertainty]]. The correct strategy is not ruthless optimization, which pushes systems to the fragile edge of chaos, but building in slack and embracing [[exploration and experimentation]]. This means valuing trial-and-error, fostering resilience over brittle efficiency, and recognizing that in a complex world, the best solutions are often discovered, not engineered.
Key Takeaways
1. The world is a Complex Adaptive System, not a complicated but predictable machine.
A key mistake we make is confusing ‘complicated’ with ‘complex.’ A Swiss watch is complicated; it has many parts, but its behavior is predictable and its components do not adapt. Modern society, however, is a [[complex adaptive system]]. It consists of diverse, interconnected agents who are constantly adapting to one another and their environment. This creates emergent properties, feedback loops, and tipping points that make the system’s macro behavior fundamentally unpredictable, even if its micro-level rules are simple. This is the [[paradox of the swarm]]: our lives feel more ordered than ever, yet our global system is more fragile and prone to ‘Black Swan’ events. This is because our hyper-connected, optimized world exists in a state of [[self-organized criticality]], constantly teetering on the ‘edge of chaos,’ where a tiny fluke can trigger a catastrophic cascade. Understanding this distinction is crucial to avoid being blindsided by the inevitable avalanches of our social sandpile.
Practical Application: An AI product engineer building a recommendation engine should recognize they are not just optimizing a complicated algorithm but intervening in a complex adaptive system of user tastes and social trends. Instead of solely optimizing for a single metric like ‘engagement,’ which could lead to brittle, easily-gamed outcomes (like outrage-driven content), they should build in mechanisms for [[exploration and experimentation]]. This could mean intentionally introducing diverse or novel content (exploring) alongside personalized recommendations (exploiting) to create a more resilient and adaptive system that is less prone to catastrophic feedback loops, like political polarization or misinformation spirals.
2. Our brains are ‘Shortcut Creatures’ with a powerful narrative bias; we impose simple stories on complex reality.
Our minds did not evolve to perceive objective reality but to make survival-oriented shortcuts. This is the core of the [[Fitness Beats Truth Theorem]]. One of our most powerful shortcuts is the [[narrative bias]]: an innate tendency to connect disparate events into a coherent cause-and-effect story. We are storytelling animals, not rational actors. As I demonstrate, these stories are not just interpretations; they are causal forces. A viral narrative about a recession can become a self-fulfilling prophecy, an idea central to [[Narrative Economics]]. This cognitive feature makes us allergic to randomness and contingency; we dismiss flukes and invent reasons, creating a fun-house mirror of reality that feels orderly and meaningful. We systematically ignore that small, accidental perturbations are often the true drivers of change, preferring to believe that big events must have big, clear causes.
Practical Application: When conducting user research or analyzing product data, an AI product engineer must be hyper-aware of this bias. Users will often construct a plausible but incorrect story to explain their behavior (‘I clicked this because…’). The engineer must look for the underlying flukes and contextual factors, not just the user’s post-hoc rationalization. When designing an AI product, instead of presenting complex probabilistic outputs, framing the AI’s function within a simple, intuitive narrative can dramatically increase user adoption and trust. For example, a health AI could frame its advice not as statistical correlations but as a story: ‘To help your body build a stronger defense for the winter, let’s try…’.
3. To navigate an uncertain world, we must distinguish ‘rubber problems’ from ‘rice problems’ and embrace exploration.
The futile quest for control stems from misdiagnosing the nature of the problems we face. I distinguish between ‘rubber problems’—stable, closed systems with knowable rules where optimization is effective (like baseball analytics)—and ‘rice problems’—open, complex, and uncertain systems where the underlying dynamics are constantly changing (like farming in a volatile climate). Most of our important challenges, from personal careers to global economics, are ‘rice problems’ governed by [[Heraclitean Uncertainty]]. Applying an optimization-focused, ‘rubber problem’ mindset here is dangerous; it creates brittle systems with no slack, pushing them to the edge of chaos. The correct approach for ‘rice problems’ is to balance ‘exploitation’ (using known strategies) with ‘exploration’ (trying new things). As the Kantu people of Borneo demonstrate with their randomized farming strategy, embracing uncertainty and diversifying through experimentation is the key to resilience and long-term success in an unpredictable world.
Practical Application: An AI product team deciding on its roadmap faces this trade-off. Improving the core feature set based on existing user data is an ‘exploit’ strategy (a ‘rubber problem’). It’s safe and delivers predictable, incremental gains. However, dedicating a portion of the budget to ‘explore’—testing novel, high-risk features with uncertain outcomes, or even funding basic research with no immediate application—is crucial for long-term survival. The work on mRNA vaccines was pure exploration for decades before it suddenly became the most important ‘exploit’ in the world. A product strategy that only exploits its current local maximum will eventually be disrupted by a competitor who explored and found a higher peak.
Suggested Deep Dive
Chapter: Chapter 5: The Human Swarm
Reason: This chapter is the most critical for an AI product engineer as it provides the core mental model for understanding the modern world. It explains why our hyper-connected, optimized society behaves like a [[complex adaptive system]] teetering on the ‘edge of chaos.’ The concepts of the [[paradox of the swarm]], [[self-organized criticality]], and the illusion of stability are essential for anyone building products that operate at scale, as they reveal why seemingly stable systems can experience sudden, catastrophic failures or ‘Black Swans’ from tiny, unpredictable flukes.
Key Vignette
The Secretary’s Pet City
In 1945, the US Target Committee selected Kyoto as the primary target for the first atomic bomb. However, Secretary of War Henry Stimson, remembering a pleasant vacation he and his wife took there in 1926, vehemently and repeatedly objected. Despite the generals’ insistence that Kyoto was a vital military target, Stimson went directly to President Truman and had his ‘pet city’ removed from the list. This single, contingent fluke—a decades-old fond memory—spared over one hundred thousand lives in Kyoto and redirected the first bomb to Hiroshima; days later, cloud cover over the second bomb’s primary target, Kokura, diverted it to Nagasaki.
Memorable Quotes
We control nothing, but influence everything.
— Page 30, Chapter 2: Changing Anything Changes Everything
To us, the world appears convergent, until we realize, with a jolt, that it isn’t.
— Page 52, Chapter 3: Everything Doesn’t Happen for a Reason
Our species is a devoted disciple of the Cult of Because.
— Page 64, Chapter 4: Why Our Brains Distort Reality
Modern humans live in the most ordered societies that have ever existed, but our world is also more prone to disarray and disorder than any other social environment in the history of humanity.
— Page 76, Chapter 5: The Human Swarm
You matter. That’s not self-help advice. It’s scientific truth.
— Page 139, Chapter 9: Everyone’s a Butterfly
Comparative Analysis
My work in ‘Fluke’ stands on the shoulders of giants like Nassim Nicholas Taleb and Daniel Kahneman, but takes their ideas in a different philosophical direction. Taleb’s ‘The Black Swan’ brilliantly diagnoses the role of rare, high-impact events in a world of ‘fat tails’ and critiques our misuse of probabilistic risk models. I build on this by arguing that these ‘Black Swans’ are not just statistical artifacts but the natural outcome of [[self-organized criticality]] in our deeply intertwined world. While Taleb focuses on how to be robust to uncertainty, I focus on why that uncertainty is the fundamental texture of reality and a source of meaning. Similarly, Kahneman’s ‘Thinking, Fast and Slow’ meticulously documents the cognitive biases—the ‘shortcuts’—that lead to irrationality. I place these biases within a broader evolutionary context using the [[Fitness Beats Truth Theorem]], arguing they are not just flaws but features of a mind designed for survival, not truth. My unique contribution is to synthesize these insights from complexity science, cognitive psychology, and evolutionary biology into a coherent worldview that challenges the [[delusion of individualism]] and finds profound meaning not in controlling our world, but in our inescapable influence upon it.
Reflection
In writing ‘Fluke,’ my goal was to dismantle the comforting but dangerous illusion of a predictable, controllable world. The book’s strength lies in its synthesis, weaving together stories and concepts from chaos theory, evolutionary biology, history, and philosophy to build a single, cohesive argument: our lives are governed by contingency, and in our deeply [[intertwined world]], every small action matters. For professionals in fields like AI, who are actively building the infrastructure of our future society, this is a crucial warning against the hubris of optimization and prediction. However, a skeptical reader might argue that my focus on contingency underplays the powerful, structural forces—capitalism, geopolitics, inequality—that create strongly convergent outcomes. While a fluke might change who wins an election, these larger structures ensure the game remains largely the same. This is a valid tension. Furthermore, my deep dive into determinism and the non-existence of [[libertarian free will]] may feel, to some, like a philosophical detour. Yet, I believe it is essential for confronting the ultimate source of contingency. The book’s ultimate significance is not to foster despair in an uncontrollable world, but to inspire awe and a sense of responsibility. If we control nothing but influence everything, then the moral weight of our actions becomes heavier, and our existence, a product of 13.7 billion years of flukes, becomes infinitely more precious.
Flashcards
Card 1
Front: Define [[Contingency vs. Convergence]].
Back: Two opposing worldviews. Contingency: Small, random events can drastically alter outcomes (‘stuff happens’). Convergence: Outcomes are largely inevitable as systems find similar solutions to similar problems (‘everything happens for a reason’). The book argues our world is far more contingent than we believe.
Card 2
Front: What is [[Self-Organized Criticality]]?
Back: A property of complex systems to naturally evolve to a ‘critical’ state where a minor disturbance can trigger a chain reaction of any size. It’s exemplified by a sandpile, where one grain of sand can cause a massive avalanche. It explains why our seemingly stable world is prone to sudden, unpredictable shocks (Black Swans).
Card 3
Front: What is [[Heraclitean Uncertainty]]?
Back: Uncertainty that arises because the system itself is non-stationary and constantly changing (named for Heraclitus: ‘you can’t step in the same river twice’). Probabilistic models based on past data fail here because the fundamental rules of cause and effect are morphing over time.
Card 4
Front: Explain the [[Fitness Beats Truth Theorem]].
Back: The theorem from evolutionary game theory stating that natural selection favors organisms that perceive reality in a way that is most useful for survival and reproduction, not those that perceive objective truth. Our brains evolved to be ‘Shortcut Creatures,’ not ‘Truth Creatures.’
Card 5
Front: What is the [[Paradox of the Swarm]]?
Back: The state of modern society where we experience immense order and predictability on a micro-level ([[local stability]]), yet the global system is more fragile and prone to sudden, unpredictable shocks than ever before ([[global instability]]).
Card 6
Front: What is [[Contingent Convergence]]?
Back: The principle that systems appear stable and evolve in predictable, convergent ways for long periods, until a small, random, contingent event (or series of events) creates a jolt, causing a radical and unpredictable divergence. This was demonstrated by the Long-Term Evolution Experiment with E. coli.
Card 7
Front: What is the core idea of [[Narrative Economics]]?
Back: The theory that major economic events are driven not by objective data and rational actors alone, but by the viral spread of popular stories. These narratives can become self-fulfilling prophecies, causing the very outcomes they predict (e.g., a story about a recession causing people to cut spending, thereby triggering one).
Card 8
Front: Distinguish between ‘rubber problems’ and ‘rice problems.’
Back: ‘Rubber problems’ are stable, closed systems where optimization is an effective strategy. ‘Rice problems’ are open, complex, uncertain systems where optimization is dangerous and the best strategy involves experimentation, slack, and building resilience.