Brainchildren: Essays on Designing Minds

Author: Daniel C. Dennett

Overview

Brainchildren: Essays on Designing Minds is a collection of essays exploring a wide range of topics in the philosophy of mind, artificial intelligence, and cognitive science. Written for a philosophically sophisticated audience, but with an eye towards interdisciplinary engagement, the book aims to illuminate the nature of mind, meaning, and consciousness by drawing on insights from computer science, neuroscience, psychology, and evolutionary biology. I argue that traditional approaches in the philosophy of mind are often hampered by outdated Cartesian assumptions and a lack of attention to the details of how the brain actually works.

I introduce the “intentional stance” as a powerful tool for predicting and explaining behavior, emphasizing its role in both human and animal cognition. I also discuss the challenges and opportunities presented by artificial intelligence, arguing that while current AI systems are far from achieving human-level intelligence, they can serve as valuable “prosthetic aids” to philosophical thought experiments. Several essays are devoted to debunking common misconceptions about consciousness and qualia, arguing that traditional views are often based on misguided intuitions and a failure to appreciate the distributed, dynamic nature of cognitive processes.

Throughout the book, the emphasis is on developing a naturalistic, scientifically informed approach to understanding the mind, one that avoids the pitfalls of both Cartesian dualism and simplistic behaviorism. The book’s relevance to contemporary debates in AI and cognitive science lies in its challenge to traditional assumptions and its emphasis on the importance of considering the brain as a product of evolution and a complex, dynamic system. The essays offer insights into the nature of intelligence, the role of language in thought, and the challenges of designing truly intelligent machines.

Book Outline

1. Can Machines Think?

The Turing Test, while not a practical tool for assessing individual computer programs, sets a high bar for artificial intelligence, emphasizing natural language processing and common sense reasoning as essential aspects of thinking. A computer that can consistently fool a judge in this test would be considered, for all practical purposes, a thinking entity. However, the true value of the test is philosophical: it forces us to grapple with the nature of thinking itself, and challenges species-chauvinistic or anthropocentric views about intelligence.

Key concept: The Turing Test, in its purest, strictest form, is a test of whether a computer can successfully play the imitation game by engaging in intelligent conversation that is indistinguishable from that of a human.

2. Speaking for Our Selves

Multiple Personality Disorder, or MPD, is a real phenomenon, albeit one shrouded in controversy and prone to misrepresentation. MPD patients exhibit distinct personalities or “alters” that take turns controlling behavior, often with amnesia between these shifts. This raises important philosophical questions about the nature of the self and the unity of consciousness. While child abuse is strongly correlated with MPD, the role of the therapist in creating or exacerbating the disorder through suggestion is a legitimate concern. Nevertheless, MPD highlights the plasticity of selfhood and the possibility of internal fragmentation in the face of psychological trauma.

Key concept: Multiple Personality Disorder (MPD) is not a case of multiple selves inhabiting a single body, but rather a disorder of the self, where different “Heads of Mind” emerge, each with its own set of memories, behaviors, and ways of presenting to the world.

3. Do-It-Yourself Understanding

Understanding is not a process of an inner self transforming information into content, but rather a distributed, evolving process. Drawing on Dretske’s work, this chapter argues against the idea of ‘do-it-yourself understanding’. Meanings don’t have direct causal power in the brain, but their presence in a system can be part of a causal explanation for how that system produces adaptive behavior.

Key concept: If you take care of the syntax, the semantics will take care of itself.

4. Two Contrasts: Folk Craft versus Folk Science, and Belief versus Opinion

Folk psychology should be viewed as a craft, like folk physics, rather than a strict theory. While our intuitive understanding of how minds work can be misleading, folk psychology is a powerful tool for prediction, honed by evolution and culture. Opinions, unlike beliefs, are linguistically infected and play a distinct role in human cognition. Connectionist models, while promising, need further development to fully capture the dynamic complexities of human memory and thought.

Key concept: Folk psychology is a craft, not a theory. The theory of folk psychology is the ideology about the craft, and there is lots of room, as anthropologists will remind us, for false ideology.

5. Real Patterns

Real patterns exist not only in physical phenomena but also in the behavior of agents. These patterns can be discerned through more efficient descriptions than brute enumeration, much like compressing a digital image. The intentional stance is a way of recognizing real patterns in behavior, even though it involves idealization and is vulnerable to misinterpretation. Different interpretations of the same behavior, like different compression algorithms for the same image, can coexist without one being definitively “truer” than the others.

Key concept: Real patterns exist in data if there is a description of the data that is more efficient than a simple ‘bit map.’ This applies to both physical patterns and the patterns we discern in behavior when we take the intentional stance.
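The compression analogy can be made concrete with a toy sketch (my illustration, not an example from the book): a run-length encoder, one of the simplest compressors, gives a patterned bit string a description far shorter than its bit map, while a noisy string resists compression.

```python
import random

def run_length_encode(bits: str) -> list[tuple[str, int]]:
    """Compress a bit string into (symbol, run_length) pairs."""
    runs: list[tuple[str, int]] = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)
        else:
            runs.append((b, 1))
    return runs

def description_length(runs: list[tuple[str, int]]) -> int:
    # Rough cost of the compressed description: one character for the
    # symbol plus the decimal digits of each run length.
    return sum(1 + len(str(n)) for _, n in runs)

random.seed(0)  # reproducible "noise"
patterned = "0" * 500 + "1" * 500                          # a real pattern
noisy = "".join(random.choice("01") for _ in range(1000))  # no pattern

print(description_length(run_length_encode(patterned)))  # 8: far below the 1000-bit map
print(description_length(run_length_encode(noisy)))      # near 1000 or more: no savings
```

On this analogy, the patterned string contains a real pattern precisely because a description shorter than the bit map exists; the intentional stance plays the role of the compression algorithm for behavioral data.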

6. Julian Jaynes’s Software Archeology

Julian Jaynes’s controversial theory of the bicameral mind proposes that consciousness emerged relatively recently in human history, arising from the breakdown of a more primitive mentality based on auditory hallucinations. While many of Jaynes’s specific historical claims are questionable, his overall approach, emphasizing a top-down perspective and speculative story-telling constrained by available evidence, is valuable.

Key concept: We first have to start from the top, from some conception of what consciousness is, from what our own introspection is.

7. Real Consciousness

Consciousness is not a single, unified phenomenon located in a specific part of the brain. The Multiple Drafts Model views consciousness as a distributed, ongoing process of micro-takings that interact and compete for dominance. Only those contents that persist long enough to have certain effects on memory, behavior, and other cognitive processes are “conscious.”

Key concept: Consciousness is cerebral celebrity—nothing more and nothing less.

8. Instead of Qualia

Qualia, the purported intrinsic, private, qualitative properties of experience, are an illusion. The way things look, sound, or feel to us are better understood as dispositional properties of our cerebral states to produce certain further effects in the very observers whose states they are. Sensory qualities are not something distinct from the brain’s discriminative states; they are those states themselves, functioning in a particular environment and organism.

Key concept: Qualitative properties that are intrinsically conscious are a myth, an artifact of misguided theorizing, not anything given pretheoretically.

9. The Practical Requirements for Making a Conscious Robot

Building a humanoid robot like Cog can help us understand the practical requirements for creating intelligent, adaptive behavior in a real-world environment. This “bottom-up” approach to artificial intelligence emphasizes the importance of embodiment, real-time interaction, and learning from experience. The goal is not to build a conscious robot per se, but rather a robot that can shed light on the design principles of intelligence and perhaps on the nature of consciousness itself.

Key concept: The intended realism of Cog as a model is relatively coarse-grained, varying opportunistically as a function of what we think we know, what we think we can build, and what we think doesn’t matter.

10. The Unimagined Preposterousness of Zombies: Commentary on Moody, Flanagan, and Polger

The philosophical concept of a zombie, a being physically identical to a human but lacking consciousness, is based on a flawed understanding of consciousness. Zombies, if they are to be truly behaviorally indistinguishable from us, must also be capable of all the higher-order reflections that we engage in, thus undermining the supposed contrast between the conscious and the nonconscious.

Key concept: Philosophers ought to have dropped the zombie like a hot potato.

11. Cognitive Wheels: The Frame Problem of AI

The frame problem highlights the challenge of designing an artificial intelligence that can effectively use its knowledge to anticipate the consequences of actions and plan accordingly. It’s not enough for a system to have access to a vast amount of information; it must also be able to distinguish relevant from irrelevant information in real-time.

Key concept: The frame problem of AI, in its ‘whole pudding’ guise, is everybody’s problem, not just a problem for AI.

12. Producing Future by Telling Stories

One approach to addressing the frame problem is to adopt simplifying strategies. These include making some aspects of the environment salient, compartmentalizing unpredictable events as the ‘choices’ of agents, and adopting the intentional stance towards these agents. Using narrative schemata and allowing them to “try to happen” could help an agent utilize its memory in complex environments.

Key concept: ‘Actions are the poor man’s physics.’ - Yoav Shoham

13. The Logical Geography of Computational Approaches: A View from the East Pole

Different schools of thought in AI and cognitive science can be metaphorically mapped onto a logical geography, with MIT as the “East Pole” and other approaches ranging westward. High Church Computationalism (HCC) emphasizes symbol manipulation and rule-based systems, while “West Coast” approaches like connectionism explore more distributed, bottom-up models. These are not fundamentally empirical distinctions, but rather represent differing ideologies about how to approach building models of cognitive phenomena.

Key concept: MIT is the East Pole, and from a vantage point at the East Pole, the inhabitants of Chicago, Pennsylvania, Sussex, and even Brandeis University in Waltham are all distinctly Western in their appearance and manners.

14. Hofstadter’s Quest: A Tale of Cognitive Pursuit

Douglas Hofstadter’s work emphasizes the importance of analogy-making and metaphor appreciation in cognition. His approach, grounded in careful phenomenology and the development of progressively refined computer models, offers valuable insights into the workings of the mind, especially in domains involving creativity and insight.

Key concept: Hofstadter’s work…provides a fine demonstration of the powers of [his] school of thought in AI.

15. Foreword to Robert French, The Subtlety of Sameness

Robert French’s Tabletop model provides a concrete example of an AI system that can appreciate analogies, illustrating Hofstadter’s approach to AI. By focusing on a specific, familiar domain, the model reveals both the potential and the limitations of current AI techniques.

Key concept: A single well-developed example of a concept applied is often better than ten pages of definition.

16. Cognitive Science as Reverse Engineering: Several Meanings of ‘Top-Down’ and ‘Bottom-Up’

Cognitive science can be viewed as reverse engineering, attempting to understand the design principles of existing biological systems. This process is further complicated in biological systems where the design process, natural selection, has no foresight, causing multiple-use parts and beneficial side-effects to be discovered opportunistically.

Key concept: Mother Nature is a stingy, opportunistic engineer who takes advantage of rough correspondences whenever they are good enough for the organism’s purposes, given its budget.

17. Artificial Life as Philosophy

Artificial life is not just a scientific endeavor but also a philosophical tool. By creating and studying artificial systems, we can generate and test hypotheses about a wide range of philosophical issues, from the nature of life and mind to the origins of cooperation and the foundations of ethics.

Key concept: Artificial Life research is the creation of prosthetically controlled thought experiments of indefinite complexity.

19. Review of Allen Newell, Unified Theories of Cognition

When philosophers encounter artificial intelligence, they often focus on familiar philosophical problems: the nature of mind, meaning, rationality, and consciousness. However, they should also be open to the new problems and possibilities raised by AI, such as the frame problem, the nature of background knowledge, and the potential of connectionist models.

Key concept: AI is, in large measure, philosophy.

20. Out of the Armchair and into the Field

Studying animal behavior can provide valuable data for theories of mind, including clues about the origins of language and consciousness. Direct observation of animal behavior in the wild can provide useful constraints for thought experiments.

Key concept: We first have to start from the top, from some conception of what consciousness is, from what our own introspection is.

21. Cognitive Ethology: Hunting for Bargains or a Wild Goose Chase

Cognitive ethology can be improved by drawing insights from AI research. The intentional stance is a tool for generating testable hypotheses about animal beliefs and desires. Studying simpler organisms can be a useful starting point for understanding more complex cognitive systems.

Key concept: One adopts a strategy of treating the systems in question as intentional systems.

22. Do Animals Have Beliefs?

The question of whether animals have beliefs is hampered by disagreement over the definition of “belief.” A broad definition of belief, focused on behavior rather than internal representations, can be useful for explaining and predicting animal behavior. The intentional stance is a valuable tool in this regard, allowing us to attribute beliefs and desires to animals in a way that generates testable predictions.

Key concept: Do animals have beliefs?

23. Why Creative Intelligence Is Hard to Find: Commentary on Whiten and Byrne

Creative intelligence in animals is difficult to identify because both deceptive tactics and their countermeasures tend to be short-lived in evolutionary arms races. The concept of intelligence itself is not entirely objective, but rather depends on how we interpret an agent’s behavior in light of its environment and evolutionary history.

Key concept: There is a systematic instability in the phenomenon of creatively intelligent tactical deception (if it exists!) that will tend to frustrate efforts of interpretation.

24. Animal Consciousness: What Matters and Why

Animal consciousness is a complex and controversial issue, fraught with philosophical and scientific pitfalls. While consciousness is a real phenomenon, it is not a simple, all-or-none property. Pain and suffering are distinct, and attributing human-like feelings to animals based on casual observation is fraught with peril. A more rigorous approach involves looking at the specific adaptive problems faced by different species and investigating the neural mechanisms that have evolved to address those problems.

Key concept: The phenomenon of pain is neither homogeneous across species, nor simple.

25. Self-Portrait

Content and consciousness are the two central issues in the philosophy of mind. Content, or intentionality, is the more fundamental of the two and should be addressed first. My own approach to these topics is shaped by a naturalistic perspective, emphasizing the importance of understanding the brain as a physical, biological system.

Key concept: In my opinion, the two main topics in the philosophy of mind are content and consciousness.

26. Information, Technology and the Virtues of Ignorance

Information technology presents both opportunities and dangers for human values. As technology advances, it creates new moral dilemmas by increasing our knowledge and power, but potentially leading to an erosion of traditional virtues, skills, and forms of life. We must be mindful of the ways in which technology can transform our lives and strive to ensure that these transformations are for the better, not the worse.

Key concept: ‘Ought’ implies ‘can’; what is beyond your powers is beyond your obligations.

Essential Questions

1. Can machines think?

The Turing Test, as described in this book, probes the question of machine intelligence, not as a scientific benchmark but as a philosophical conversation starter. Can a machine genuinely ‘think’, or merely simulate thought convincingly enough to fool a human judge? I argue that a machine successfully passing a rigorous Turing Test could be considered a thinking entity for all practical purposes, but the more profound implication lies in challenging our very definition of thinking. It disarms anthropocentric biases and species chauvinism, opening our minds to the possibility of non-human intelligence.

2. How many selves can one person have?

This question explores the contentious nature of the self and the possibility of its fragmentation in cases of Multiple Personality Disorder (MPD). Are these distinct personalities ‘real’ or simply elaborate role-playing? Are there objective markers of genuine multiplicity, and how do we account for memory discrepancies and shifts in identity? This book argues that MPD is a genuine disorder of the self, often with roots in childhood trauma, which raises profound ethical questions about responsibility, agency, and the unity of consciousness.

3. How do we make meaning?

This question investigates how we develop meaning and understanding. Is meaning actively ‘constructed’ by a central interpreter in the mind, or is it a more distributed, evolving process shaped by interaction with the world? I argue against the idea of ‘do-it-yourself’ understanding, suggesting that meanings don’t have direct causal power in the brain but rather emerge from complex interactions. This book also examines various ways meanings might be “grounded”, suggesting that real patterns in behavior are a crucial source of semantic content.

4. What is consciousness, really?

This book challenges the traditional view of consciousness as a unified, centralized phenomenon. Is consciousness a real, discrete property or a construct arising from a dynamic, distributed system? I introduce the Multiple Drafts Model, which views consciousness as a sort of cerebral celebrity. Contents become ‘conscious’ when they become temporarily dominant in influencing behavior and other cognitive processes. This view dismantles the Cartesian Theater and opens up new possibilities for investigating the neural correlates of consciousness.

Key Takeaways

1. Intelligence involves efficient, not perfect, use of knowledge.

The frame problem highlights a key challenge for AI: how can we design a system that can effectively use its knowledge to plan and anticipate the consequences of its actions, without getting bogged down in combinatorial explosion? Traditional, logic-based approaches face severe limitations in real-time scenarios. A more promising approach might involve drawing inspiration from simpler biological systems and looking for efficient, heuristic strategies that can approximate ideal performance.

Practical Application:

When designing a chatbot, focus not on pre-programming every possible response (which is impossible), but on creating a flexible system that can learn from its interactions and adapt to novel situations. This approach might involve equipping the chatbot with a set of core conversation strategies, then allowing it to refine these strategies through trial and error, learning to distinguish relevant from irrelevant information on its own.

2. Good enough is often better than perfect in design.

The book emphasizes the importance of a flexible, iterative approach to design, one that recognizes the limitations of foresight and the inevitability of unintended side effects. Natural selection, unlike human engineers, doesn’t plan ahead with perfect precision. It works by a process of blind variation and selective retention, a strategy that is well suited to the complexities of biological and cognitive systems. This suggests that ‘good enough’ solutions that can adapt and improve over time are often more robust than brittle, over-engineered systems that attempt to achieve perfection on the first pass.

Practical Application:

In product design, don’t strive for perfection on the first pass. Instead, create a ‘good enough’ prototype that can be tested and refined in the real world. Be prepared for unforeseen ‘bugs’ and unexpected interactions, and use these as opportunities to improve the design iteratively. Don’t attempt to anticipate every possible user scenario or edge case up front; instead, create a system that can learn and adapt to the demands of its users.

3. Avoid overselling AI’s capabilities; focus on its actual workings.

I argue that folk psychology, while a powerful tool for prediction, is often shrouded in false ideology. We tend to anthropomorphize systems, attributing to them more understanding and agency than they actually possess. This tendency can be particularly dangerous when dealing with sophisticated AI systems, leading to an overestimation of their capabilities and a failure to appreciate their limitations. The book stresses the importance of a scientifically informed understanding of how such systems actually work, to avoid the pitfalls of both Cartesian dualism and simplistic behaviorism.

Practical Application:

When communicating about complex AI systems, don’t mislead users by implying more understanding than is actually present. Instead, be transparent about the limitations of the system and provide tools for users to understand how it works and what kinds of errors or biases it might exhibit. This will empower users to interact with the system more effectively and to make informed judgments about its reliability.

Suggested Deep Dive

Chapter: Where Am I?

This chapter provides the most concentrated exposition of my view on the self as a ‘Center of Narrative Gravity’. It serves as a sort of user manual for navigating the rest of the essays in the book.

Memorable Quotes

Can Machines Think?, p. 3

‘I propose to consider the question, “Can machines think?”’

Can Machines Think?, p. 19

Instead of arguing interminably about the ultimate nature and essence of thinking, why don’t we all agree that whatever that nature is, anything that could pass this test would surely have it…

Speaking for Our Selves, p. 33

Somewhere between these two scenarios lies the phenomenon of multiple personality in human beings.

Do-It-Yourself Understanding, p. 61

There is something privileged - or perhaps proprietary would be a better term - about the state of understanding.

Two Contrasts, p. 95

Folk psychology is an extraordinarily powerful source of prediction.

Comparative Analysis

In contrast to purely philosophical works on the mind that are often light on empirical details, ‘Brainchildren’ engages extensively with research in artificial intelligence, neuroscience, and cognitive psychology. Unlike Jerry Fodor, who insists on a ‘language of thought’ and symbolic representations in the brain, I argue for a more distributed, less language-like view of mental representation. While I share some common ground with Donald Davidson’s ‘anomalous monism’, I disagree with his view on the trivial nature of indeterminacy of translation, suggesting a more radical interpretation is necessary. Unlike eliminativists like Paul Churchland who suggest folk psychology may be replaced, I argue for its enduring power, albeit needing some revisions and refinements as suggested by connectionism. Compared to philosophical discussions on consciousness that often focus on thought experiments with limited real-world application, ‘Brainchildren’ incorporates findings from actual fieldwork in animal behavior and robotics projects like Cog, adding ecological validity to the philosophical discussions.

Reflection

Brainchildren is a valuable contribution to the ongoing conversation about mind, meaning, and consciousness. Its strength lies in its rigorous, scientifically informed approach and its willingness to challenge traditional assumptions. However, the book’s reliance on thought experiments and philosophical argumentation can be perceived as a weakness by those seeking definitive empirical answers. Furthermore, my strong stance against traditional views on qualia and the nature of consciousness has not been universally accepted and continues to be a source of controversy. I do not offer ready-made solutions to the hard problems of consciousness, but I do offer a framework for thinking about them in a new way, drawing on insights from a variety of disciplines. This book’s emphasis on a naturalistic, bottom-up approach to studying the mind has important implications not only for the philosophy of mind, but also for the fields of artificial intelligence, cognitive science, and animal behavior.

Flashcards

What is the Turing test?

A test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

What is species chauvinism in the context of AI?

The erroneous belief that the nature of thinking requires human-like physical characteristics, like soft skin or warm blood.

What is the proposed addition to the Turing test involving object identification?

A proposed supplement to the Turing test in which the candidate must identify familiar objects to prove it is embodied; the book argues a properly conducted Turing test already renders this requirement redundant.

What is PARRY?

A computer program that simulated the conversation of a paranoid patient, used in a modified Turing test with psychiatrists as judges.

What are expert systems?

Computer systems designed to mimic the decision-making ability of human experts in specific fields.

What is the frame problem?

The challenge faced by AI in representing how actions or events change a situation while most things stay the same.

What is anthropomorphism?

The tendency to attribute human-like emotions and motivations to non-human entities, including animals and robots.

What is the quick-probe assumption?

The assumption that an entity winning a specific test of intelligence is likely capable of performing many other intelligent actions.

What is iatrogenic Multiple Personality Disorder?

A condition where a patient develops multiple personalities, potentially influenced by interactions with therapists.

What is changeover in MPD?

The apparently spontaneous switching of the dominant personality in a person with Multiple Personality Disorder, often marked by a momentary vacancy.
