Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence

Author: Kate Crawford

Overview

Atlas of AI explores how artificial intelligence is not an abstract, disembodied force but a physical infrastructure deeply entwined with the planet’s resources, labor, and data. It challenges the mythology of AI as separate from social and political forces, emphasizing its role in amplifying existing power structures.

The book speaks to anyone concerned about the societal impact of technology, particularly those working in AI and related fields. Its relevance stems from the increasing deployment of AI systems across all aspects of life, from policing and healthcare to education and the workplace. It critically examines AI’s extractive nature, highlighting its dependence on exploited labor and resources and raising crucial questions about justice, ethics, and the future of work.

Atlas of AI aims to understand how AI is made in the widest sense, examining the economic, political, cultural, and historical forces that shape it. By adopting a topographical approach, the book explores diverse perspectives and scales, going beyond the narrow focus on algorithms to reveal AI’s true costs. This analysis aims to expose the power dynamics embedded within AI, fostering a critical understanding necessary for making informed collective decisions about its future trajectory.

Book Outline

1. Earth

AI systems depend on exploiting energy and mineral resources, cheap labor, and data at scale. They are not autonomous, rational, or able to discern anything without extensive training; rather, they reflect and produce social relations and understandings of the world.

Key concept: Artificial intelligence is neither artificial nor intelligent. It is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications.

2. Labor

AI is made of human labor, from the digital pieceworkers paid pennies to click on microtasks to Amazon warehouse employees keeping in time with algorithmic cadences. Coordinating the actions of humans with the repetitive motions of robots has always involved controlling bodies in space and time.

Key concept: Labor in AI is a story about time. AI technologies create conditions for ever more granular and precise mechanisms of temporal management, demanding more data about what people are doing and how and when they do it.

3. Data

All publicly accessible digital material—including personal or potentially damaging data—is open to being harvested to train AI models. Gigantic datasets of images and text are used to improve algorithms for facial recognition, language prediction, and object detection.

Key concept: All publicly accessible digital material, including data that is personal or potentially damaging, is treated as fair game for training datasets. Current practices of working with data in AI raise profound ethical, methodological, and epistemological concerns.

4. Classification

Classification in AI naturalizes hierarchies and magnifies inequity, presenting us with a regime of normative reasoning. Labels are used to predict human identity (gender, race, character), mistaking a sign for a system, a proxy for the real, and a toy model for human complexity.

Key concept: Classifications are powerful technologies. Embedded in working infrastructures, they become relatively invisible without losing any of their power. Classification is an act of power.

5. Affect

According to psychologist Paul Ekman, facial expressions reveal a small set of universal emotions. Affect recognition tools premised on reading facial expressions, however, face serious ethical and scientific doubts: there is little evidence that interior states can be accurately assessed through facial analysis.

Key concept: There are no reliably detectable “universal emotions”. The claim that a person’s interior state can be accurately assessed from facial analysis is based on shaky evidence.

6. State

AI systems are tools of state power, shaping practices of surveillance, data extraction, and risk assessment. The military past of AI is being revived with a strong nationalist agenda, with extralegal tools from the intelligence community spreading to commercial uses.

Key concept: AI functions as a structure of power that combines infrastructure, capital, and labor. AI systems are built with logics of capital, policing, and militarization, widening existing asymmetries of power.

7. Conclusion: Power

AI is a political, economic, cultural, and scientific force, bound up with struggles for economic mobility and political maneuvering. The practice of justice and the limits of power are essential considerations, above all the question: Whose interests does AI serve, and who bears the greatest risk?

Key concept: Power concedes nothing without a demand. It never did and it never will.

8. Coda: Space

Space colonization is presented as a solution to Earth’s resource limitations, but it is driven by the same expansionist logic of extraction. Propelled by a fear of limits and of death, this vision ignores the alternative of minimizing consumption and exploitation.

Key concept: Space has become the ultimate imperial ambition.

Essential Questions

1. How does Crawford challenge the conventional understanding of AI as an immaterial force?

Crawford argues that the portrayal of AI as an abstract, disembodied entity obscures its material reality and far-reaching societal impacts. She emphasizes that AI is built upon the exploitation of natural resources through mining for rare earth minerals, which are essential for computing hardware. Furthermore, AI’s vast energy consumption fuels its carbon footprint, impacting the environment and contributing to climate change. The book underscores the human labor behind AI, from data labelers to warehouse workers, who are often subject to precarious working conditions and surveillance. By tracing AI’s physical and human costs, Crawford aims to deconstruct the myth of “clean” technology and reveal its extractive nature.

2. How do classifications in AI systems reflect and reinforce existing power dynamics?

Crawford argues that the seemingly objective classifications used in AI systems are not neutral but reflect and reinforce existing power structures. These classifications, whether related to race, gender, or socioeconomic status, are embedded within training data and shape how AI systems “see” and interpret the world. This process can lead to discriminatory outcomes and exacerbate existing inequalities. The book challenges the focus on technical debiasing, arguing that addressing the root causes of bias requires a deeper understanding of how social and political forces shape the creation and deployment of AI systems. Ultimately, Crawford calls for a shift towards a more just and equitable approach to AI development, one that prioritizes human well-being and social justice over profit and control.

3. How does Crawford connect AI to state power and surveillance?

Crawford challenges the notion that AI is primarily about code and algorithms, emphasizing instead its role as a powerful tool of state control and surveillance. She traces the military origins and funding of AI research, showing how concepts from the battlefield, such as targeting and situational awareness, have permeated civilian applications. The book explores how AI is used by governments and law enforcement agencies to surveil, track, and control populations, especially marginalized communities. The increasing reliance on private companies like Palantir further blurs the lines between state and corporate power, raising concerns about accountability and transparency in the deployment of AI for surveillance.

4. What economic and political forces does Crawford argue drive AI development?

Crawford emphasizes that AI is not a neutral or objective force but is shaped by specific economic and political contexts. She argues that the current dominant model of AI development is driven by extractive capitalism, where the focus is on maximizing profit and centralizing control. This leads to practices that prioritize efficiency and scale at the expense of social and environmental well-being. The book calls for a critical examination of the economic incentives and political forces driving AI research and development, urging a shift towards a more just and sustainable model of technological progress.

5. What alternatives does Crawford propose to the current model of AI development, and how does she advocate for social and political intervention?

Crawford challenges the prevailing focus on technical solutions to the problems posed by AI, advocating instead for broader social and political interventions. She calls for a “politics of refusal,” rejecting the notion of AI’s inevitability and demanding greater scrutiny of its purpose and potential harms. This includes challenging the concentration of power within the tech sector, advocating for stronger data protection and labor rights, and promoting alternative visions of technological development that prioritize social and environmental justice. Ultimately, Crawford urges readers to move beyond technical debates about bias and fairness and engage in collective action to shape the future trajectory of AI.

Key Takeaways

1. AI systems can perpetuate and amplify existing societal biases.

AI systems are often trained on historical data that reflects existing societal biases. This can lead to discriminatory outcomes when these systems are deployed. For example, AI hiring tools trained on biased datasets may unfairly penalize female applicants or favor candidates from certain socioeconomic backgrounds. To mitigate this, developers must carefully consider the potential for bias in training data and ensure diversity and representation in the datasets used to train AI models.

Practical Application:

When designing an AI-powered hiring system, consider the potential biases embedded in historical hiring data and ensure diverse representation in training datasets to prevent discrimination against specific demographic groups. For instance, actively seek resumes from women and underrepresented minorities to counter historical biases toward male candidates.
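
The following is a minimal sketch of such a representation audit in Python with pandas. The file name and column names (historical_hiring.csv, gender, hired) are hypothetical placeholders, not anything from the book:

```python
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    """Summarize each group's share of the data and its positive-label rate."""
    total = len(df)
    return df.groupby(group_col)[label_col].agg(
        share_of_data=lambda s: len(s) / total,  # fraction of all records
        positive_rate="mean",                    # e.g., fraction labeled 'hired'
    )

if __name__ == "__main__":
    # Hypothetical schema: one row per applicant, with a binary 'hired' label.
    records = pd.read_csv("historical_hiring.csv")
    print(representation_report(records, group_col="gender", label_col="hired"))
```

A large gap between a group’s share_of_data and its positive_rate is not proof of bias, but it flags where the historical record may be encoding past discrimination.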

2. AI systems rely on hidden human labor, often performed in exploitative conditions.

While AI is often touted as automating labor, many AI systems rely on hidden human labor, often performed by low-paid workers in precarious conditions. This “ghost work” is essential for AI systems to function but rarely receives recognition or fair compensation. For example, content moderation for social media platforms often involves human workers reviewing disturbing or graphic content, with significant potential for psychological harm.

Practical Application:

When designing an AI system for content moderation, be mindful of the emotional labor involved in reviewing graphic or disturbing content. Provide adequate support and resources for human moderators, including mental health services and regular breaks, to mitigate the potential for psychological harm.

3. Training data is extracted without consent and treated as a raw resource.

The “raw data” used to train AI systems is often collected without consent, stripped of context, and treated as an interchangeable resource. This can lead to ethical concerns about privacy violations, data ownership, and the potential for harm. Crawford advocates for greater transparency and accountability in data collection practices and for a shift away from the extractive model towards a more ethical and respectful approach to data.

Practical Application:

An AI product designed for environmental monitoring could prioritize data transparency by making its algorithms and training datasets publicly accessible. This would allow independent researchers to scrutinize the system’s underlying assumptions, evaluate its accuracy and limitations, and identify any potential biases.
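
One lightweight way to operationalize that transparency is a machine-readable datasheet published alongside the dataset. This is a hedged sketch in the general spirit of dataset-documentation proposals; every field name and value below is a hypothetical placeholder:

```python
import json

# All fields and values are illustrative, not a real dataset's metadata.
datasheet = {
    "name": "air-quality-sensors-v1",
    "collection_method": "public municipal sensor feeds",
    "personal_data": "none; environmental telemetry only",
    "known_limitations": [
        "sparse rural coverage",
        "sensor drift beyond two years of deployment",
    ],
    "license": "CC-BY-4.0",
    "contact": "dataset-maintainers@example.org",
}

# Ship the datasheet next to the data so reviewers can audit assumptions.
with open("DATASHEET.json", "w") as f:
    json.dump(datasheet, f, indent=2)
```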

4. Classifications in AI are not neutral but encode power dynamics.

The act of classifying and categorizing in AI is not neutral but reflects particular worldviews and power dynamics. The categories used to train AI systems can shape how they interpret the world and make decisions, potentially reinforcing existing inequalities and biases. Crawford argues that classification itself is a political act, with far-reaching social and material consequences.

Practical Application:

For instance, a facial recognition system could be designed to detect and identify individuals without categorizing them by race or gender, avoiding the potential for discrimination and biased outcomes. Focus on functional identification rather than classification into potentially problematic categories.
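
As a concrete illustration of that design choice, here is a minimal sketch of verification by embedding similarity, assuming face embeddings come from some upstream model. The 0.8 threshold and the 128-dimensional vectors are arbitrary placeholders; the point is that no demographic attribute is ever computed or stored:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.8) -> bool:
    """Match a probe embedding against an enrolled record.

    Only geometric similarity is used; race, gender, and other
    demographic categories are never inferred or stored.
    """
    return cosine_similarity(probe, enrolled) >= threshold

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
print(same_person(rng.normal(size=128), rng.normal(size=128)))
```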

5. AI is not a neutral force but is deeply intertwined with politics and power.

AI systems are not developed or deployed in a vacuum. They are shaped by and embedded within existing political and economic structures, often serving the interests of powerful institutions and corporations. Crawford argues that AI is not simply a technical domain but a political one, requiring critical analysis of its role in shaping power dynamics and social relations.

Practical Application:

A social media company using AI for content moderation could collaborate with impacted communities to develop more culturally sensitive and nuanced classification schemes for hate speech or harmful content, moving beyond simplistic and potentially biased labeling practices.

Suggested Deep Dive

Chapter: Labor

This chapter offers a tangible understanding of AI’s human cost, exploring issues like worker exploitation, surveillance, and the changing nature of work. This material is particularly relevant for product engineers working on AI systems that directly impact labor practices.

Memorable Quotes

Introduction, p. 15

How is intelligence “made,” and what traps can that create?

Introduction, p. 16

AI systems are ultimately designed to serve existing dominant interests. In this sense, artificial intelligence is a registry of power.

Introduction, p. 17

AI is narrowly understood as disembodied intelligence, removed from any relation to the material world.

Classification, p. 119

For if scientists can be honestly self-deluded . . . then prior prejudice may be found anywhere, even in the basics of measuring bones and toting sums.

Conclusion, p. 205

The master’s tools will never dismantle the master’s house.

Comparative Analysis

Atlas of AI stands out for its material and political analysis of AI, in contrast with technical or solutionist approaches. Unlike Cathy O’Neil’s “Weapons of Math Destruction,” which focuses on algorithmic bias, Crawford emphasizes the broader systems of power within which AI operates. She shares Shoshana Zuboff’s concerns in “The Age of Surveillance Capitalism” about data extraction but goes further by connecting it to material resources and labor exploitation. Like Ruha Benjamin’s “Race After Technology,” Crawford highlights the ways AI systems reinforce existing inequalities, but she offers a more infrastructural analysis rooted in planetary computation networks. Finally, Crawford’s work complements research in science and technology studies, such as Lucy Suchman’s work on human-machine interaction, by adding a focus on the political economy of AI.

Reflection

Atlas of AI provides a crucial counterpoint to the dominant narratives of AI as an abstract, disembodied force. Crawford’s emphasis on the material, political, and social dimensions of AI offers a valuable corrective to techno-solutionist and utopian/dystopian perspectives. However, her focus on the negative aspects of AI could be perceived as overly deterministic, potentially overshadowing the potential benefits and diverse applications of these technologies. It also risks essentializing AI as a singular entity, overlooking the internal debates and diverse approaches within the field. A more skeptical reader might question the extent to which Crawford’s political stance influences her interpretation of the evidence, while acknowledging the importance of raising critical questions about the societal impacts of AI. Overall, Atlas of AI offers a timely and provocative intervention in the ongoing debates about the future of AI, urging readers to engage with the complex ethical, social, and political dimensions of these powerful technologies.

Flashcards

What are the material components of AI?

Rare earth minerals essential for computing hardware, oil and coal for energy, vast amounts of data, and often exploited human labor.

What is the technical focus critique in AI?

The tendency to focus solely on technical aspects of AI, ignoring the social, political, and economic forces that shape it.

What are some hidden costs of AI?

Exploitation of planetary resources, disregard for environmental impacts, and reliance on precarious labor practices.

What is “ghost work” in the context of AI?

Hidden labor involved in data labeling, content moderation, and other tasks essential for AI systems, often poorly paid and unrecognized.

What is the problem with classifying by race and gender in AI?

The idea that AI systems can objectively classify people based on their physical characteristics, perpetuating discriminatory practices.

What is the critique of affect recognition?

The flawed assumption that facial expressions are universally recognizable and reliably indicate internal emotional states.

What is a ‘signature strike’?

A method used by the CIA to justify drone strikes based on patterns of behavior rather than confirmed identity, often leading to civilian casualties.

What is AI’s scoring logic?

The pervasive use of scoring and risk assessment in AI systems, amplifying existing social inequalities and reinforcing a logic of control.

How does AI operate as a structure of power?

Centralized control over resources, data, and labor; prioritization of profit over social good; reproduction and amplification of existing inequalities.

What are potential pathways towards a more just and equitable AI future?

Collective action, challenging dominant narratives, demanding accountability and transparency, advocating for data protection and labor rights, and promoting alternative, just visions of technology.
