Sue Curry Jansen and Jeff Pooley — Neither Artificial nor Intelligent (review of Crawford, Atlas of AI, and Pasquale, New Laws of Robotics)


a review of Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale UP, 2021) and Frank Pasquale, New Laws of Robotics: Defending Human Expertise in the Age of AI (Harvard UP, 2020)

by Sue Curry Jansen and Jeff Pooley

Artificial intelligence (AI) is a Faustian dream. Conceived in the future tense, it promises to transcend the messiness of embodiment, the biases of human cognition, and the limitations of mortality; its most ardent visionaries seek to create an enhanced form of intelligence that far surpasses the capacities of human brains. Entering its eighth decade, AI remains largely science fiction, despite recent advances in machine learning. Yet it has captured the public imagination since its inception and acquired potent ideological cachet. Robots have become AI’s humanoid faces, as well as icons of popular culture: cast as helpful companions or agents of the apocalypse.

The transcendent vision of artificial intelligence has educated, informed, and inspired generations of scientists, military strategists, policy makers, entrepreneurs, writers, artists, filmmakers, and marketers. However, apologists have also frequently invoked AI’s authority to mystify, intimidate, and silence resistance to its vision, teleology, and deployments. Where, for example, the threat of automation once triggered labor activism, rallying opposition to an esoteric branch of computer science research that few non-specialists understand is a rhetorical non-starter. So is campaigning for alternatives to smart apps, homes, cars, cities, borders, and bombs.

Two remarkable new books, Kate Crawford’s Atlas of AI and Frank Pasquale’s New Laws of Robotics: Defending Human Expertise in the Age of AI, provide provocative critical assessments of artificial intelligence in clear, accessible, and engaging prose. Both books have titles that could discourage novices, but they are, in fact, excellent primers for non-specialists on what is at stake in the current ascendancy of AI science and ideology—especially if read in tandem.

Crawford’s thesis—“AI is neither artificial nor intelligent”—cuts through the sci-fi hype to radically reground AI power-knowledge in material reality. Beginning with its environmental impact on planet Earth, her narrative proceeds vertically to demystify AI’s ways of seeing—its epistemology, methodology, and applications—and then to examine the roles of labor, ideology, the state, and power in the AI enterprise. She concludes with a coda on space and the astronautical illusions of digital billionaires. Pasquale takes a more horizontal approach, surveying AI in health care, education, media, law, policy, economics, war, and other domains. His attention is on the practical present—on the ethical dilemmas posed by current and near-future deployments of AI. His through line is that human judgment, backed by policy, should steer AI toward human ends.

Despite these differences, Crawford and Pasquale converge on several critical points. First, they agree that AI models are skewed by economic and engineering values to the exclusion of other forms of knowledge and wisdom. Second, both endorse greater transparency and accountability in artificial intelligence design and practices. Third, they agree that AI datasets are skewed: Crawford focuses on how natural language datasets, no matter how large, reproduce the biases of the populations they are drawn from, while Pasquale attends to designs that promote addictive engagement to optimize ad revenue. Fourth, both cite the residual effects of AI’s military origins on its logic, values, and rhetoric. Fifth, Crawford and Pasquale both recognize that AI’s futurist hype tends to obscure the real-world political and economic interests behind the screens—the market fundamentalism that models the world as an assembly line. Sixth, both emphasize the embodiment of intelligence, which encompasses tacit and muscle knowledge that cannot be fully extracted and abstracted by artificial intelligence modelers. Seventh, they both view artificial intelligence as a form of data-driven behaviorism, in the stimulus-response sense. Eighth, they observe that AI and economic experts claim priority for their own views—a position both reject.

Crawford literally travels the world to map the topologies of computation, beginning in the lithium mines of Nevada, moving on to Silicon Valley, Indonesia, Malaysia, China, and Mongolia, and ending under personal surveillance outside Jeff Bezos’ Blue Origin suborbital launch facility in West Texas. Demonstrating that AI is anything but artificial, she documents the physical toll it takes on the environment. Contra the industry’s earth-friendly PR and marketing, with its myth of clean tech and metaphors like ‘the Cloud,’ Crawford points out that AI systems are built on the consumption of finite resources that required billions of years to take form: “we are extracting Earth’s geological history to serve a split second of contemporary technological time, building devices like the Amazon Echo and iPhone that are often designed to last only a few years.” And the Cloud itself leaves behind a gigantic carbon footprint. AI data mining depends not only on human miners of rare minerals, but also on human labor functioning within a “registry of power” that is unequal and exploitative—where “many valuable automated systems feature a combination of underpaid digital piece workers and customers taking on unpaid tasks to make systems function,” all under constant surveillance.

Alongside the deskilling of human labor, there are what Crawford calls Potemkin AI systems, which work only because of hidden human labor—Bezos himself calls such systems “artificial artificial intelligence.” AI often doesn’t work as well as the humans it replaces, as, for example, in automated telephone customer service lines. But Crawford reminds us that AI systems scale up: customers ‘on hold’ replace legions of customer service workers in large organizations. Profits trump service. Her chapters on data and classification strip away the scientistic mystification of AI and Big Data. AI’s methodology is simply data at scale, and that data is biased at inception because it is collected indiscriminately: size, not substance, counts. A dataset extracted and abstracted from a society steeped in systemic racism will, for example, produce racist results. The increasing convergence of state and corporate surveillance not only undermines individual privacy, but also makes state actors reliant on technologies that they cannot fully understand as machine learning transforms them. In effect, Crawford argues, states have made a “devil’s bargain” with tech companies that they cannot control. These technologies, developed for command-and-control military and policing functions, increasingly erode the dialogic and dialectic nature of the democratic commons.

AI began as a highly subsidized public project in the early days of the Cold War. Crawford demonstrates, however, that it has been “relentlessly privatized to produce enormous financial gains for the tiny minority at the top of the extraction pyramid.” In collaboration with Alex Campolo, Crawford has described AI’s epistemological flattening of complexity as “enchanted determinism,” whereby “AI systems are seen as enchanted, beyond the known world, yet deterministic in that they discover patterns that can be applied with predictive certainty to everyday life.”[1] Some deep learning systems are so opaque that even the engineers who create them cannot interpret them. Yet those engineers cannot dismiss the systems either. In such cases, “enchanted determinism acquires an almost theological quality,” which tends to place AI beyond the critique of technological utopians and dystopians alike.

Pasquale, for his part, examines the ethics of AI as currently deployed, and often circumvented, in several contexts—medicine, education, media, law, the military, and the political economy of automation—in each case in relation to human wisdom. His basic premise is that “we now have the means to channel technologies of automation, rather than being captured or transformed by them.” Like Crawford, then, he recommends exercising a resistant form of agency. Pasquale’s focus is on robots as automated systems. His rhetorical point of departure is a critique and revision of Isaac Asimov’s highly influential “laws of robotics,” developed in a 1942 short story—more than a decade before AI was officially launched in 1956. Because the world and law-making are far more complex than a short story, Pasquale finds Asimov’s laws ambiguous and difficult to apply, and proposes four new ones, which become the basis of his arguments throughout the book. They are:

  1. Robotic systems and AI should complement professionals, not replace them.
  2. Robotic systems and AI should not counterfeit humanity.
  3. Robotic systems and AI should not intensify zero-sum arms races.
  4. Robotic systems and AI must always indicate the identity of their creator(s), controller(s), and owner(s).

‘Laws’ entail regulation, which Pasquale endorses to promote four corresponding values: complementarity, authenticity, cooperation, and attribution. The four laws’ deployment depends on a critical distinction that Pasquale draws between technologies that replace people and those that help us do our jobs better. Classic definitions of AI center on creating computers that “can sense, think, and act like humans.” Pasquale endorses an “Intelligence Augmentation” (IA) alternative. This is a crucial shift in emphasis; it is Pasquale’s own version of AI refusal.

He acknowledges that, in the current economy, “there are economic laws that tilt the scale toward AI and against IA.” In his view, however, the deployment of robots may offer an opportunity for humanistic intervention in AI’s hegemony, because robots, unlike phones, tablets, or sensors, are physically intrusive. They are there for a purpose, which we may accept or reject at our peril, but which we find hard to ignore. Robots are also being developed for fields that are already highly regulated, which offers an opportunity to shape their use in ways that conform to established legal standards of privacy and consumer protection. Pasquale advocates building humane (IA) values into the technology before robots are released into the wild.

In each of his topical chapters, he explains how robots and other AI systems designed to advance the values of complementarity, authenticity, cooperation, and attribution might enhance human existence and community. Some chapters stand out as particularly insightful, including those on “automated media,” human judgment, and the political economy of automation. One of Pasquale’s chapters addresses important terrain that Crawford does not consider: medicine. Given past abuses by medical researchers in exploiting and/or ignoring race and gender, the field may be especially sensitive and receptive to an IA intervention, despite the formidable economic forces stacked against it. Pasquale shows, for example, how IA has enhanced diagnostics in dermatology through pattern recognition, providing insight into what distinguishes malignant from benign moles.

In our view, Pasquale’s closing chapter, which endorses human wisdom over AI, itself displays multiple examples of that wisdom. But some of its impact is blunted by more diffuse discussions of literature and art, valuable though those practices may be in counterbalancing the instrumental values of economics and engineering. Nonetheless, Pasquale’s argument is an eloquent tribute to a “human form of life that is fragile, embodied in mortal flesh, time-delimited, and irreproducible in silico.”

The two books, read together, amount to a critique of AI ideology. Pasquale and Crawford write about the stuff that phrases like “artificial intelligence” and “machine learning” refer to, but their main concern is the mystique surrounding the words themselves. Crawford is especially articulate on this theme. She shows that, as an idea, AI is self-warranting. Floating above the undersea cables and rare-earth mines—ethereal and cloud-like—the discourse makes its compelling case for the future. Her work is to cut through the cloud cover, to reveal the mines and cables.

So the idea of AI justifies even as it obscures. What Crawford and Pasquale draw out is that AI is a way of seeing the world—a lay epistemology. When we see the world through the lens of AI, we see extraction-ready data. We see countable aggregates everywhere we look. We’re always peering ahead, predicting the future with machinic probabilism. It’s a view from Palo Alto that passes for a god’s-eye view. From up there, the continents look patterned and classification-ready. Earth-bound disorder is flattened into clear signal. What AI sees, in Crawford’s phrase, is a “Linnaean order of machine-readable tables.” It is, in Pasquale’s view, an engineering mindset that prizes efficiency over human judgment.

At the same time, as both authors show, the AI lens refracts the Cold War national security state that underwrote the technology for decades. Seeing like an AI means locating targets, assets, and anomalies. Crawford calls it a “covert philosophy of en masse infrastructural command and control,” a martial worldview etched in code.

As Kenneth Burke observed, every way of seeing is also a way of not seeing. What AI can’t see is also its raw material: human complexity and difference. There is, in AI, a logic of commensurability—a reduction of messy and power-laden social life into “computable sameness.” So there is a connection, as both Crawford and Pasquale observe, between extraction and abstraction. The activity of everyday life is extracted into datasets that, in their bloodless tabulation, abstract away their origins. Like Marx’s workers, we are then confronted by the alienated product of our “labor”—interviewed or consoled or policed by AIs that we helped build.

Crawford and Pasquale’s excellent books offer sharp and complementary critiques of the AI fog. Where they differ is in their calls to action. Pasquale, in line with his mezzo-level focus on specific domains like education, is the reformist. His aim is to persuade a policy community that he’s part of—to clear space between do-nothing optimists and fatalist doom-sayers. At core he hopes to use law and expertise to rein in AI and robotics, so that AI is deployed much more conscientiously, under human control and for human ends.

Crawford is more radical. She sees AI as a machine for boosting the power of the already powerful. She is skeptical of the movement for AI “ethics,” as insufficient at best and veering toward exculpatory window-dressing. Atlas of AI ends with a call for a “renewed politics of refusal,” predicated on a just and solidaristic vision of the future.

It would be easy to exaggerate Crawford and Pasquale’s differences, which reflect their projects’ scope and intended audience more than any disagreement of substance. Their shared call is to see AI for what it is. Left to follow its current course, the ideology of AI will reinforce the bars of the “iron cage” that sociologist Max Weber foresaw a century ago: incarcerating us in systems of power dedicated to efficiency, calculation, and control.

_____

Sue Curry Jansen is Professor of Media & Communication at Muhlenberg College, in Allentown, PA. Jeff Pooley is Professor of Media & Communication at Muhlenberg, and director of mediastudies.press, a scholar-led publisher. Their co-authored essay on Shoshana Zuboff’s Surveillance Capitalism—a review of the book’s reviews—recently appeared in New Media & Society.

_____

Notes

[1] Crawford acknowledges the collaboration with Campolo, her research assistant, in developing this concept and the chapter on affect, generally.
