b2o

  • Zachary Loeb — Is Big Data the Message? (Review of Natasha Lushetich, ed., Big Data—A New Medium?)

a review of Natasha Lushetich, ed., Big Data—A New Medium? (Routledge, 2021)

    by Zachary Loeb

When discussing the digital, conversations can quickly shift towards talk of quantity. Just how many images are being uploaded every hour, how many meticulously monitored purchases are being made on a particular e-commerce platform every day, how many vehicles are being booked through a ride-sharing app at 3 p.m. on a Tuesday, how many people are streaming how many shows/movies/albums at any given time? The specific answers to “how much?” and “how many?” will obviously vary depending upon the rest of the question, yet if one wanted to give a general response across these questions it would likely be fair to answer with some version of “a heck of a lot.” From this flows another, perhaps more complicated and significant question, namely: given the massive amount of information being generated by seemingly every online activity, where does all of that information actually go, and how is that information rendered usable and useful? To this the simple answer may be “big data,” but this in turn just serves to raise the question of what we mean by “big data.”

“Big data” denotes the point at which data begins to be talked about in terms of scale, not merely gigabytes but zettabytes. And, to be clear, a zettabyte represents a trillion gigabytes—and big data is dealing with zettabytes, plural. Beyond the sheer scale of the quantity in question, considering big data “as process and product” involves a consideration of “the seven Vs: volume” (the amount of data previously generated and newly generated), “variety” (the various sorts of data being generated), “velocity” (the highly accelerated rate at which data is being generated), “variability” (the range of types of information that make up big data), “visualization” (how this data can be visually represented to a user), “value” (how much all of that data is worth, especially once it can be processed in a useful way), and “veracity” (3) (the reliability, trustworthiness, and authenticity of the data being generated). In addition to these “seven Vs” there are also the “three Hs: high dimension, high complexity, and high uncertainty” (3). Granted, “many of these terms remain debatable” (3). Big data is both “process and product” (3); its applications vary from undergirding the sorts of real-time analysis that make it possible to detect viral outbreaks as they are happening, to the directions app that is able to suggest an alternative route before you hit traffic, to the recommendation software (be it banal or nefarious) that forecasts future behavior based on past actions.

To the extent that discussions around the digital generally focus on the end result(s) of big data, the means remain fairly occluded both from public view and from many of the discussants. And while big data has largely been accepted by some as an essential aspect of our digital lives, for many others it remains highly fraught.

As Natasha Lushetich notes, “in the arts and (digital) humanities…the use of big data remains a contentious issue not only because data architectures are increasingly determining classificatory systems in the educational, social, and medical realms, but because they reduce political and ethical questions to technical management” (4). And it is this contentiousness that is at the heart of Lushetich’s edited volume Big Data—A New Medium? (Routledge, 2021). Drawing together scholars from a variety of different disciplines ranging across “the arts and (digital) humanities,” this book moves beyond an analysis of what big data is to a complex consideration of what big data could be (and may be in the process of currently becoming). In engaging with the perils and potentialities of big data, the book (as its title suggests) wrestles with the question of whether or not big data can be seen as constituting “a new medium.” Through engaging with big data as a medium, the contributors to the volume grapple not only with how big data “conjugates human existence” but also how it “(re)articulates time, space, the material and immaterial world, the knowable and the unknowable; how it navigates or alters, hierarchies of importance” and how it “enhances, obsolesces, retrieves and pushes to the limits of potentiality” (8). Across four sections, the contributors take up big data in terms of knowledge and time, use and extraction, cultural heritage and memory, as well as people.

“Patterning Knowledge and Time” begins with a chapter by Ingrid M. Hoofd that places big data in the broader trajectory of the university’s attempt to make the whole of the world knowable. Considering how “big data renders its object of analysis simultaneously more unknowable (or superficial) and more knowable (or deep)” (18), Hoofd’s chapter examines how big data replicates and reinforces the ways in which what becomes legitimated as knowable is precisely that which can be known through the university’s (and big data’s) techniques. Following Hoofd, Franco “Bifo” Berardi provocatively engages with the power embedded in big data, treating it as an attempt to assert computerized control over a chaotic future by forcing it into a predictable model. Here big data is treated as a potential constraint wherein “the future is no longer a possibility, but the implementation of a logical necessity inscribed in the present” (43), as participation in society becomes bound up with making one’s self and one’s actions legible and analyzable to the very systems that enclose one’s future horizons. Shifting towards the visual and the environmental, Abelardo Gil-Fournier and Jussi Parikka consider the interweaving of images and environments and how data impacts this. As Gil-Fournier and Parikka explore, as a result of developments in machine learning and computer vision “meteorological changes” are increasingly “not only observable but also predictable as images” (56).

The second part of the book, “Patterning Use and Existence,” starts with Btihaj Ajana reflecting on the ways in which “surveillance technologies are now embedded in our everyday products and services” (64). By juxtaposing the biometric control of refugees with the quantified-self movement, Ajana explores the datafication of society and the differences (as well as similarities) between willing and forced participation in regimes of surveillance of the self. Highlighting a range of well-known gig-economy platforms (such as Uber, Deliveroo, and Amazon Mechanical Turk), Tim Christaens examines the ways that “the speed of the platform’s algorithms exceeds the capacities of human bodies” (81). While offering a thorough critique of the inhuman speed imposed by gig-economy platforms/algorithms, Christaens also offers a hopeful argument for the possibility that by making their software open source some of these gig platforms could “become a vehicle for social emancipation instead of machinic subjugation” (90). While aesthetic and artistic considerations appear in earlier chapters, Lonce Wyse’s chapter pushes fully into this area by looking at the ways that deep learning systems create the sorts of works of art “that, when recognized in humans, are thought of as creative” (95). Wyse provides a rich, and yet succinct, examination of how these systems function while highlighting the sorts of patterns that emerge (sometimes accidentally) in the process of training these systems.

At the outset of the book’s third section, “Patterning cultural heritage and memory,” Craig J. Saper approaches the magazine The Smart Set as an object of analysis and proceeds to zoom in and zoom out to reveal what is revealed and what is obfuscated at different scales. Highlighting that “one cannot arbitrarily discount or dismiss particular types of data, big or intimate, or approaches to reading, distant or close,” Saper’s chapter demonstrates how “all scales carry intellectual weight” (124). Moving away from the academic and the artist, Nicola Horsley’s chapter reckons with the work of archivists and the ways in which their intellectual labor and the tasks of their profession have been challenged by digital shifts. While archival training teaches archivists that “the historical record, on which collective memory is based, is a process not a product” (140), a lesson archivists seek to convey in their interactions with researchers, Horsley considers the ways in which the shift away from the physical archive and towards the digital archive (wherein a researcher may never directly interact with an archivist or librarian) means this “process” risks going unseen. From the archive to the work of art, Natasha Lushetich and Masaki Fujihata’s chapter explores Fujihata’s project BeHere: The Past in the Present and how augmented reality opens up the space for new artistic experience and challenges how individual memory is constructed. Through its engagement with “images obtained through data processing and digital frottage” the BeHere project reveals “new configurations of machinically (rather than humanly) perceived existents” and thus can “shed light on that which eludes the (naked) human eye” (151).

The fourth and final section of the volume begins with Dominic Smith’s exploration of the aesthetics of big data. While referring back to the “Seven Vs” of big data, Smith argues that to imagine big data as a “new medium” requires considering “how we make sense of data” in regards to both “how we produce it” and “how we perceive it” (164), a matter which Smith explores through an analysis of the “surfaces and depths” of oceanic images. Though big data is closely connected with sheer scale (hence the “big”), Mitra Azar observes that “it is never enough as it is always possible to generate new data and make more comprehensive data sets” (180). Tangling with this in a visual register, Azar contrasts the cinematic point of view with that of the big-data-enabled “data double” of the individual (which is meant to stand in for that user). Considering several of his own artistic installations—Babel, Dark Matter, and Heteropticon—Simon Biggs examines the ways in which big data reveals “the everyday and trivial and how it offers insights into the dense ambient noise that is our daily lives” (192). In contrast to treating big data as a revelator of the sublime, Biggs discusses big data’s capacity to show “the infra-ordinary” and the value of seemingly banal daily details. The book concludes with Warren Neidich’s speculative gaze towards what the future of big data might portend, couched in a belief that “we are at the beginning of a transition from knowledge-based economics to a neural or brain-based economy” (207). Surveying current big data technologies and the trajectories they may suggest, Neidich forecasts “a gradual accumulation of telepathic technocueticals” such that “at some moment a critical point might be reached when telepathy could become a necessary skill for successful adaptation…similar to being able to read in today’s society” (218).

    In the introduction to the book, Natasha Lushetich grounds the discussion in a recognition that “it is also important to ask how big data (re)articulates time, space, the material and immaterial world, the knowable and the unknowable; how it navigates or alters, hierarchies of importance” (8), and over the course of this fascinating and challenging volume, the many contributors do just that.

    ***

The term big data captures the way in which massive troves of digitally sourced information are made legible and understandable. Yet one of the challenges of discussing big data is trying to figure out a way to make big data itself legible and understandable. In discussions around the digital, big data is often gestured at rather obliquely as the way to explain a lot of mysterious technological activity in the background. We may not find ourselves capable, for a variety of reasons, of prying open the various black boxes of a host of different digital systems, but stamped in large letters on the outside of that box are the words “big data.” When shopping online or using a particular app, a user may be aware that the information being gathered from their activities is feeding into big data and that the recommendations being promoted to them come courtesy of the same. Or they may be obliquely aware that there is some sort of connection between the mystery-shrouded algorithms and big data. Or the very evocation of “big,” when twinned with a recognition of surveillance technologies, may serve as a discomforting reminder of “big brother.” Or “big data” might simply sound like a non-existent episode of Star Trek: The Next Generation in which Lieutenant Commander Data is somehow turned into a giant. All of which is to say that, though big data is not a new matter, the question of how to think about it (which is not the same as how to use and be used by it) remains a challenging issue.

With Big Data—A New Medium?, Natasha Lushetich has assembled an impressive group of thinkers to engage with big data in a novel way. By raising the question of big data as “a new medium,” the contributors shift the discussion away from considerations focused on surveillance and algorithms to wrestle with the ways that big data might be similar to, and distinct from, other mediums. While this shift does not represent a rejection of, or a move to ignore, the important matters related to issues like surveillance, the focus on big data as a medium raises a different set of questions. What are the aesthetics of big data? As a medium, what are the affordances of big data? And what does it mean for other mediums that in the digital era so many of those mediums are themselves being subsumed by big data? After all, so many of the older mediums that theorists have grown so accustomed to discussing have undergone some not insignificant changes as a result of big data. And yet to engage with big data as a medium also opens up a potential space for engaging with big data that does not treat it as being wholly captured and controlled by large tech firms.

The contributors to the volume do not seem to be fully in agreement with one another about whether big data represents poison or panacea, but the chapters are clearly speaking to one another instead of shouting over each other. There are certainly some contributions to the book, notably Berardi’s, with its evocation of a “new century suspended between two opposite polarities: chaos and automaton” (44), that seem a bit more pessimistic. Other contributors, such as Christaens, engage with the unsavory realities of contemporary data-gathering regimes but envision the ways that these can be repurposed to serve users instead of large companies. And such optimistic and pessimistic assessments come up against multiple contributions that eschew such positive/negative framings in favor of an artistically minded aesthetic engagement with what it means to treat big data as a medium for the creation of works of art. Taken together, the chapters in the book provide a wide-ranging assessment of big data, one which is grounded in larger discussions around matters such as surveillance and algorithmic bias, but which pushes readers to think of big data beyond those established frameworks.

As an edited volume, one of the major strengths of Big Data—A New Medium? is the way it brings together perspectives from such a variety of fields and specialties. As part of Routledge’s “studies in science, technology, and society” series, the volume demonstrates the sort of interdisciplinary mixing that makes STS such a vital space for discussions of the digital. Granted, this very interdisciplinary richness can be as much burden as benefit, as some readers will wish there had been slightly more representation of their particular subfield, or that the scholarly techniques of a given discipline had seen greater use. Case in point: Horsley’s contribution will be of great interest to those approaching this book from the world of libraries and archives (and information schools more generally), and some of those same readers will wish that other chapters in the book had been equally attentive to the work done by archive professionals. Similarly, those who approach the book from fields more grounded in historical techniques may wish that more of the authors had spent more time engaging with “how we got here” instead of focusing so heavily on the exploration of the present and the possible future. Of course, these are always the challenges with edited interdisciplinary volumes, and it is a major credit to Lushetich as an editor that this volume provides readers from so many different backgrounds with so much to mull over. Beyond presenting numerous perspectives on the titular question, the book is also an invitation to artists and academics to join in discussion about that titular question.

Those who are broadly interested in discussions around big data will find much of significance in this volume, and will likely find their own thinking pushed in novel directions. That being said, this book will likely be most productively read by those who are already somewhat conversant in debates around big data, the digital humanities, the arts, and STS more generally. While the contributors are consistently careful to define their terms and to reference the theorists from whom they are drawing (from Benjamin to Foucault to Baudrillard to Marx to Deleuze and Guattari, to name but a few), they couch much of their commentary in theory, and a reader of this volume will be best able to engage with these chapters if they have at least some passing familiarity with those theorists. Many of the contributors are also clearly engaging with arguments made by Shoshana Zuboff in Surveillance Capitalism, and this book can be very productively read as both critique of and complement to Zuboff’s tome. Academics in and around STS, and artists who incorporate the digital into their practice, will find that this book makes a worthwhile intervention into current discourse around big data. And though the book seems to assume a fairly academically engaged readership, it will certainly work well in graduate seminars (or advanced undergraduate classrooms)—many of the chapters will stand quite well on their own, though much of the book’s strength is in the way the chapters work in tandem.

    One of the claims that is frequently made about big data is that—for better or worse—it will allow us to see the world from a fresh perspective. And what Big Data—A New Medium? does is allow us to see big data itself from a fresh perspective.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focusses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2o Review Digital Studies section.

  • Alexander R. Galloway — Big Bro (Review of Wendy Hui Kyong Chun, Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition)

a review of Wendy Hui Kyong Chun, Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition (MIT Press, 2021)

    by Alexander R. Galloway

    I remember snickering when Chris Anderson announced “The End of Theory” in 2008. Writing in Wired magazine, Anderson claimed that the structure of knowledge had inverted. It wasn’t that models and principles revealed the facts of the world, but the reverse, that the data of the world spoke their truth unassisted. Given that data were already correlated, Anderson argued, what mattered was to extract existing structures of meaning, not to pursue some deeper cause. Anderson’s simple conclusion was that “correlation supersedes causation…correlation is enough.”

    This hypothesis — that correlation is enough — is the thorny little nexus at the heart of Wendy Chun’s new book, Discriminating Data. Chun’s topic is data analytics, a hard target that she tackles with technical sophistication and rhetorical flair. Focusing on data-driven tech like social media, search, consumer tracking, AI, and many other things, her task is to exhume the prehistory of correlation, and to show that the new epistemology of correlation is not liberating at all, but instead a kind of curse recalling the worst ghosts of the modern age. As Chun concludes, even amid the precarious fluidity of hyper-capitalism, power operates through likeness, similarity, and correlated identity.

While interleaved with a number of divergent polemics throughout, the book focuses on four main themes: correlation, discrimination, authentication, and recognition. Chun deals with these four as general problems in society and culture, but also, interestingly, as specific scientific techniques. For instance, correlation has a particular mathematical meaning, as well as a philosophical one. Discrimination is a social pathology but it’s also integral to discrete rationality. I appreciated Chun’s attention to details large and small; she’s writing about big ideas — essence, identity, love and hate, what does it mean to live together? — but she’s also engaging directly with statistics, probability, clustering algorithms, and all the minutiae of data science.

In crude terms, Chun rejects the — how best to call it — “anarcho-materialist” turn in theory, typified by someone like Gilles Deleuze, where disciplinary power gave way to distributed rhizomes, schizophrenic subjects, and irrepressible lines of flight. Chun’s theory of power isn’t so much about tessellated tapestries of desiring machines as it is about the more strictly structuralist concerns of norm and discipline, sovereign and subject, dominant and subdominant. Big tech is the mechanism through which power operates today, Chun argues. And today’s power is racist, misogynist, repressive, and exclusionary. Power doesn’t incite desire so much as stifle and discipline it. In other words George Orwell’s old grey-state villain, Big Brother, never vanished. He just migrated into a new villain, Big Bro, embodied by tech billionaires like Mark Zuckerberg or Larry Page.

    But what are the origins of this new kind of data-driven power? The reader learns that correlation and homophily, or “the notion that birds of a feather naturally flock together” (23), not only subtend contemporary social media platforms like Facebook, but were in fact originally developed by eugenicists like Francis Galton and Karl Pearson. “British eugenicists developed correlation and linear regression” (59), Chun notes dryly, before reminding us that these two techniques are at the core of today’s data science. “When correlation works, it does so by making the present and future coincide with a highly curated past” (52). Or as she puts it insightfully elsewhere, data science doesn’t so much anticipate the future, but predict the past.

    If correlation (pairing two or more pieces of data) is the first step of this new epistemological regime, it is quickly followed by some additional steps. After correlation comes discrimination, where correlated data are separated from other data (and indeed internally separated from themselves). This entails the introduction of a norm. Discriminated data are not simply data that have been paired, but measurements plotted along an axis of comparison. One data point may fall within a normal distribution, while another strays outside the norm within a zone of anomaly. Here Chun focuses on “homophily” (love of the same), writing that homophily “introduces normativity within a supposedly nonnormative system” (96).

    The third and fourth moments in Chun’s structural condition, tagged as “authenticity” and “recognition,” complete the narrative. Once groups are defined via discrimination, they are authenticated as a positive group identity, then ultimately recognized, or we could say self-recognized, by reversing the outward-facing discriminatory force into an inward-facing act of identification. It’s a complex libidinal economy that Chun patiently elaborates over four long chapters, linking these structural moments to specific technologies and techniques such as Bayes’ theorem, clustering algorithms, and facial recognition technology.

    A number of potential paths emerge in the wake of Chun’s work on correlation, which we will briefly mention in passing. One path would be toward Shane Denson’s recent volume, Discorrelated Images, on the loss of correlated experience in media aesthetics. Another would be to collide Chun’s critique of correlation in data science with Quentin Meillassoux’s critique of correlation in philosophy, notwithstanding the significant differences between their two projects.

    Correlation, discrimination, authentication, and recognition are the manifest contents of the book as it unfolds page by page. At the same time Chun puts forward a few meta arguments that span the text as a whole. The first is about difference and the second is about history. In both, Chun reveals herself as a metaphysician and moralist of the highest order.

First Chun picks up a refrain familiar to feminism and anti-racist theory, that of erasure, forgetting, and ignorance. Marginalized people are erased from the archive; women are silenced; a subject’s embodiment is ignored. Chun offers an appealing catch phrase for this operation, “hopeful ignorance.” Many people in power hope that by ignoring difference they can overcome it. Or as Chun puts it, they “assume that the best way to fight abuse and oppression is by ignoring difference and discrimination” (2). Indeed this posture has been central to political liberalism for a long time, for instance in John Rawls’ derivation of justice via a “veil of ignorance.” For Chun the attempt to find an unmarked category of subjectivity — through that frequently contested pronoun “we” — will perforce erase and exclude those structurally denied access to the universal. “[John Perry] Barlow’s ‘we’ erased so many people,” Chun notes in dismay. “McLuhan’s ‘we’ excludes most of humanity” (9, 15). This is the primary crime for Chun, forgetting or ignoring the racialized and gendered body. (In her last book, Updating to Remain the Same, Chun reprinted a parody of a well-known New Yorker cartoon bearing the caption “On the Internet, nobody knows you’re a dog.” The posture of ignorance, of “nobody knowing,” was thoroughly critiqued by Chun in that book, even as it continues to be defended by liberals.)

    Yet if the first crime against difference is to forget the mark, the second crime is to enforce it, to mince and chop people into segregated groups. After all, data is designed to discriminate, as Chun takes the better part of her book to elaborate. These are engines of difference and it’s no coincidence that Charles Babbage called his early calculating machine a “Difference Engine.” Data is designed to segregate, to cluster, to group, to split and mark people into micro identities. We might label this “bad” difference. Bad difference is when the naturally occurring multiplicity of the world is canalized into clans and cliques, leveraged for the machinations of power rather than the real experience of people.

    To complete the triad, Chun has proposed a kind of “good” difference. For Chun authentic life is rooted in difference, often found through marginalized experience. Her muse is “a world that resonates with and in difference” (3). She writes about “the needs and concerns of black women” (49). She attends to “those whom the archive seeks to forget” (237). Good difference is intersectional. Good difference attends to identity politics and the complexities of collective experience.

    Bad, bad, good — this is a triad, but not a dialectical one. Begin with 1) the bad tech posture of ignoring difference; followed by 2) the worse tech posture of specifying difference in granular detail; contrasted with 3) a good life that “resonates with and in difference.” I say “not dialectical” because the triad documents difference changing position rather than the position of difference changing (to paraphrase Catherine Malabou from her book on Changing Difference). Is bad difference resolved by good difference? How to tell the difference? For this reason I suggest we consider Discriminating Data as a moral tale — although I suspect Chun would balk at that adjective — because everything hinges on a difference between the good and the bad.

    Chun’s argument about good and bad difference is related to an argument about history, revealed through what she terms the “Transgressive Hypothesis.” I was captivated by this section of the book. It connects to a number of debates happening today in both theory and culture at large. Her argument about history has two distinct waves, and, following the contradictory convolutions of history, the second wave reverses and inverts the first.

    Loosely inspired by Michel Foucault’s Repressive Hypothesis, Chun’s Transgressive Hypothesis initially describes a shift in society and culture roughly coinciding with the Baby Boom generation in the late Twentieth Century. Let’s call it the 1968 mindset. Reacting to the oppressions of patriarchy, the grey-state threats of centralized bureaucracy, and the totalitarian menace of “Nazi eugenics and Stalinism,” liberation was found through “‘authentic transgression’” via “individualism and rebellion” (76). This was the time of the alternative, of the outsider, of the nonconformist, of the anti-authoritarian, the time of “thinking different.” Here being “alt” meant being left, albeit a new kind of left.

Chun summons a familiar reference to make her point: the Apple Macintosh advertisement from 1984 directed by Ridley Scott, in which a scary Big Brother is dethroned by a colorful lady jogger brandishing a sledgehammer. “Resist, resist, resist,” was how Chun put the mantra. “To transgress…was to be free” (76). Join the resistance, unplug, blow your mind on red pills. Indeed the existential choice from The Matrix — blue pill for a life of slavery mollified by ignorance, red pill for enlightenment and militancy tempered by mortal danger — acts as a refrain throughout Chun’s book. In sum the Transgressive Hypothesis “equated democracy with nonnormative structures and behaviors” (76). To live a good life was to transgress.

But this all changed in 1984, or thereabouts. Chun describes a “reverse hegemony” — a lovely phrase that she uses only twice — where “complaints against the ‘mainstream’ have become ‘mainstreamed’” (242). Power operates through reverse hegemony, she claims: “The point is never to be a ‘normie’ even as you form a norm” (34). These are the consequences of the rise of neoliberalism, fake corporate multiculturalism, Ronald Reagan and Margaret Thatcher but even more so Bill Clinton and Tony Blair. Think postfordism and postmodernism. Think long tails and the multiplicity of the digital economy. Think woke-washing at CIA and Spike Lee shilling cryptocurrency. Think Hypernormalization, New Spirit of Capitalism, Theory of the Young Girl, To Live and Think Like Pigs. Complaints against the mainstream have become mainstreamed. And if power today has shifted “left,” then — Reverse Hegemony Brain go brrr — resistance to power shifts “right.” A generation ago the Q Shaman would have been a leftwing nut nattering about the Kennedy assassination. But today he’s a right wing nut (alas still nattering about the Kennedy assassination).

    “Red pill toxicity” (29) is how Chun characterizes the responses to this new topsy-turvy world of reverse hegemony. (To be sure, she’s only the latest critic weighing in on the history of the present; other well-known accounts include Angela Nagle’s 2017 book Kill All Normies, and Mark Fisher’s notorious 2013 essay “Exiting the Vampire Castle.”) And if libs, hippies, and anarchists had become the new dominant, the election of Donald Trump showed that “populism, paranoia, polarization” (77) could also reemerge as a kind of throwback to the worst political ideologies of the Twentieth Century. With Trump the revolutions of history — ironically, unstoppably — return to where they began, in “the totalitarian world view” (77).

    In other words, these self-styled rebels never actually disrupted anything, according to Chun. At best they used disruption as a kind of ideological distraction for the same kinds of disciplinary management structures that have existed since time immemorial. And if Foucault showed that nineteenth-century repression also entailed an incitement to discourse, Chun describes how twentieth-century transgression also entailed a novel form of management. Before it was “you thought you were repressed but in fact you’re endlessly sublating and expressing.” Now it’s “you thought you were a rebel but disruption is a standard tactic of the Professional Managerial Class.” Or as Jacques Lacan said in response to some young agitators in his seminar, vous voulez un maître, vous l’aurez. Slavoj Žižek’s rendering, slightly embellished, best captures the gist: “As hysterics, you demand a new master. You will get it!”

    I doubt Chun would embrace the word “hysteric,” a term indelibly marked by misogyny, but I wish she would, since hysteria is crucial to her Transgressive Hypothesis. In psychoanalysis, the hysteric is the one who refuses authority, endlessly and irrationally. And bless them for that; we need more hysterics in these dark times. Yet the lesson from Lacan and Žižek is not so much that the hysteric will conjure up a new master out of thin air. In a certain sense, the lesson is the reverse, that the Big Other doesn’t exist, that Big Brother himself is a kind of hysteric, that power is the very power that refuses power.

    This position makes sense, but not completely. As a recovering Deleuzian, I am indelibly marked by a kind of antinomian political theory that defines power as already heterogeneous, unlawful, multiple, anarchic, and material. However I am also persuaded by Chun’s more classical posture, where power is a question of sovereign fiat, homogeneity, the central and the singular, the violence of the arche, which works through enclosure, normalization, and discipline. Faced with this type of power, Chun’s conclusion is, if I can compress a hefty book into a single writ, that difference will save us from normalization. In other words, while Chun is critical of the Transgressive Hypothesis, she ends up favoring the Big-Brother theory of power, where authentic alternatives escape repressive norms.

    I’ll admit it’s a seductive story. Who doesn’t want to believe in outsiders and heroes winning against oppressive villains? And the story is especially appropriate for the themes of Discriminating Data: data science of course entails norms and deviations; but also, in a less obvious way, data science inherits the old anxieties of skeptical empiricism, where the desire to make a general claim is always undercut by an inability to ground generality.

    Yet I suspect her political posture relies a bit too heavily on the first half of the Transgressive Hypothesis, the 1984 narrative of difference contra norm, even as she acknowledges the second half of the narrative where difference became a revanchist weapon for big tech (to say nothing of difference as a bona fide management style). This leads to some interesting inconsistencies. For instance Chun notes that Apple’s 1984 hammer thrower is a white woman disrupting an audience of white men. But she doesn’t say much else about her being a woman, or about the rainbow flag that ends the commercial. The Transgressive Hypothesis might be the quintessential tech bro narrative but it’s also the narrative of feminism, queerness, and the new left more generally. Chun avoids claiming that feminism failed; but she’s also savvy enough to avoid saying that it succeeded. And if Sadie Plant once wrote that “cybernetics is feminization,” for Chun it’s not so clear. According to Chun the cybernetic age of computers, data, and ubiquitous networks still orients around structures of normalization: masculine, white, straight, affluent, and able-bodied. Resistant to such regimes of normativity, Chun must nevertheless invent a way to resist those who were resisting normativity.

    Regardless, for Chun the conclusion is clear: these hysterics got their new master. If not immediately they got it eventually, via the advent of Web 2.0 and the new kind of data-centric capitalism invented in the early 2000s. Correlation isn’t enough — and that’s the reason why. Correlation means the forming of a general relation, if only the most minimal generality of two paired data points. And, worse, correlation’s generality will always derive from past power and organization rather than from a reimagining of the present. Hence correlation for Chun is a type of structural pessimism, in that it will necessarily erase and exclude those denied access to the general relation.

    In a narrative marked by poignancy and an attention to the ideological conditions of everyday life, Chun highlights alternative relations that might replace the pessimism of correlation. Such alternatives might take the form of a “potential history” or a “critical fabulation,” phrases borrowed from Ariella Azoulay and Saidiya Hartman, respectively. For Azoulay, potential history means to “‘give an account of diverse worlds that persist’”; for Hartman, critical fabulation means “to see beyond numbers and sources” (79). Though a slim offering covering only a few pages, these references to Azoulay and Hartman indicate an appealing alternative for Chun, and she ends her book where it began, with an eloquent call to acknowledge “a world that resonates with and in difference.”

    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2006), Gaming: Essays in Algorithmic Culture (University of Minnesota, 2006); The Interface Effect (Polity, 2012), Laruelle: Against the Digital (University of Minnesota, 2014), and most recently, Uncomputable: Play and Politics in the Long Digital Age (Verso, 2021).


  • Hannah Zeavin — Glasses for the Voice (Review of Jonathan Sterne, Diminished Faculties: A Political Phenomenology of Impairment)

    a review of Jonathan Sterne, Diminished Faculties: A Political Phenomenology of Impairment (Duke UP, 2022)

    by Hannah Zeavin

    Somewhere between 500,000 and over 1 million Americans, and many more people worldwide, are now living with some form of post-viral symptomatology from COVID-19—or “Long COVID.” In a pandemic first and pervasively represented by elderly death or “mild” cases no worse than the flu, there are, in reality, three real outcomes after contracting the virus, one of which is long-term illness, impairment, and disability. These “long haulers” are discovering what disability activists have long known and fought against: accommodation and access are not readily forthcoming, insurance is a nightmare, and people of color and women are much less likely to have their symptoms taken seriously enough to lead to a medical diagnosis. And medical diagnosis, if received, is fraught, too. If 1 in 4 Americans is already disabled, we have been and continue to be living through what some are calling a mass disabling event, akin to a war. This situation is not limited to the circulation of a virus and its aftermath in individual persons and bodies; it extends to the conditions past and present that have produced its lethality: capitalism and its attendants, including medical redlining, environmental racism, and settler-colonialism.

    Jonathan Sterne’s Diminished Faculties: A Political Phenomenology of Impairment arrives then just in time to complicate that history via the experience of impairment (as well as its kin experiences and identities, illness and disability). As Sterne writes, “The semantic ambiguity among impairment, disability, and illness remains a constitutive feature of all three categories. They move through the same space and bump into one another, sometimes overlapping, sometimes repelling. All three are conditioned by a divergence from medical or social norms. All three are conditioned by an ideology of ability and a preference for ability and health.” Sterne doesn’t just map the experiences of impairment; he also troubles the binary of disabled and able body/mind. By thinking about impairment and faculties, Sterne upends our received notion that we, somehow, are in control of our senses (or our minds, our limbs). Instead, some forms of impairment are accepted, even become norms, while others present as problems. Sterne’s book is about many kinds of impairment, and their intersections in subjects who are understood to be normative nonetheless or even because they’re impaired; what we think of as normal (gradual hearing loss as we work, listen to music, age) versus what is marked off as different and constitutes an unquestioned disability (e.g., childhood deafness following viral illness).

    Early in the book, Sterne quotes the disability studies adage, “you will someday join us.” This definitive book is also Sterne’s personal story of living in the matrixes of illness, impairment, and disability, in the materiality of their experience as well as the cultures that contain and produce those experiences. Rather than presenting a work at the end of learning, deleting all the traces of theorization up until the point of arrival, Sterne fully tells the story of how he “joined”: from study groups to blog posts, across changes in understanding and bodily experience. Diminished Faculties therefore provides a rigorous, moving account of the experience of the normal and the pathological, the accounted-for body both disabled and abled, and the one shoved to the margins. Sterne also offers his reader the account of impairment via a political phenomenology grounded in his own story while moving slowly and responsibly beyond it to reconceive impairment theory as a theory of labor, of media, and fundamentally, of political experience.

    Sterne is a preeminent voice in Media Studies, and the author of The Audible Past (Duke UP, 2003) and MP3: The Meaning of a Format (Duke UP, 2012). Diminished Faculties is his first book in nearly a decade, the third in a series of works that have shaped and reshaped sound studies, and the first to center his own history.

    In this way Diminished Faculties moves beyond his previous books toward auto-theory. If The Audible Past begins with the “Hello” of the telephone, Diminished Faculties takes on another, amplified greeting. In 2009, Sterne was diagnosed with an aggressive case of thyroid cancer; the surgery to remove his tumor (the size of a pomegranate, as demonstrated in a drawing from S. Lochlann Jain) paralyzed one of his two vocal cords. Normal vocal cord functioning looks like, as Sterne puts it elsewhere, “a monkey crashing cymbals”; a normative voice depends on that coordinated cooperation between halves. And as he tells us, his voice may sound better, whatever that really means, to his listener (smokey and rich) on one of his worst days. But Sterne also talks for a living—teaching and delivering research—and his voice blows out; he gets exhausted. As Sterne began vocal therapy, he started to use a personal amplification device that hangs from his neck, which he has termed his “dork-o-phone.” Staying with the example of what gets made visible as impairment, Sterne tells the story of someone coming to a house party, pointing to his chest and saying, “What the fuck is that?” Sterne replies: “Glasses for my voice.” This book tries, in part, to account for this importunate reaction, reconciling a moment of surprise or frustration or intolerance with the fact that impairment is everywhere, and tracking what that reaction does to the subject who is marked as other. As Sterne writes, “Think of all the moving parts in that scenario: a subject whose body cannot match its will; but also auditors struggling to align themselves with whatever techniques the speaker is using. Everyone is trying; nobody is quite succeeding.”

    This is one way of naming the book’s method: “think of all the moving parts.” Each of its chapters weaves disability studies, auto-theory, history of science, and media history, turning the levels up or down on any particular input and frame. Diminished Faculties ushers the reader through these interlinked hermeneutics toward a redescription of impairment in the long 20th century.

    The first chapter, “Degrees of Muteness,” offers a deep consideration of the uses of phenomenology, and its methods for describing experience, centered on Sterne’s diagnosis, surgery, and its aftermath. As Sterne writes, “this book begins with consciousness of unconsciousness (or is it unconsciousness of consciousness?)” Here he also introduces a media theory of acquired impairment, arguing that “the concept of impairment is itself also a media concept. The contemporary concept of normal hearing emerged out of the idea of communication impairments and from a very specific time and place.” He moves from this study of a phenomenology of impairment into its deployment, to consider his own voice, or voices (spoken, amplified, written, authorial). Sterne then takes the “dork-o-phone,” his personal amplification device, as an object to think with, giving us a history and experience of assistive technology and design as it interacts with other infrastructures.

    Sterne then moves from political phenomenology to breaking the normative form of a book by inserting the written guide for an imaginary exhibition, “In Search of New Vocalities.” The exhibition is accessible, designed for bodies coming from places imaginary and real, an act of care in the scene of art-going, if only in the mind. The tone of the book shifts once more for the concluding two chapters, towards something more familiar from Sterne’s earlier books, here centered more squarely in STS and disability studies.

    Chapter four is a theorization of Sterne’s identification of “aural scarification” and what he calls normal impairments. In this chapter, Sterne joins recent accounts of the built environment—and here he focuses on our sonic environment—that argue that disability itself reveals aspects of society that hurt everyone, however unevenly. Sara Hendren’s What Can a Body Do? (Riverhead, 2020) shows how the curb on the sidewalk, for example, makes city infrastructures impassable for wheelchair users—but also, say, mothers pushing strollers, travelers with suitcases, skateboarders, and so on. Add a curb cut and suddenly movement is much more possible in urban spaces for many—not just the conventionally disabled. On the other hand, sometimes access for disabled users is granted almost by accident. Sterne provides another example: closed captioning. Initially, closed captioning was resisted by major broadcast networks precisely because it was expensive and obtrusive—and would only help a small minority. Then other spaces changed, and hearing users needed to be able to see what they would otherwise listen to: in airport bars, in hospital waiting rooms, at the gym. Suddenly, D/deaf users got the captions they needed—but only because abled users wanted the same technology. Sterne calls this “crip washing”; the scholar and critic Mara Mills calls this an “assistive pretext.”

    Sterne adds to this account that we live in a physical world that is in fact designed for people who are a little bit hearing impaired. Our entire infrastructure is loud: airplanes, bathroom hand dryers, music, whether live or in ear buds. Sterne shows that it is better not to hear perfectly, and that we hear less well because we interact with this environment; being alive leads to impairment even if we start without it (“you will someday join us”). Throughout Diminished Faculties, Sterne troubles the binary of disabled and abled body/mind by putting disability into a constellation with impairment and illness. By thinking about impairment and faculties, Sterne argues that some forms of impairment are accepted, even become norms, while others are marked as problems, which distinguishes impairment as a term even as it overlaps with disability. What, then, is an impairment if we expect it, if it is normal, and if it can be disappeared through design? Why are other impairments made visible through these same processes? Considering impairment and disability as a norm is a revision that Sterne requires of his reader, broadening our working understanding of the built environment.

    The concluding chapter of the book offers a deft theory and history of fatigue and rest. Opening with theorizations of how we manage fatigue in relation to labor, from Taylorism to energy quantified by “spoons” as theorized by Christine Miserandino, Sterne moves his account of fatigue through and beyond a depletion model. He asks whether we can think of fatigue as something other than a loss, a depletion of energy. He argues that rather than a lack of energy, fatigue is a presence. Sterne reminds his reader throughout that fatigue is so difficult to capture phenomenologically precisely because, if it is too overtly present, he cannot write it down; if it is not present enough, he cannot articulate the experience of fatigue from within. In this moment, Sterne returns to political phenomenology—including its limits. There are certain experiences, extreme fatigue being one of them, that are sometimes simply not accessible in the moment of writing.

    Impairment and fatigue are both concepts from media and the mediation of the body in society, and here are richly positioned within a history of technology and from disability studies. The two commingle, as Sterne deftly shows, to produce our lived experience of the body in situ. Along the way, Sterne gives us additional experiences: an account of himself, an exhibition, and a theory to use (and a manual for how we might do it), turn to account, and even dispose of. Diminished Faculties is a lyric, genre-bending book that is forcefully argued, beautifully rendered, and sure to open paths for further research. It is deeply generous both to reader and future scholar, as Sterne’s work always is. But additionally, this is a book that so many have needed, and need now: a way of situating the present emergency in a much longer, political history.

    _____

    Hannah Zeavin teaches in the History and English Departments at UC Berkeley. She is the author of The Distance Cure: A History of Teletherapy (2021, MIT Press). Other work is forthcoming or out from differences: A Journal of Feminist Cultural Studies, Dissent, The Guardian, n+1, Technology & Culture, and elsewhere.

  • Sue Curry Jansen and Jeff Pooley — Neither Artificial nor Intelligent (review of Crawford, Atlas of AI, and Pasquale, New Laws of Robotics)

    a review of Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale UP, 2021) and Frank Pasquale, New Laws of Robotics: Defending Human Expertise in the Age of AI (Harvard UP, 2021)

    by Sue Curry Jansen and Jeff Pooley

    Artificial intelligence (AI) is a Faustian dream. Conceived in the future tense, it leads its most ardent visionaries to seek an enhanced form of intelligence that far surpasses the capacities of human brains. AI promises to transcend the messiness of embodiment, the biases of human cognition, and the limitations of mortality. Entering its eighth decade, AI is largely a science fiction, despite recent advances in machine learning. Yet it has captured the public imagination since its inception, and acquired potent ideological cachet. Robots have become AI’s humanoid faces, as well as icons of popular culture: cast as helpful companions or agents of the apocalypse.

    The transcendent vision of artificial intelligence has educated, informed, and inspired generations of scientists, military strategists, policy makers, entrepreneurs, writers, artists, filmmakers, and marketers. However, apologists have also frequently invoked AI’s authority to mystify, intimidate, and silence resistance to its vision, teleology, and deployments. Where, for example, the threat of automation once triggered labor activism, rallying opposition to an esoteric branch of computer science research that few non-specialists understand is a rhetorical non-starter. So is campaigning for alternatives to smart apps, homes, cars, cities, borders, and bombs.

    Two remarkable new books, Kate Crawford’s Atlas of AI and Frank Pasquale’s New Laws of Robotics: Defending Human Expertise in the Age of AI, provide provocative critical assessments of artificial intelligence in clear, accessible, and engaging prose. Both books have titles that could discourage novices, but they are, in fact, excellent primers for non-specialists on what is at stake in the current ascendency of AI science and ideology—especially if read in tandem.

    Crawford’s thesis—“AI is neither artificial nor intelligent”—cuts through the sci-fi hype to radically reground AI power-knowledge in material reality. Beginning with its environmental impact on planet Earth, her narrative proceeds vertically to demystify AI’s ways of seeing—its epistemology, methodology, and applications—and then to examine the roles of labor, ideology, the state, and power in the AI enterprise. She concludes with a coda on space and the astronautical illusions of digital billionaires. Pasquale takes a more horizontal approach, surveying AI in health care, education, media, law, policy, economics, war, and other domains. His attention is on the practical present—on the ethical dilemmas posed by current and near-future deployments of AI. His through line is that human judgment, backed by policy, should steer AI toward human ends.

    Despite these differences, Crawford and Pasquale converge on several critical points. First, they agree that AI models are skewed by economic and engineering values to the exclusion of other forms of knowledge and wisdom. Second, both endorse greater transparency and accountability in artificial intelligence design and practices. Third, they agree that AI datasets are skewed: Crawford focuses on how the use of natural language datasets, no matter how large, reproduces the biases of the populations they are drawn from, while Pasquale attends to designs that promote addictive engagement to optimize ad revenue. Fourth, both cite the residual effects of AI’s military origins on its logic, values, and rhetoric. Fifth, Crawford and Pasquale both recognize that AI’s futurist hype tends to obscure the real-world political and economic interests behind the screens—the market fundamentalism that models the world as an assembly line. Sixth, both emphasize the embodiment of intelligence, which encompasses tacit and muscle knowledge that cannot be fully extracted and abstracted by artificial intelligence modelers. Seventh, they both view artificial intelligence as a form of data-driven behaviorism, in the stimulus-response sense. Eighth, they acknowledge that AI and economic experts claim priority for their own views—a position they both reject.

    Crawford literally travels the world to map the topologies of computation, beginning in the lithium mines of Nevada, on to Silicon Valley, Indonesia, Malaysia, China, and Mongolia, and ending under personal surveillance outside of Jeff Bezos’ Blue Origin suborbital launch facility in West Texas. Demonstrating that AI is anything but artificial, she documents the physical toll it extracts from the environment. Contra the industry’s earth-friendly PR and marketing, the myth of clean tech, and metaphors like ‘the Cloud,’ Crawford points out that AI systems are built upon consuming finite resources that required billions of years to take form: “we are extracting Earth’s geological history to serve a split second of contemporary technological time, building devices like the Amazon Echo and iPhone that are often designed to last only a few years.” And the Cloud itself leaves behind a gigantic carbon footprint. AI data mining is not only dependent on human miners of rare minerals, but also on human labor functioning within a “registry of power” that is unequal and exploitive—where “many valuable automated systems feature a combination of underpaid digital piece workers and customers taking on unpaid tasks to make systems function,” all the while under constant surveillance.

    While there is a deskilling of human labor, there are also what Crawford calls Potemkin AI systems, which only work because of hidden human labor—Bezos himself calls such systems “artificial artificial intelligence.” AI often doesn’t work as well as the humans it replaces, as, for example, in automated telephone consumer service lines. But Crawford reminds us that AI systems scale up: customers ‘on hold’ replace legions of customer service workers in large organizations. Profits trump service. Her chapters on data and classification strip away the scientistic mystification of AI and Big Data. AI’s methodology is simply data at scale, and it is data that is biased at inception because it is collected indiscriminately, as size, not substance, counts. A dataset extracted and abstracted from a society secured in systemic racism will, for example, produce racist results. The increasing convergence of state and corporate surveillance not only undermines individual privacy, but also makes state actors reliant on technologies that they cannot fully understand as machine learning transforms them. In effect, Crawford argues, states have made a “devil’s bargain” with tech companies that they cannot control. These technologies, developed for command-and-control military and policing functions, increasingly erode the dialogic and dialectic nature of democratic commons.

    AI began as a highly subsidized public project in the early days of the Cold War. Crawford demonstrates, however, that it has been “relentlessly privatized to produce enormous financial gains for the tiny minority at the top of the extraction pyramid.” In collaboration with Alex Campolo, Crawford has described AI’s epistemological flattening of complexity as “enchanted determinism,” whereby “AI systems are seen as enchanted, beyond the known world, yet deterministic in that they discover patterns that can be applied with predictive certainty to everyday life.”[1] In some deep learning systems, even the engineers who create these systems cannot interpret them. Yet, they cannot dismiss them either. In such cases, “enchanted determinism acquires an almost theological quality,” which tends to place it beyond the critique of technological utopians and dystopians alike.

    Pasquale, for his part, examines the ethics of AI as currently deployed and often circumvented in several contexts: medicine, education, media, law, military, and the political economy of automation, in each case in relation to human wisdom. His basic premise is that “we now have the means to channel technologies of automation, rather than being captured or transformed by them.” Like Crawford, then, he recommends exercising a resistant form of agency. Pasquale’s focus is on robots as automated systems. His rhetorical point of departure is a critique and revision of Isaac Asimov’s highly influential “laws of robotics,” developed in a 1942 short story—more than a decade before AI was officially launched in 1956. Because the world and law-making are far more complex than a short story, Pasquale finds Asimov’s laws ambiguous and difficult to apply, and proposes four new ones, which become the basis of his arguments throughout the book. They are:

    1. Robotic systems and AI should complement professionals, not replace them.
    2. Robotic systems and AI should not counterfeit humanity.
    3. Robotic systems and AI should not intensify zero-sum arms races.
    4. Robotic systems and AI must always indicate the identity of their creator(s), controller(s), and owner(s).

    ‘Laws’ entail regulation, which Pasquale endorses to promote four corresponding values: complementarity, authenticity, cooperation, and attribution. The four laws’ deployment depends on a critical distinction that Pasquale draws between technologies that replace people and those that assist us in doing our jobs better. Classic definitions of AI have sought to create computers that “can sense, think, and act like humans.” Pasquale endorses an “Intelligence Augmentation” (IA) alternative. This is a crucial shift in emphasis; it is Pasquale’s own version of AI refusal.

    He acknowledges that, in the current economy, “there are economic laws that tilt the scale toward AI and against IA.” In his view, deployment of robots may, however, offer an opportunity for humanistic intervention in AI’s hegemony, because the presence of robots, unlike phones, tablets, or sensors, is physically intrusive. They are there for a purpose, which we may accept or reject at our peril, but find hard to ignore. Robots are being developed to enter fields that are already highly regulated, which offers an opportunity to shape their use in ways that conform to established legal standards of privacy and consumer protection. Pasquale is an advocate for building humane (IA) values within the technology, before robots are released into the wild.

    In each of his topical chapters, he explains how robots and other AI systems designed to advance the values of complementarity, authenticity, cooperation, and attribution might enhance human existence and community. Some chapters stand out as particularly insightful, including those on “automated media,” human judgment, and the political economy of automation. One of Pasquale’s chapters addresses important terrain that Crawford does not consider: medicine. Given past abuses by medical researchers in exploiting and/or ignoring race and gender, medical practitioners may be especially sensitive and receptive to an IA intervention, despite the formidable economic forces stacked against it. Pasquale shows, for example, how IA has amplified diagnostics in dermatology through pattern recognition, providing insight into what distinguishes malignant from benign moles.

    In our view, Pasquale’s closing chapter endorsing human wisdom, as opposed to AI, displays multiple examples of the former. But some of their impact is blunted by more diffuse discussions of literature and art, valuable though those practices may be in counter-balancing the instrumental values of economics and engineering. Nonetheless, Pasquale’s argument is an eloquent tribute to a “human form of life that is fragile, embodied in mortal flesh, time-delimited, and irreproducible in silico.”

    The two books, read together, amount to a critique of AI ideology. Pasquale and Crawford write about the stuff that phrases like “artificial intelligence” and “machine learning” refer to, but their main concern is the mystique surrounding the words themselves. Crawford is especially articulate on this theme. She shows that, as an idea, AI is self-warranting. Floating above the undersea cables and rare-earth mines—ethereal and cloud-like—the discourse makes its compelling case for the future. Her work is to cut through the cloud cover, to reveal the mines and cables.

So the idea of AI justifies even as it obscures. What Crawford and Pasquale draw out is that AI is a way of seeing the world—a lay epistemology. When we see the world through the lens of AI, we see extraction-ready data. We see countable aggregates everywhere we look. We’re always peering ahead, predicting the future with machinic probabilism. It’s the view from Palo Alto that feels like a god’s-eye view. From up there, the continents look patterned and classification-ready. Earth-bound disorder is flattened into clear signal. What AI sees, in Crawford’s phrase, is a “Linnaean order of machine-readable tables.” It is, in Pasquale’s view, an engineering mindset that prizes efficiency over human judgment.

    At the same time, as both authors show, the AI lens refracts the Cold War national security state that underwrote the technology for decades. Seeing like an AI means locating targets, assets, and anomalies. Crawford calls it a “covert philosophy of en masse infrastructural command and control,” a martial worldview etched in code.

    As Kenneth Burke observed, every way of seeing is also a way of not seeing. What AI can’t see is also its raw material: human complexity and difference. There is, in AI, a logic of commensurability—a reduction of messy and power-laden social life into “computable sameness.” So there is a connection, as both Crawford and Pasquale observe, between extraction and abstraction. The activity of everyday life is extracted into datasets that, in their bloodless tabulation, abstract away their origins. Like Marx’s workers, we are then confronted by the alienated product of our “labor”—interviewed or consoled or policed by AIs that we helped build.

Crawford and Pasquale’s excellent books offer sharp and complementary critiques of the AI fog. Where they differ is in their calls to action. Pasquale, in line with his mezzo-level focus on specific domains like education, is the reformist. His aim is to persuade a policy community that he’s part of—to clear space between do-nothing optimists and fatalist doom-sayers. At core he hopes to use law and expertise to rein in AI and robotics, deploying them more conscientiously, under human control and for human ends.

Crawford is more radical. She sees AI as a machine for boosting the power of the already powerful. She is skeptical of the movement for AI “ethics,” seeing it as insufficient at best and veering toward exculpatory window-dressing. The Atlas of AI ends with a call for a “renewed politics of refusal,” predicated on a just and solidaristic vision of the future.

    It would be easy to exaggerate Crawford and Pasquale’s differences, which reflect their projects’ scope and intended audience more than any disagreement of substance. Their shared call is to see AI for what it is. Left to follow its current course, the ideology of AI will reinforce the bars on the “iron cage” that sociologist Max Weber foresaw a century ago: incarcerating us in systems of power dedicated to efficiency, calculation, and control.

    _____

Sue Curry Jansen is Professor of Media & Communication at Muhlenberg College, in Allentown, PA. Jeff Pooley is Professor of Media & Communication at Muhlenberg, and director of mediastudies.press, a scholar-led publisher. Their co-authored essay on Shoshana Zuboff’s Surveillance Capitalism—a review of the book’s reviews—recently appeared in New Media & Society.

    Back to the essay

    _____

    Notes

    [1] Crawford acknowledges the collaboration with Campolo, her research assistant, in developing this concept and the chapter on affect, generally.

  • Tamara Kneese — Our Silicon Valley, Ourselves

    Tamara Kneese — Our Silicon Valley, Ourselves

    a review of Anna Wiener, Uncanny Valley; Joanne McNeil, Lurking; Ellen Ullman, Life in Code; Wendy Liu, Abolish Silicon Valley; Ben Tarnoff and Moira Weigel, eds., Voices from the Valley; Mary Beth Meehan and Fred Turner, Seeing Silicon Valley

    by Tamara Kneese

    “Fuck all that. I have no theory. I’ve only got a story to tell.”
    – Elizabeth Freeman, “Without You, I’m Not Necessarily Nothing”

    ~

Everyone’s eager to mine Silicon Valley for its hidden stories. In the past several years, women in or adjacent to the tech industry have published memoirs about their time there, ensconcing macrolevel critiques of Big Tech within intimate storytelling. Examples include Anna Wiener’s Uncanny Valley, Joanne McNeil’s Lurking, Ellen Ullman’s Life in Code, Susan Fowler’s Whistleblower, and Wendy Liu’s Abolish Silicon Valley, to name just a handful.[1] At the same time, recent edited volumes curate workers’ everyday lives in the ideological and geographical space that is Silicon Valley, seeking to expose the deep structural inequalities embedded in the tech industry and its reach into the surrounding region. Examples of this trend include Ben Tarnoff and Moira Weigel’s Voices from the Valley and Mary Beth Meehan and Fred Turner’s Seeing Silicon Valley, along with tech journalists’ reporting on unfair labor practices and subsequent labor organizing efforts. In both cases, personal accounts of the tech industry’s effects constitute their own form of currency.

What’s interesting about the juxtaposition of women’s first-hand accounts and collected worker interviews is how the former could fit within the much-derided and feminized “personal essay” genre while the latter is more explicitly tied to the Marxist tradition of using workers’ perspectives as an organizing catalyst, i.e. through the process of empirical cataloging and self-reflection known as workers’ inquiry.[2] In this review essay, I consider these two seemingly unrelated trends in tandem. What role can personal stories play in sparking collective movements, and does presentation matter?

    *

Memoirs of life with tech provide a glimpse of the ways that personal experiences—the good, the bad, and the ugly—are mediated by information technologies themselves as well as through their cascading effects on workplaces and social worlds. They provide an antidote to early cyberlibertarian screeds, imbued with dreams of escaping fleshly, earthly drudgery, like John Perry Barlow’s “A Declaration of the Independence of Cyberspace”: “Our identities have no bodies, so, unlike you, we cannot obtain order by physical coercion.” But in femme accounts of life in code, embodiment is inescapable. As much as the sterile efficiencies of automation would do away with the body’s messiness, the body rears its head with a vengeance. In a short post, one startup co-founder, Tracy Young, recounts attempting to neutralize her feminine-coded body with plain clothes and a stoic demeanor, persevering through pregnancy, childbirth, and painful breastfeeding, and eventually hiding her miscarriage from her colleagues. Young reveals these details to point to the need for structural changes within the tech industry, which is still male-dominated, especially in the upper rungs. But for Young, capitalism is not the problem. Tech is redeemable through DEI initiatives that might better accommodate women’s bodies and needs. On the other end of the spectrum, pregnant Amazon warehouse workers suffer miscarriages when their managers refuse to follow doctors’ recommendations and compel pregnant workers to lift heavy boxes or prevent them from taking bathroom and water breaks. These experiences lie on disparate ends of the scale, but reflect the larger problems of patriarchy and racial capitalism in tech and beyond. It is unclear if this sliver of common ground can hope to bridge such a gulf of privilege.

Sexual harassment, workplace misogyny, pregnancy discrimination: these grievances come up again and again within femme tech memoirs, even the ones that don’t at face value seem political. At first glance, Joanne McNeil’s Lurking: How a Person Became a User is not at all about labor. Her memoir is to some extent a celebration of the early internet, at times falling into the trap of nostalgia—the pleasure of the internet being “a place,” and the greater degree of flexibility and play afforded by usernames as opposed to real-name policies. “Once I spoke freely and shared my dreams with strangers. Then the real world fastened itself to my digital life…My idle youth online largely—thankfully—evaporated in the sun, but more recent-ish old posts breeze along, colliding with and confusing new image of myself that I try to construct” (McNeil 2020, 8-9). Building on earlier feminist critiques of techno-utopian libertarianism, such as Paulina Borsook’s Cyberselfish (2000), McNeil argues that the early web allowed people to be lurkers, rather than users, even if the disembodied libertarian imaginaries attached to cyberspace never panned out. With coerced participation and the alignment of actual identities with online profiles, the shift to “the user” reflects the enclosure of the web and the growth of tech corporations, monetization, and ad tech. The beauty of being a lurker was the space to work out the self in relation to communities and to bear witness to these experimental relationships. As McNeil puts it, in her discussion of Friendster, “What happened between <form> and </form> was self-portraiture” (McNeil 2020, 90). McNeil references the many early internet communities, like Echo, LatinoLink, and Café los Negroes, which helped queer, Black, and Latinx relationships flourish in connection with locally situated subcultures.

In a brief moment, while reflecting on the New York media world built around websites like Gawker, McNeil ties platformization to her experiences as a journalist, a producer of knowledge about the tech industry: “A few years ago, when I was a contractor at a traffic-driven online magazine, I complained to a technologist friend about the pressure I was under to deliver page views above a certain threshold” (McNeil 2020, 138). McNeil, who comes from a working-class background, has had in adulthood the kind of work experiences Silicon Valley tends to make invisible, including call center work and work as a receptionist. As a journalist, even as a contractor, she was expected to amass thousands of Twitter followers. Because she lacked a large following, she relied on the publication itself to promote her work. She was eventually let go from the job. “My influence, or lack thereof, impacted my livelihood” (McNeil 2020, 139). This simply stated phrase reveals how McNeil’s critique of Big Tech is ultimately not only about users’ free labor and the extraction of profit from social relationships, but about how platform metrics are making people’s jobs worse.

    Labor practices emerge in McNeil’s narrative at several other points, in reference to Google’s internal caste system and the endemic problem of sexual harassment within the industry. In a discussion of Andrew Norman Wilson’s influential Workers Leaving the Googleplex video (2011), which made clear to viewers the sharp divisions within the Google workforce, McNeil notes that Google still needs these blue-collar workers, like janitors, security guards, and cafeteria staff, even if the company has rendered them largely invisible. But what is the purpose of making these so-called hidden laborers of tech visible, and for whom are they being rendered visible in the first place?[3] If you have ever been on a tech campus, you can’t miss ‘em. They’re right fucking there! If the hierarchies within tech are now more popularly acknowledged, then what? And are McNeil’s experiences as a white-collar tech journalist at all related to these other people’s stories, which often provide the scaffolding for tech reporters’ narratives?

    *

    Other tech memoirs more concretely focus on navigating tech workplaces from a femme perspective. Long-form attention to the matter creates more space for self-reflection and recognition on the part of the reader. In 2016, Anna Wiener’s n+1 essay, “Uncanny Valley,” went viral because it hit a nerve. Wiener presented an overtly gendered story—about body anxiety and tenuous friendship—told through one woman’s time in the world of startups before the majority of the public had caught wind of the downside of digital platforms and their stranglehold on life, work, and politics. Later, Wiener would write a monograph-length version of the story with the same title, detailing her experiences as a “non-technical” woman in tech: “I’d never been in a room with so few women, so much money, and so many people chomping at the bit to get a taste” (Wiener 2020, 61). In conversation with computer science academics and engineers, her skepticism about the feasibility of self-driving cars isn’t taken seriously because she is a woman who works in customer support. Wiener describes herself as being taken in by the promises and material culture of the industry: a certain cashmere sweater and overall look, wellness tinctures, EDM, and Burning Man at the same time she navigates taxicab gropings on work trips and inappropriate comments about “sensual” Jewish women at the office. Given the Protestant Work Ethic-tinged individualism of her workplace, she offers little in the way of solidarity. When her friend Noah is fired after writing a terse memo, she and the rest of the workers at the startup fail to stand up to their boss. She laments, “Maybe we never were a family. We knew we had never been a family,” questioning the common myth that corporations are like kin (Wiener 2020, 113). Near the end of her memoir, Wiener wrestles with the fact that GamerGate, and later the election of Trump, do not bring the reckoning she once thought was coming. The tech industry continues on as before.

Wiener is in many respects reminiscent of another erudite, Jewish, New York City-to-San Francisco transplant, Ellen Ullman. Ullman published an account of her life as a woman programmer, Close to the Machine: Technophilia and Its Discontents, in 1997, amid the dotcom boom, when tech criticism was less fashionable. Ullman writes about “tantric, algorithmic” (1997, 49) sex with a fellow programmer and the erotics of coding itself, flirting with the romance novel genre. She critiques the sexism and user-disregard in tech (she is building a system for AIDS patients and their providers, but the programmers are rarely confronted with the fleshly existence of their end-users). Her background as a communist, along with her guilt about her awkward class position as an owner and landlord of a building in the Wall Street district, also comes through in the memoir: at one point she quips, “And who was Karl Marx but the original technophile?” (Ullman 1997, 29). Ullman presciently sees remote, contracted tech workers, including globally situated call center workers, as canaries in the coal mine. As she puts it, “In this sense, we virtual workers are everyone’s future. We wander from job to job, and now it’s hard for anyone to stay put anymore. Our job commitments are contractual, contingent, impermanent, and this model of insecure life is spreading outward from us” (Ullman 1997, 146). Even for a privileged techie like Ullman, the supposedly hidden global underclass of tech was not so hidden after all.

Ullman’s Life in Code: A Personal History of Technology, a collection of essays published twenty years later in 2017, reflects a growing desire to view the world of startups, major tech companies, and life in the Bay Area through the lens of women’s unique experiences. A 1998 essay included in Life in Code reveals Ullman’s distrust of what the internet might become: “I fear for the world the internet is creating. Before the advent of the Web, if you wanted to sustain a belief in far-fetched ideas, you had to go out into the desert, or live on a compound in the mountains, or move from one badly furnished room to another in a series of safe houses” (Ullman 2017, 89). Ullman at various points refers to the toxic dynamics of technoculture, the way that engineers make offhand sexist, racist remarks during their workplace interactions. In other words, critics like Ullman had been around for decades, but her voice, and voices like hers, carried more weight in 2017 than in 1997. Following in Ullman’s footsteps, Wiener’s contribution came at just the right time.

    I appreciate Sharrona Pearl’s excellent review of Wiener’s Uncanny Valley in this publication, and her critique of the book’s political intentions (or lack thereof) and privileged perspective. When it comes to accounts of the self as political forces, Emma Goldman’s Living My Life it is not. But some larger questions remain: why did so many readers find Wiener’s personal narrative compelling, and how might we relate its popularity to a larger cultural shift in how stories about technology are told?

Another woman’s memoir of a life in tech offers one possible answer. Wendy Liu started as a computer science major at a prestigious university, worked as a Google intern, and co-founded a startup, not an uncommon trajectory for a particular class of tech worker. Her candid memoir of her transformation from tech evangelist to socialist tech critic, Abolish Silicon Valley, references Wiener’s “Uncanny Valley” essay. Wiener’s account resonated with Liu, even as a software engineer who viewed herself as separate from the non-technical women around her—the marketers, program managers, and technical writers. Liu is open about the ways that ideologies around meritocracy and individual success color her trajectory: she viewed GamerGate as an opportunity to test out her company’s tech capabilities and idolized men like Elon Musk and Paul Graham. Hard work always pays off, and working 80 hours a week is a means to an end. Sometimes you have to dance with the devil: for example, Liu’s startup at one point considers working for the Republican Party. Despite her seeming belief in the tech industry’s alignment with the social good, Liu has doubts. When Liu first encounters Wiener’s essay, she wryly notes that she thought n+1 might be a tech magazine, given its math-y name. Once she reads it, “The words cut like a knife through my gradually waning hopes, and I wanted to sink into an ocean of this writing” (Liu 2020, 111). Liu goes on to read hundreds of leftist books and undergo a political awakening in London. While Wiener’s memoir is intensely personal, not overtly about a collective politics, it still ignites something in Liu’s consciousness, becoming enfolded into her own account of her disillusionment with the tech industry and capitalism as a whole. Liu also refers to Tech Against Trump, published by Logic Magazine in 2017, which featured “stories from fellow tech workers who were startled into caring about politics because of Trump” (Liu 2020, 150). Liu was not alone in her awakening, and it was first-hand accounts by fellow tech workers that got her and many others to question their relationship to the system.

Indeed, before Liu published her abolitionist memoir, she published a short essay for a UK-based Marxist publication, Notes from Below, titled “Silicon Inquiry,” applying the time-honored Marxist practice of workers’ inquiry to her own experiences as a white-collar coder. She writes, “I’ve lost my faith in the industry, and with it, any desire to remain within it. All the perks in the world can’t make up for what tech has become: morally destitute, mired in egotism and self-delusion, an aborted promise of what it could have been. Now that I realise this, I can’t go back.” She describes her trajectory from 12-year-old tinkerer, to computer science major, to Google intern, where she begins to sense that something is wrong and unfulfilling about her work: “In Marxist terms, I was alienated from my labour: forced to think about a problem I didn’t personally have a stake in, in a very typically corporate environment that drained all the motivation out of me.” When she turns away from Google to enter the world of startups, she is trapped by the ideology of faking it until you make it. She and her co-founders work long hours, technically for themselves, but without achieving anything tangible. Liu begins to notice the marginalized workers who comprise a major part of the tech industry, not only ride-hail drivers and delivery workers, but the cafeteria staff and janitors who work on tech campuses. The bifurcated workforce makes it difficult for workers to organize; the ones at the top are loyal to management, while those at the bottom of the hierarchy are afraid of losing their jobs if they speak out.

Towards the end of her memoir, Liu describes joining a picket line of largely Chinese-American women who are cleaners for Marriott Hotels. This action is happening at the same time as the 2018 Google Walkout, during which white-collar tech workers organized against sexual harassment and subsequent retaliation at the company. Liu draws a connection between both kinds of workers, protesting in the same general place: “On the surface, you would think Google engineers and Marriott hotel cleaners couldn’t be more different. And yet, one key component of the hotel workers’ union dispute was the prevalence of sexual harassment in the workplace…The specifics might be different, but the same underlying problems existed at both companies” (Liu 2020, 158). She sees that TVCs (temps, vendors, and contractors) share grievances with their full-time counterparts, especially when it comes to issues over visas, sexual harassment, and entrenched racism. The trick for organizers is to inspire a sense of solidarity and connection among workers who, on the surface, have little in common. Liu explicitly connects the experiences of more white-collar tech workers like herself and marginalized workers within the tech industry and beyond. Her memoir is not merely a personal reflection, but a call to action: individual refusal, like deleting Facebook or Uber, is not sufficient, and transforming the tech industry is necessarily a collective endeavor. Her abolitionist memoir connects tech journalism’s use of workplace grievances and a first-hand account from the coder class, finding common ground in the hopes of sparking structural change. Memoirs like these may act as a kind of connective tissue, bridging disparate experiences of life in and through technology.

    *

    Another approach to personal accounts of tech takes a different tack: Rather than one long-form, first-hand account, cobble together many perspectives to get a sense of contrasts and potential spaces of overlap. Collections of workers’ perspectives have a long leftist history. For decades, anarchists, socialists, and other social reformers have gathered oral histories and published these personal accounts as part of a larger political project (see: Avrich 1995; Buhle and Kelley 1989; Kaplan and Shapiro 1998; Lynd and Lynd 1973). Two new edited collections focus on aggregated workers’ stories to highlight the diversity of people who live and work in Silicon Valley, from Iranian-American Google engineers to Mexican-American food truck owners. The concept of “Silicon Valley,” like “tech industry,” tends to obscure the lived experiences of ordinary individuals, reflecting more of a fantasy than a real place.

Mary Beth Meehan and Fred Turner’s Seeing Silicon Valley follows the leftist photography tradition (think Lewis Hine or Dorothea Lange) of capturing working-class people in their everyday struggles. Based on a six-week Airbnb stay in the area, Meehan’s images are arresting, spotlighting the disparity within Santa Clara Valley through a humanistic lens, while Turner’s historically informed introduction and short essays provide a narrative through which to read the images. Silicon Valley is “a mirror of America itself. In that sense, it really is a city on a hill for our time” (Meehan and Turner 2021, 8). Through their presentation of life and work in Silicon Valley, Turner and Meehan push back against stereotypical, ahistorical visions of what Silicon Valley is. As Turner puts it, “The workers of Silicon Valley rarely look like the men idealized in its lore” (Meehan and Turner 2021, 7). Turner’s introduction critiques the rampant economic and racial inequality that exists in the Valley, and the United States as a whole, which is borne out in the later vignettes. Unhoused people, some of whom work for major tech companies in Mountain View, live in vans despite having degrees from Stanford. People are living with the repercussions of Superfund sites, hazardous jobs, and displacement. Several interviewees reference union campaigns, such as organizing around workplace injuries at the Tesla plant or contract security guards unionizing at Facebook, and their stories are accompanied by images of Silicon Valley Rising protest signs from an action in San Jose. Aside from an occasional direct quote, the narratives about the workers are truncated and editorialized. As the title would indicate, the book is above all a visual representation of life in Silicon Valley as a window into contemporary life in the US. Saturated colors and glossy pages make for a perfect coffee table object, and one can imagine the images and text at home in a gallery space. To some degree, it is a stealth operation, and the book’s aesthetic qualities belie the sometimes difficult stories contained within, but the book’s intended audience is more academic than revolutionary. Who at this point doesn’t believe that there are poor people in “Silicon Valley,” or that “tech labor” obscures what is more often than not racialized, gendered, embodied, and precarious forms of work?

A second volume takes a different approach, focusing instead on the stories of individual tech workers. Ben Tarnoff and Moira Weigel, co-founders of Logic Magazine, co-edited Voices from the Valley as part of their larger Logic brand’s partnership series with FSG Originals. The sharply packaged volume includes anonymous accounts from venture capitalist bros as well as from subcontracted massage workers, rendering visible the “people behind the platform” in a secretive industry full of NDAs (Tarnoff and Weigel 2020, 3). As the book’s title suggests, the interviews are edited back-and-forths with a wide range of workers within the industry, emphasizing their unique perspectives. The subtitle promises “Tech Workers Talk About What They Do—And How They Do It.” This is a clear nod to Studs Terkel’s 1974 epic collection of over one hundred workers’ stories, Working: People Talk About What They Do All Day and How They Feel About What They Do, in which he similarly categorizes workers according to job description, from gravedigger to flight attendant. Terkel frames each interview and provides a description of the workers’ living conditions or other personal details, but for the most part, the workers speak on their own terms. In Tarnoff and Weigel’s contribution, we as readers hear from workers directly, although we do catch a glimpse of the interview prompts that drove the conversations. The editors also provide short essays introducing each “voice,” contextualizing their position. Workers’ voices are there, to be sure, but they are also trimmed to match Logic’s aesthetic. Reviews of the book, even in leftist magazines like Jacobin, tend to focus as much on the (admittedly formidable) husband-and-wife editor duo as they do on the stories of the workers themselves. Even so, Tarnoff and Weigel emphasize the political salience of their project in their introduction, arguing that “Silicon Valley is now everywhere” (2020, 7) as “tech is a layer of every industry” (2020, 8). They end their introduction with a call to the reader to “Speak, whoever you are. Your voice is in the Valley, too” (Tarnoff and Weigel 2020, 8).

As in Meehan and Turner’s visually oriented book, Tarnoff and Weigel’s interviews point to the ways that badge color as class marker, along with gender, immigration status, disability, and race, affects people’s experiences on the job. Much like Meehan and Turner’s intervention, the book gives equal space to the most elite voices as it does to those on the margins, spanning the entire breadth of the tech industry. There are scattered examples of activism, like white-collar organizing campaigns against Google’s Dragonfly and other #TechWontBuildIt manifestations. At one point, the individual known as “The Cook” names the Tech Workers Coalition. TWC volunteers were “computer techie hacker cool” and showed up to meetings or even union negotiations in solidarity with their subcontracted coworkers. The Cook notes that TWC thinks “everybody working for a tech company should be part of that company, in one sense or another” (Tarnoff and Weigel 2020, 68). There is an asterisk with a shorthand description of TWC, which has become something of a floating signifier of the tech workers’ movement. The international tech worker labor movement encompasses not only white-collar coders, but gig and warehouse workers, who are absent here. With only seven interviews included, the volume cannot address every perspective. Because the interviews with workers are abbreviated and punctuated by punchy subheadings, it can be hard to tell whose voices are really being heard. Is it the workers of Silicon Valley, or is it the editors? As with Meehan and Turner’s effort, the end result is largely a view from above, not within. Which isn’t to say there isn’t a place for this kind of aggregation, or that it can’t connect to organizing efforts, but is this volume more of a political work than Wiener’s or Ullman’s memoirs?

    In other interviews, workers reveal gendered workplace discrimination and other grievances that might prompt collective action. The person identified as “The Technical Writer” describes being terminated from her job after her boss suspects her pregnancy. (He eliminates the position instead of directly firing her, making it harder for her to prove pregnancy discrimination). She decides not to pursue a lawsuit because, as she puts it, “Tech is actually kind of a small industry. You don’t want to be the woman who’s not easy to work with” (Tarnoff and Weigel 2020, 46). After being terminated, she finds work as a remote contractor, which allows her to earn an income while caring for her newborn and other young child. She describes the systemic misogyny in tech that leads to women in non-technical roles being seen as less valuable and maternity leave factoring into women’s lower salaries. But she laments the way that tech journalism tends to portray women as the objects, not the subjects of stories, turning them into victims and focusing narratives on bad actors like James Damore, who penned the infamous Google memo against diversity in tech. Sensationalized stories of harassment and discrimination are meant to tug at the heartstrings, but workers’ agency is often missing in these narratives. In another striking interview, “The Massage Therapist,” who is a subcontracted worker within a large tech campus environment, says that despite beleaguered cafeteria workers needing massages more than coders, she was prohibited from treating anyone who wasn’t a full-time employee. The young women working there seemed sad and too stressed to make time for their massages.

    These personal but minor insights are often missing from popular narratives or journalistic accounts and so their value is readily apparent. The question then becomes, how do both personal memoirs and these shorter, aggregated collections of stories translate into changing collective class consciousness? What happens after the hidden stories of Silicon Valley are revealed? Is an awareness of mutual fuckedness enough to form a coalition?[4]

    *

A first step might be to recognize the political power of the personal essay or memoir, rather than discounting the genre as a whole. Critiques of the personal essay are certainly not new; Virginia Woolf herself decried the genre’s “unclothed egoism.” Writing for The New Yorker in 2017, Jia Tolentino declared the death of the personal essay. For a time, the personal essay was everywhere: sites like The Awl, Jezebel, The Hairpin, and The Toast centered women’s stories of body horror, sex, work, pain, adversity, and, sometimes, rape. In an instant, the personal essay was apparently over, just as white supremacy and misogyny seemed resurgent. With the rise of Trumpism and the related techlash, personal stories were replaced with more concretely political takes. Personal essays are despised largely because they are written by and for women. Tolentino traces some of the anti-personal essay discourse to Emily Gould’s big personal reveal in The New York Times Magazine, foregrounding her perspective as a woman on the internet in the age of Gawker. In a 2020 essay in The Cut revisiting her Gawker shame and fame, Gould writes, “What the job did have, and what made me blind to everything it didn’t, was exposure. Every person who read the site knew my name, and in 2007, that was a lot of people. They emailed me and chatted with me and commented at me. Overnight, I had thousands of new friends and enemies, and at first that felt exhilarating, like being at a party all the time.” Gould describes her humiliation when a video of her fellating a plastic dildo at work goes viral on YouTube, likely uploaded by her boss, Nick Denton. After watching the infamous 2016 presidential debate, when Donald Trump creepily hovered behind Hillary Clinton, Gould’s body registers recognition, prompting a visit to her gynecologist, who tells her that her body is responding to past trauma:

    I once believed that the truth would set us free — specifically, that women’s first-person writing would “create more truth” around itself. This is what I believed when I published my first book, a memoir. And I must have still believed it when I began publishing other women’s books, too. I believed that I would become free from shame by normalizing what happened to me, by naming it and encouraging others to name it too. How, then, to explain why, at the exact same moment when first-person art by women is more culturally ascendant and embraced than it has ever been in my lifetime, the most rapacious, damaging forms of structural sexism are also on the rise?

    Gould has understandably lost her faith that women’s stories, no matter how much attention they receive, will overturn structural sexism. But what if the personal essay is, in fact, a site of praxis? Wiener, McNeil, Liu, and Ullman’s contributions are, to various extents, political works because they highlight experiences that are so often missing from mainstream tech narratives. Their power derives from their long-form personal accounts, which touch not only on work but also on relationships, family, and personal histories. Just as much as the more overtly political edited volumes or oral histories, individual perspectives also align with the Marxist practice of workers’ inquiry. Liu’s memoir, in particular, brings this connection to light. Which stories are seen as true workers’ inquiry, part of leftist praxis, and which are deemed too personal, or too femme, to be truly political? When it comes to gathering and publishing workers’ stories, who is doing the collecting and for what purpose? As theorists like Nancy Fraser (2013) caution, too often feminist storytelling under the guise of empowerment, even in cases like the Google Walkout, can be enfolded back into neoliberalism. For instance, the cries of “This is what Googley looks like!” heard during the protest reinforced the company’s hallmark metric of belonging even as they reinterpreted it.

    As Asad Haider and Salar Mohandesi note in their detailed history of workers’ inquiry for Viewpoint Magazine, Marx’s original vision for workers’ inquiry was never quite executed. His was a very empirical project, involving 101 questions about shop conditions, descriptions of fellow workers, and strikes or other organizing activities. Marx’s point was that organizers must look to the working class itself to change its own working conditions. Workers’ inquiry is a process of recognition, whereby reading someone else’s account of their grievances leads to a kind of mutual understanding. Over time and in different geographic contexts, from France and Italy to the United States, workers’ inquiry has entailed different approaches and end goals. Beyond the industrial factory worker, Black feminist socialists like Selma James gathered women’s experiences: “A Woman’s Place discussed the role of housework, the value of reproductive labor, and the organizations autonomously invented by women in the course of their struggle.” The politics of attribution were tricky, and there were often tensions between academic research and political action. James published her account under a pen name. At other times, multi-authored and co-edited works were portrayed as one person’s memoir. But the point was to take the singular experience and to have it extend outward into the collective. As Haider and Mohandesi put it,

    If, however, the objective is to build class consciousness, then the distortions of the narrative form are not problems at all. They might actually be quite necessary. With these narratives, the tension in Marx’s workers’ inquiry – between a research tool on the one hand, and a form of agitation on the other – is largely resolved by subordinating the former to the latter, transforming inquiry into a means to the end of consciousness-building.

    The personal has always been political. Few would argue that Audre Lorde’s deeply personal Cancer Journals is not also a political work. And Peter Kropotkin’s memoir accounting for his revolutionary life begins with his memory of his mother’s death. The consciousness-raising and knowledge-sharing of 1970s feminist projects like Our Bodies, Ourselves, the queer liberation movement, disability activism, and the Black Power movement related individual experiences to broader social justice struggles. Oral histories accounting for the individual lives of ethnic minority leftists in the US, like Paul Avrich’s Anarchist Voices, Judy Kaplan and Linn Shapiro’s Red Diapers, and Michael Keith Honey’s Black Workers Remember, perform a similar kind of work. If Voices from the Valley and Seeing Silicon Valley are potentially valuable as political tools, then first-person accounts of life in tech should be seen as another fist in the same fight. There is an undeniable power attached to hearing workers’ stories in their own words, and movements can emerge from the unlikeliest sources.

    EDIT (8/6/2021): a sentence was added to correctly describe Joanne McNeil’s background and work history.
    _____

    Tamara Kneese is an Assistant Professor of Media Studies and Director of Gender and Sexualities Studies at the University of San Francisco. Her first book on digital death care practices, Death Glitch, is forthcoming with Yale University Press. She is also the co-editor of The New Death (forthcoming Spring 2022, School for Advanced Research/University of New Mexico Press).

    Back to the essay

    _____

    Notes

    [1] I would include Kate Losse’s early, biting critique The Boy Kings, published in 2012, in this category. Losse was Facebook employee #51 and exposed the ways that nontechnical women, even those with PhDs, were marginalized by Zuckerberg and others in the company.

    [2] Workers’ inquiry combines research with organizing, constituting a process by which workers themselves produce knowledge about their own circumstances and use that knowledge as part of their labor organizing.

    [3] Noopur Raval (2021) questions the “invisibility” narratives within popular tech criticism, including Voices from the Valley and Seeing Silicon Valley, arguing that ghost laborers are not so ghostly to those living in the Global South.

    [4] With apologies to Fred Moten. See The Undercommons (2013).
    _____

    Works Cited

    • Paul Avrich. Anarchist Voices: An Oral History of Anarchism in the United States. Princeton, NJ: Princeton University Press, 1995.
    • Paulina Borsook. Cyberselfish: A Critical Romp Through the Terribly Libertarian Culture of High Tech. New York: Public Affairs, 2000.
    • Paul Buhle and Robin D. G. Kelley. “The Oral History of the Left in the United States: A Survey and Interpretation.” The Journal of American History 76, no. 2 (1989): 537-50. doi:10.2307/1907991.
    • Susan Fowler. Whistleblower: My Journey to Silicon Valley and Fight for Justice at Uber. New York: Penguin Books, 2020.
    • Nancy Fraser. Fortunes of Feminism: From State-Managed Capitalism to Neoliberal Crisis. New York: Verso, 2013.
    • Emma Goldman. Living My Life. New York: Alfred A. Knopf, 1931.
    • Emily Gould. “Exposed.” The New York Times Magazine, May 25, 2008, https://www.nytimes.com/2008/05/25/magazine/25internet-t.html.
    • Emily Gould. “Replaying My Shame.” The Cut, February 26, 2020. https://www.thecut.com/2020/02/emily-gould-gawker-shame.html
    • Asad Haider and Salar Mohandesi. “Workers’ Inquiry: A Genealogy.” Viewpoint Magazine, September 27, 2013, https://viewpointmag.com/2013/09/27/workers-inquiry-a-genealogy/.
    • Michael Keith Honey. Black Workers Remember: An Oral History of Segregation, Unionism, and the Freedom Struggle. Oakland: University of California Press, 2002.
    • Judy Kaplan and Linn Shapiro. Red Diapers: Growing Up in the Communist Left. Champaign, IL: University of Illinois Press, 1998.
    • Peter Kropotkin. Memoirs of a Revolutionist. Boston: Houghton Mifflin, 1899.
    • Wendy Liu. Abolish Silicon Valley: How to Liberate Technology from Capitalism. London: Repeater Books, 2020.
    • Wendy Liu. “Silicon Inquiry.” Notes From Below, January 29, 2018, https://notesfrombelow.org/article/silicon-inquiry.
    • Audre Lorde. The Cancer Journals. San Francisco: Aunt Lute Books, 1980.
    • Katherine Losse. The Boy Kings: A Journey Into the Heart of the Social Network. New York: Simon & Schuster, 2012.
    • Alice Lynd and Staughton Lynd. Rank and File: Personal Histories by Working-Class Organizers. New York: Monthly Review Press, 1973.
    • Joanne McNeil. Lurking: How a Person Became a User. New York: MCD/Farrar, Straus and Giroux, 2020.
    • Mary Beth Meehan and Fred Turner. Seeing Silicon Valley: Life Inside a Fraying America. Chicago: University of Chicago Press, 2021.
    • Fred Moten and Stefano Harney. The Undercommons: Fugitive Planning & Black Study. New York: Minor Compositions, 2013.
    • Noopur Raval. “Interrupting Invisibility in a Global World.” ACM Interactions, July/August 2021, https://interactions.acm.org/archive/view/july-august-2021/interrupting-invisibility-in-a-global-world.
    • Ben Tarnoff and Moira Weigel. Voices from the Valley: Tech Workers Talk about What They Do—and How They Do It. New York: FSG Originals x Logic, 2020.
    • Studs Terkel. Working: People Talk About What They Do All Day and How They Feel About What They Do. New York: Pantheon Books, 1974.
    • Jia Tolentino. “The Personal-Essay Boom Is Over.” The New Yorker, May 18, 2017, https://www.newyorker.com/culture/jia-tolentino/the-personal-essay-boom-is-over.
    • Ellen Ullman. Close to the Machine: Technophilia and Its Discontents.  New York: Picador/Farrar, Straus and Giroux, 1997.
    • Ellen Ullman. Life in Code: A Personal History of Technology. New York: MCD/Farrar, Straus and Giroux, 2017.
    • Anna Wiener. “Uncanny Valley.” n+1, Spring 2016: Slow Burn, https://nplusonemag.com/issue-25/on-the-fringe/uncanny-valley/.
    • Anna Wiener. Uncanny Valley: A Memoir. New York: MCD/Farrar, Straus and Giroux, 2020.
  • Sharrona Pearl — In the Shadow of the Valley (Review of Anna Wiener, Uncanny Valley)

    Sharrona Pearl — In the Shadow of the Valley (Review of Anna Wiener, Uncanny Valley)

    a review of Anna Wiener, Uncanny Valley: A Memoir (Macmillan, 2020)

    by Sharrona Pearl

    ~

    Uncanny Valley, the latest, very well-publicized memoir of Silicon Valley apostasy, is, for sure, a great read. Anna Wiener writes beautiful words that become sentences that become beautiful paragraphs and beautiful chapters. The descriptions are finely wrought, and if not quite cinematic then very, very visceral. While it is a wry and tense and sometimes stressful story, it’s also exactly what it says it is: a memoir. It’s the story of her experiences. It captures a zeitgeist – beautifully, and with nuance and verve and life. It highlights contradictions and complications and confusions: hers, but also of Silicon Valley culture itself. It muses upon them, and worries them, and worries over them. But it doesn’t analyze them and it certainly doesn’t solve them, even if you get the sense that Wiener would quite like to do so. That’s okay. Solving the problems exposed by Silicon Valley tech culture and tech capitalism is quite a big ask.

    Wiener’s memoir tells the story of her accidental immersion into, and gradual (too gradual?) estrangement from, essentially, Big Tech. A newly minted graduate of a prestigious small liberal arts college (of course), Wiener was living in Brooklyn (of course) while working as an underpaid assistant in a small literary agency (of course). “Privileged and downwardly mobile,” as she puts it, Wiener was just about getting by with some extra help from her parents, embracing being perpetually broke as she party-hopped and engaged in some light drug use while rolling her eyes at all the IKEA furniture. In as clear a portrait of Brooklyn as anything could be, Wiener’s friends spent 2013 making sourdough bread near artisan chocolate shops while talking on their ironic flip phones. World-weary at 24, Wiener decides to shake things up and applies for a job at a Manhattan-based ebook startup. It’s still about books, she rationalizes, so the startup part is almost beside the point. Or maybe, because it’s still about books, the tech itself can be used for good. Of course, neither of these things turns out to be true for either this startup or tech itself. Wiener quickly discovers (and so do her bosses) that she’s just not the right fit. So she applies for another tech job instead. This time in the Bay Area. Why not? She’d gotten a heady dose of the optimism and opportunity of startup culture, and they offered her a great salary. It was a good decision, a smart and responsible and exciting decision, even as she was sad to leave the books behind. But honestly, she’d done that the second she joined the first startup. And in a way, the entire memoir is Wiener figuring that out.

    Maybe Wiener’s privilege (alongside generational resources and whiteness) is living in a world where you don’t have to worry about Silicon Valley even as it permeates everything.  She and her friends were being willfully ignorant in Brooklyn; it turns out, as Wiener deftly shows us, you can be willfully ignorant from the heart of Silicon Valley too.  Wiener lands a job at one startup and then, at some point, takes a pay cut to work at another whose culture is a better fit.  “Culture” does a lot of work here to elide sexism, harassment, surveillance, and violation of privacy.  To put it another way: bad stuff is going on around Wiener, at the very companies she works for, and she doesn’t really notice or pay attention…so we shouldn’t either.  Even though she narrates these numerous and terrible violations clearly and explicitly, we don’t exactly clock them because they aren’t a surprise.  We already knew.  We don’t care.  Or we already did the caring part and we’ve moved on.

    If 2013 feels both too early and too late for sourdough (weren’t people making bread in the 1950s because they had to?  And in 2020 because of COVID?) that’s a bit like the book itself.  Surely the moment for Silicon Valley Seduction and Cessation was the early 2000s?  And surely our disillusionment from the surveillance of Big Tech and the loss of privacy didn’t happen until after 2016? (Well, if you pay attention to the timeline in the book, that’s when it happened for Wiener too).  I was there for the bubble in the early aughts.  How could anyone not know what to expect?  Which isn’t to say that this memoir isn’t a gripping and illustrative mise-en-scène.  It’s just that in the era of Coded Bias and Virginia Eubanks and Safiya Noble and Meredith Broussard and Ruha Benjamin and Shoshana Zuboff… didn’t we already know that Big Tech was Bad?  When Wiener has her big reveal in learning from her partner Noah that “we worked in a surveillance company,” it’s more like: well, duh.  (Does it count as whistleblowing if it isn’t a secret?)

    But maybe that wasn’t actually the big reveal of the book. Maybe the point was that Wiener did already know; she just didn’t quite realize how seductive power is, how pervasive and all-encompassing a culture can be, and how little easy distinctions between good and bad do for us in the totalizing world of tech. She wants to break that all down for us. The memoir is kind of Tech Tales for Lit Critics, which is distinct from Tech for Dummies™ because maybe the critics are the smart ones in the end. The story is for “us”: Wiener’s tribe of smart and idealistic and disaffected humanists. (Truly us, right dear readers?) She makes it clear that even as she works alongside and with an army of engineers, there is always an us and them. (Maybe partly because really, she works for the engineers, and no matter what the company says everyone knows what the hierarchy is.) The “us” are the skeptics and the “them” are the cult believers, except that, as her weird affectation of never naming any tech firms (“an online superstore; a ride-hailing app; a home-sharing platform; the social network everyone loves to hate”) suggests, we are all in the cult in some way, even if we (“we”) – in Wiener’s Brooklyn tribe forever no matter where we live – half-heartedly protest. (For context: I’m not on Facebook and I don’t own a cell phone but PLEASE follow me on Twitter @sharronapearl.)

    Wiener uses this “NDA language” throughout the memoir. At first it’s endearing – imagine a world in which we aren’t constantly name-checking Amazon and AirBnB. Then it’s addicting – when I was grocery shopping I began to think of my local Sprouts as “a West-Coast transplant fresh produce store.” Finally, it’s annoying – just say Uber, for heaven’s sake! But maybe there’s a method to it: these labels make the ubiquity of these platforms all the more clear, and force us to confront just how very integrated into our lives they all are. We are no different from Wiener; we all benefit from surveillance.

    Sometimes the memoir feels a bit like stunt journalism, the tech take on The Year of Living Biblically or Running the Books. There’s a sense from the outset that Wiener is thinking “I’ll take the job, and if I hate it I can always write about it.” And indeed she did, and indeed she does, now working as the tech and start-up correspondent for The New Yorker. (Read her articles: they’re terrific.) But that’s not at all a bad thing: she tells her story well, with self-awareness and liveliness and a lot of patience in her sometimes ironic and snarky tone. It’s exactly what we imagine it to be when we see how the sausage is made: a little gross, a lot upsetting, and still really quite interesting.

    If Wiener feels a bit old before her time (she’s in her mid-twenties during her time in tech, and constantly lamenting how much younger all her bosses are) it’s both a function of Silicon Valley culture and its veneration of young male cowboys, and her own affectations. Is any Brooklyn millennial ever really young? Only when it’s too late. As a non-engineer and a woman, Wiener is quite clear that for Silicon Valley, her time has passed. Here is when she is at her most relatable in some ways: we have all been outsiders, and certainly many of us would be in that setting. At the same time, at 44 with three kids, I feel a bit like telling this sweet summer child to take her time. And that much more will happen to her than already has. Is that condescending? The tone brings it out in me. And maybe I’m also a little jealous: I could do with having made a lot of money in my 20s on the road to disillusionment with power and sexism and privilege and surveillance. It’s better – maybe – than going down that road without making a lot of money and getting to live in San Francisco. If, in the end, I’m not quite sure what the point of her big questions is, it’s still a hell of a good story. I’m waiting for the movie version on “the streaming app that produces original content and doesn’t release its data.”

    _____

    Sharrona Pearl (@SharronaPearl) is a historian and theorist of the body and face.  She has written many articles and two monographs: About Faces: Physiognomy in Nineteenth-Century Britain (Harvard University Press, 2010) and Face/On: Face Transplants and the Ethics of the Other (University of Chicago Press, 2017). She is Associate Professor of Medical Ethics at Drexel University.

    Back to the essay

  • Zachary Loeb — Burn It All (Review of Mullaney, Peters, Hicks and Philip, eds., Your Computer Is on Fire)

    Zachary Loeb — Burn It All (Review of Mullaney, Peters, Hicks and Philip, eds., Your Computer Is on Fire)

    a review of Thomas S. Mullaney, Benjamin Peters, Mar Hicks and Kavita Philip, eds., Your Computer Is on Fire (MIT Press, 2021)

    by Zachary Loeb

    ~

    It often feels as though contemporary discussions about computers have perfected the art of talking around, but not specifically about, computers. Almost every week there is a new story about Facebook’s malfeasance, but usually such stories say little about the actual technologies without which such conduct could not have happened. Stories proliferate about the unquenchable hunger for energy that cryptocurrency mining represents, but the computers eating up that power are usually deemed less interesting than the currency being mined. Debates continue about just how much AI can really accomplish and just how soon it will be able to accomplish even more, but the public conversation winds up conjuring images of gleaming terminators marching across a skull-strewn wasteland instead of rows of servers humming in an undisclosed location. From Zoom to dancing robots, from Amazon to the latest Apple Event, from misinformation campaigns to activist hashtags—we find ourselves constantly talking about computers, and yet seldom talking about computers.

    All of the aforementioned specifics are important to talk about. If anything, we need to be talking more about Facebook’s malfeasance, the energy consumption of cryptocurrencies, the hype versus the realities of AI, Zoom, dancing robots, Amazon, misinformation campaigns, and so forth. But we also need to go deeper. Case in point, though it was a very unpopular position to take for many years, it is now a fairly safe position to say that “Facebook is a problem;” however, it still remains a much less acceptable position to suggest that “computers are a problem.” At a moment in which it has become glaringly obvious that tech companies have politics, there still remains a common sentiment that computers are neutral. And thus such a view can comfortably disparage Bill Gates and Jeff Bezos and Sundar Pichai and Mark Zuckerberg for the ways in which they have warped the potential of computing, while still holding out hope that computing can be a wonderful emancipatory tool if it can just be put in better hands.

    But what if computers are themselves, at least part of, the problem? What if some of our present technological problems have their roots deep in the history of computing, and not just in the dorm room where Mark Zuckerberg first put together FaceSmash?

    These are the sorts of troubling and provocative questions with which the essential new book Your Computer Is on Fire engages. It is a volume that recognizes that when we talk about computers, we need to actually talk about computers. A vital intervention into contemporary discussions about technology, this book wastes no energy on carefully worded declarations of fealty to computers and the Internet; there’s a reason why the book is not titled Your Computer Might Be on Fire but Your Computer Is on Fire.

    The editors are quite upfront about the volume’s confrontational stance. Thomas Mullaney opens the book by declaring that “Humankind can no longer afford to be lulled into complacency by narratives of techno-utopianism or technoneutrality” (4). This is a point that Mullaney drives home as he notes that “the time for equivocation is over” before emphasizing that despite its at moments woebegone tonality, the volume is not “crafted as a call of despair but as a call to arms” (8). While the book sets out to offer a robust critique of computers, Mar Hicks highlights that the editors and contributors of the book shall do this in a historically grounded way, which includes a vital awareness that “there are almost always red flags and warning signs before a disaster, if one cares to look” (14). Unfortunately, many of those who attempted to sound the alarm about the potential hazards of computing were either ignored or derided as technophobes. Where Mullaney had described the book as “a call to arms,” Hicks describes what sorts of actions this call may entail: “we have to support workers, vote for regulation, and protest (or support those protesting) widespread harms like racist violence” (23). And though the focus is on collective action, Hicks does not diminish the significance of individual ethical acts, noting powerfully (in words that may be particularly pointed at those who work for the big tech companies): “Don’t spend your life as a conscientious cog in a terribly broken system” (24).

    Your Computer Is on Fire begins like a political manifesto; as the volume proceeds the contributors maintain the sense of righteous fury. In addition to introductions and conclusions, the book is divided into three sections: “Nothing is Virtual” wherein contributors cut through the airy talking points to bring ideas about computing back to the ground; “This is an Emergency” sounds the alarm on many of the currently unfolding crises in and around computing; and “Where Will the Fire Spread” turns a prescient gaze towards trajectories to be mindful of in the swiftly approaching future. Hicks notes, “to shape the future, look to the past” (24), and this is a prompt that the contributors take up with gusto as they carefully demonstrate how the outlines of our high-tech society were drawn long before Google became a verb.

    Drawing attention to the physicality of the Cloud, Nathan Ensmenger begins the “Nothing is Virtual” section by working to resituate “the history of computing within the history of industrialization” (35). Arguing that “The Cloud is a Factory,” Ensmenger digs beneath the seeming immateriality of the Cloud metaphor to extricate the human labor, human agendas, and environmental costs that get elided when “the Cloud” gets bandied about. The role of the human worker hiding behind the high-tech curtain is further investigated by Sarah Roberts, who explores how many of the high-tech solutions that purport to use AI to fix everything are relying on the labor of human beings sitting in front of computers. As Roberts evocatively describes it, the “solutionist disposition toward AI everywhere is aspirational at its core” (66), and this desire for easy technological solutions covers up challenging social realities. While the Internet is often hailed as an American invention, Benjamin Peters discusses the US ARPANET alongside the ultimately unsuccessful network attempts of the Soviet OGAS and Chile’s Cybersyn, in order to show how “every network history begins with a history of the wider world” (81), and to demonstrate that networks have not developed by “circumventing power hierarchies” but by embedding themselves into those hierarchies (88). Breaking through the emancipatory hype surrounding the Internet, Kavita Philip explores the ways in which the Internet materially and ideologically reifies colonial logics of dominance and control, demonstrating how “the infrastructural internet, and our cultural stories about it, are mutually constitutive” (110). Mitali Thakor brings the volume’s first part to a close with a consideration of how the digital age is “dominated by the feeling of paranoia” (120), by discussing the development and deployment of sophisticated surveillance technologies (in this case, for the detection of child pornography).

    “Electronic computing technology has long been an abstraction of political power into machine form” (137): these lines from Mar Hicks eloquently capture the leitmotif that plays throughout the chapters that make up the second part of the volume. Hicks’ comment comes from an exploration of the sexism that has long been “a feature, not a bug” (135) of the computing sector, with particular consideration of the ways in which sexist hiring and firing practices undermined the development of England’s computing sector. Further exploring how the sexism of today’s tech sector has roots in the development of the tech sector, Corinna Schlombs looks to the history of IBM to consider how that company suppressed efforts by workers to organize by framing the company as a family—albeit one wherein father still knew best. The biases built into voice recognition technologies (such as Siri) are delved into by Halcyon Lawrence, who draws attention to the way that these technologies are biased against those who speak with accents, a reflection of the lack of diversity amongst those who design these technologies. In discussing robots, Safiya Umoja Noble explains how “Robots are the dreams of their designers, catering to the imaginaries we hold about who should do what in our societies” (202), and thus these robots reinscribe particular viewpoints and biases even as their creators claim they are creating robots for good. Shifting away from the flashiest gadgets of high-tech society, Andrea Stanton considers the cultural logics and biases embedded in word processing software that treat the demands of languages that are not written left to right as somehow aberrant. Considering how much of computer usage involves playing games, Noah Wardrip-Fruin argues that the limited set of video game logics keeps games from being about very much—a shooter is a shooter regardless of whether you are gunning down demons in hell or fanatics in a flooded ruin dense with metaphors.

    Oftentimes hiring more diverse candidates is hailed as the solution to the tech sector’s sexism and racism, but as Janet Abbate notes in the first chapter of the “Where Will the Fire Spread?” section, this approach generally attempts to force different groups to fit into Silicon Valley’s warped view of what attributes make for a good programmer. Abbate contends that equal representation will not be enough “until computer work is equally meaningful for groups who do not necessarily share the values and priorities that currently dominate Silicon Valley” (266). While computers do things to society, they also perform specific technical functions, and Ben Allen comments on source code to show the power that programmers have to insert nearly undetectable hacks into the systems they create. Returning to the question of code as empowerment, Sreela Sarkar discusses a skills training class held in Seelampur (near New Delhi), to show that “instead of equalizing disparities, IT-enabled globalization has created and further heightened divisions of class, caste, gender, religion, etc.” (308). Turning towards infrastructure, Paul Edwards considers how the speed with which platforms have developed to become infrastructure has been much swifter than the speed with which older infrastructural systems were developed, which he explores by highlighting three examples in various African contexts (FidoNet, M-Pesa, and Free Basics). And Thomas Mullaney closes out the third section with a consideration of the way that the QWERTY keyboard gave rise to pushback and creative solutions from those who sought to type in non-Latin scripts.

    Just as two of the editors began the book with a call to arms, so too the other two editors close the book with a similar rallying cry. In assessing the chapters that had come before, Kavita Philip emphasizes that the volume has chosen “complex, contradictory, contingent explanations over just-so stories” (364). The contributors, and editors, have worked with great care to make it clear that the current state of computers was not inevitable—that things currently are the way they are does not mean they had to be that way, or that they cannot be changed. Eschewing simplistic solutions, Philip notes that language, history, and politics truly matter to our conversations about computing, and that as we seek the way ahead we must be cognizant of all of them. In the book’s final piece, Benjamin Peters sets the computer fire against the backdrop of anthropogenic climate change and the COVID-19 pandemic, noting the odd juxtaposition between the progress narratives that surround technology and the ways in which “the world of human suffering has never so clearly appeared on the brink of ruin” (378). Pushing back against a simple desire to turn things off, Peters notes that “we cannot return the unasked for gifts of new media and computing” (380). Though the book has clearly been about computers, truly wrestling with these matters forces us to reflect on what it is that we really talk about when we talk about computers, and it turns out that “the question of life becomes how do not I but we live now?” (380)

    It is a challenging question, and it provides a fitting end to a book that challenges many of the dominant public narratives surrounding computers. And though the book has emphasized repeatedly how important it is to really talk about computers, this final question powers down the computer to force us to look at our own reflection in the mirrored surface of the computer screen.

    Yes, the book is about computers, but more than that it is about what it has meant to live with these devices—and what it might mean to live differently with them in the future.

    *

    With the creation of Your Computer Is on Fire the editors (Hicks, Mullaney, Peters, and Philip) have achieved an impressive feat. The volume is timely, provocative, wonderfully researched, filled with devastating insights, and composed in such a way as to make the contents accessible to a broad audience. It might seem a bit hyperbolic to suggest that anyone who has used a computer in the last week should read this book, but anyone who has used a computer in the last week should read this book. Scholars will benefit from the richly researched analysis, students will enjoy the forthright tone of the chapters, and anyone who uses computers will come away from the book with a clearer sense of the way in which these discussions matter for them and the world in which they live.

    For what this book accomplishes so spectacularly is to make it clear that when we think about computers and society it isn’t sufficient to just think about Facebook or facial recognition software or computer skills courses—we need to actually think about computers. We need to think about the history of computers, we need to think about the material aspects of computers, we need to think about the (oft-unseen) human labor that surrounds computers, we need to think about the language we use to discuss computers, and we need to think about the political values embedded in these machines and the political moments out of which these machines emerged. And yet, even as we shift our gaze to look at computers more critically, the contributors to Your Computer Is on Fire continually remind the reader that when we are thinking about computers we need to be thinking about deeper questions than just those about machines; we need to be considering what kind of technological world we want to live in. Moreover, we need to be thinking about who is included and who is excluded when the word “we” is tossed about casually.

    Your Computer Is on Fire is simultaneously a book that will make you think, and a good book to think with. In other words, it is precisely the type of volume that is so desperately needed right now.

    The book derives much of its power from the willingness on the part of the contributors to write in a declarative style. In this book criticisms are not carefully couched behind three layers of praise for Silicon Valley and odes of affection for smartphones; rather, the contributors stand firm in declaring that there are real problems (with historical roots) and that we are not going to be able to address them by pledging fealty to the companies that have so consistently shown a disregard for the broader world. This tone results in too many wonderful turns of phrase and incendiary remarks to be able to list all of them here, but the broad discussion around computers would be greatly enhanced with more comments like Janet Abbate’s “We have Black Girls Code, but we don’t have ‘White Boys Collaborate’ or ‘White Boys Learn Respect.’ Why not, if we want to nurture the full set of skills needed in computing?” (263) While critics of technology often find themselves having to argue from a defensive position, Your Computer Is on Fire is a book that almost gleefully goes on the offense.

    It almost seems like a disservice to the breadth of contributions to the volume to try to sum up its core message in a few lines, or to attempt to neatly capture the key takeaways in a few sentences. Nevertheless, insofar as the book has a clear undergirding position, beyond the titular idea, it is the one eloquently captured by Mar Hicks thusly:

    High technology is often a screen for propping up idealistic progress narratives while simultaneously torpedoing meaningful social reform with subtle and systemic sexism, classism, and racism…The computer revolution was not a revolution in any true sense: it left social and political hierarchies untouched, at times even strengthening them and heightening inequalities. (152)

    And this is the matter with which each contributor wrestles, as they break apart the “idealistic progress narratives” to reveal the ways that computers have time and again strengthened the already existing power structures…even if many people get to enjoy new shiny gadgets along the way.

    Your Computer Is on Fire is a jarring assessment of the current state of our computer dependent societies, and how they came to be the way they are; however, in considering this new book it is worth bearing in mind that it is not the first volume to try to capture the state of computers in a moment in time. That we find ourselves in the present position is unfortunately a testament to decades of unheeded warnings.

    One of the objectives that is taken up throughout Your Computer Is on Fire is to counter the techno-utopian ideology that never so much dies as shifts into the hands of some new would-be techno-savior wearing a crown of 1s and 0s. However, even as the mantle of techno-savior shifts from Mark Zuckerberg to Elon Musk, it seems that we may be in a moment when fewer people are willing to uncritically accept the idea that technological progress is synonymous with social progress. Though, if we are being frank, adoring faith in technology remains the dominant sentiment (at least in the US). Furthermore, this isn’t the first moment when a growing distrust and dissatisfaction with technological forces has risen, nor is this the first time that scholars have sought to speak out. Therefore, even as Your Computer Is on Fire provides fantastic accounts of the history of computing, it is worthwhile to consider where this new vital volume fits within the history of critiques of computing. Or, to frame this slightly differently, in what ways is the 21st century critique of computing different from the 20th century critique of computing?

    In 1979 the MIT Press published the edited volume The Computer Age: A Twenty Year View. Edited by Michael Dertouzos and Joel Moses, that book brought together a variety of influential figures from the early history of computing including J.C.R. Licklider, Herbert Simon, Marvin Minsky, and many others. The book was an overwhelmingly optimistic affair, and though the contributors anticipated that the mass uptake of computers would lead to some disruptions, they imagined that all of these changes would ultimately be for the best. Granted, the book was not without a critical voice. The computer scientist turned critic Joseph Weizenbaum was afforded a chapter in a quarantined “Critiques” section from which to cast doubts on the utopian hopes that had filled the rest of the volume. And though Weizenbaum’s criticisms were presented, the book’s introduction politely scoffed at his woebegone outlook, and Weizenbaum’s chapter was followed by not one but two barbed responses, which ensured that his critical voice was not given the last word. Any attempt to assess The Computer Age at this point will likely say as much about the person doing the assessing as about the volume itself, and yet it would take a real commitment to only seeing the positive sides of computers to deny that the volume’s disparaged critic was one of its most prescient contributors.

    If The Computer Age can be seen as a reflection of the state of discourse surrounding computers in 1979, then Your Computer Is on Fire is a blazing demonstration of how greatly those discussions have changed by 2021. This is not to suggest that the techno-utopian mindset that so infused The Computer Age no longer exists. Alas, far from it.

    As the contributors to Your Computer Is on Fire make clear repeatedly, much of the present discussion around computing is dominated by hype and hopes. And a consideration of those conversations in the second half of the twentieth century reveals that hype and hope were dominant forces then as well. Granted, for much of that period (arguably until the mid-1980s and not really taking off until the 1990s), computers remained technologies with which most people had relatively little direct interaction. The mammoth machines of the 1960s and 1970s were not all top-secret (though some certainly were), but when social critics warned about computers in the 50s, 60s, and 70s they were not describing machines that had become ubiquitous—even if they warned that those machines would eventually become so. Thus, when Lewis Mumford warned in 1956 that:

    In creating the thinking machine, man has made the last step in submission to mechanization; and his final abdication before this product of his own ingenuity has given him a new object of worship: a cybernetic god. (Mumford, 173)

    it is somewhat understandable that his warning would be met with rolled eyes and impatient scoffs. For “the thinking machine” at that point remained isolated enough from most people’s daily lives that the idea that this was “a new object of worship” seemed almost absurd. Though he continued issuing dire predictions about computers, by 1970, when Mumford wrote of the development of “computer dominated society,” this warning could still be dismissed as absurd hyperbole. And when Mumford’s friend, the aforementioned Joseph Weizenbaum, laid out a blistering critique of computers and the “artificial intelligentsia” in 1976 those warnings were still somewhat muddled as the computer remained largely out of sight and out of mind for large parts of society. Of course, these critics recognized that this “cybernetic god” had not yet become the new dominant faith, but they issued such warnings out of a sense that this was the direction in which things were developing.

    Already by the 1980s it was apparent to many scholars and critics that, despite the hype and revolutionary lingo, computers were primarily retrenching existing power relations while elevating the authority of a variety of new companies. And this gave rise to heated debates about how (and if) these technologies could be reclaimed and repurposed—Donna Haraway’s classic Cyborg Manifesto emerged out of those debates. By the time of 1990’s “Neo-Luddite Manifesto,” wherein Chellis Glendinning pointed to “computer technologies” as one of the types of technologies the Neo-Luddites were calling to be dismantled, the computer was becoming less and less an abstraction and more and more a feature of many people’s daily work lives. Though there is not space here to fully develop this argument, it may well be that the 1990s represent the decade in which many people found themselves suddenly in a “computer dominated society.”  Indeed, though Y2K is unfortunately often remembered as something of a hoax today, delving back into what was written about that crisis as it was unfolding makes it clear that in many sectors Y2K was the moment when people were forced to fully reckon with how quickly and how deeply they had become highly reliant on complex computerized systems. And, of course, much of what we know about the history of computing in those decades of the twentieth century we owe to the phenomenal research that has been done by many of the scholars who have contributed chapters to Your Computer Is on Fire.

    While Your Computer Is on Fire provides essential analyses of events from the twentieth century, as a critique it is very much a reflection of the twenty-first century. It is a volume that represents a moment in which critics are no longer warning “hey, watch out, or these computers might be on fire in the future” but in which critics can now confidently state “your computer is on fire.” In 1956 it could seem hyperbolic to suggest that computers would become “a new object of worship,” by 2021 such faith is on full display. In 1970 it was possible to warn of the threat of “computer dominated society,” by 2021 that “computer dominated society” has truly arrived. In the 1980s it could be argued that computers were reinforcing dominant power relations, in 2021 this is no longer a particularly controversial position. And perhaps most importantly, in 1990 it could still be suggested that computer technologies should be dismantled, but by 2021 the idea of dismantling these technologies that have become so interwoven in our daily lives seems dangerous, absurd, and unwanted. Your Computer Is on Fire is in many ways an acknowledgement that we are now living in the type of society about which many of the twentieth century’s technological critics warned. In the book’s conclusion, Benjamin Peters pushes back against “Luddite self-righteousness” to note that “I can opt out of social networks; many others cannot” (377), and the emergence of this moment, wherein the ability to “opt out” has itself become a privilege, is precisely the sort of danger about which so many of the last century’s critics were so concerned.

    To look back at critiques of computers made throughout the twentieth century is in many ways a fairly depressing activity. For it reveals that many of those who were scorned as “doom mongers” had a fairly good sense of what computers would mean for the world. Certainly, some will continue to mock such figures for their humanism or borderline romanticism, but they were writing and living in a moment when the idea of living without a smartphone had not yet become unthinkable. As the contributors to this essential volume make clear, Your Computer Is on Fire, and yet too many of us still seem to believe that we are wearing asbestos gloves, and that if we suppress the flames of Facebook we will be able to safely warm our toes on our burning laptop.

    What Your Computer Is on Fire achieves so masterfully is to remind its readers that the wired up society in which they live was not inevitable, and what comes next is not inevitable either. And to remind them that if we are going to talk about what computers have wrought, we need to actually talk about computers. And yet the book is also a discomforting testament to a state of affairs wherein most of us simply do not have the option of swearing off computers. They fill our homes, they fill our societies, they fill our language, and they fill our imaginations. Thus, in dealing with this fire a first important step is to admit that there is a fire, and to stop absentmindedly pouring gasoline on everything. As Mar Hicks notes:

    Techno-optimist narratives surrounding high-technology and the public good—ones that assume technology is somehow inherently progressive—rely on historical fictions and blind spots that tend to overlook how large technological systems perpetuate structures of dominance and power already in place. (137)

    And as Kavita Philip describes:

    it is some combination of our addiction to the excitement of invention, with our enjoyment of individualized sophistications of a technological society, that has brought us to the brink of ruin even while illuminating our lives and enhancing the possibilities of collective agency. (365)

    Historically rich, provocatively written, engaging and engaged, Your Computer Is on Fire is a powerful reminder that when it is properly controlled fire can be useful, but when fire is allowed to rage out of control it turns everything it touches to ash. This book is not only a must read, but a must wrestle with, a must think with, and a must remember. After all, the “your” in the book’s title refers to you.

    Yes, you.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2o Review Digital Studies section.


    Works Cited

    • Mumford, Lewis. The Transformations of Man. New York: Harper and Brothers, 1956.


  • Richard Hill — “Free” Isn’t Free (Review of Michael Kende, The Flip Side of Free)


    a review of Michael Kende, The Flip Side of Free: Understanding the Economics of the Internet (MIT Press, 2021)

    by Richard Hill

    ~

    This book is a must-read for anyone who wishes to engage in meaningful discussions of Internet governance, which will increasingly involve economic issues (17-20). It explains clearly why we don’t have to pay in money for services that are obviously expensive to provide. Indeed, as we all know, we get lots of so-called free services on the Internet: search facilities, social networks, e-mail, etc. But, as the old saying goes, “there ain’t no such thing as a free lunch.” It costs money to provide all those Internet services (10), and somebody has to pay for them somehow. In fact, users pay for them, by allowing (often unwittingly: 4, 75, 92, 104, 105) the providers to collect personal data which is then aggregated and used to sell other services (in particular advertising, 69) at a large profit. The book correctly notes that there are both advantages (79) and disadvantages (Chapters 5-8) to the current regime of surveillance capitalism. Had I written a book on the topic, I would have been more critical and would have preferred a subtitle such as “The Triumph of Market Failures in Neo-Liberal Regimes.”

    Michael Kende is a Senior Fellow and Visiting Lecturer at the Graduate Institute of International and Development Studies, Geneva, a Senior Adviser at Analysys Mason, a Digital Development Specialist at the World Bank Group, and former Chief Economist of the Internet Society. He has worked as an academic economist at INSEAD and as a US regulator at the Federal Communications Commission. In this clearly written and well researched book, he explains, in laymen’s terms, the seeming paradox of “free” services that nevertheless yield big profits.

    The secret is to exploit the monetary value of something that had some, but not much, value until a bit over twenty years ago: data (63). The value of data is now so large that the companies that exploit it are the most valuable companies in the world, worth more than old giants such as producers of automobiles or petroleum. In fact data is so central to today’s economy that, as the author puts it (143): “It is possible that a new metric is needed to measure market power, especially when services are offered for free. Where normally a profitable increase in price was a strong metric, the new metric may be the ability to profitably gather data – and monetize it through advertising – without losing market share.” To my knowledge, this is an original idea, and it should be taken seriously by anyone interested in the future evolution of, not just the Internet, but society in general (for the importance of data, see for example the annex of this paper, and also here).

    The core value of this book lies in Chapters 5 through 10, which provide economic explanations – in easy-to-understand lay language – of the current state of affairs. They cover the essential elements: the importance of data, and why a few companies have dominant positions. Readers looking for somewhat more technical economic explanations may consider reading this handbook and readers looking for the history of the geo-economic policies that resulted in the current state of affairs can read the books reviewed here and here.

    Chapter 5 of the book explains why most of us trade off the privacy of our data in exchange for “free” services: the benefits may outweigh the risks (88), we may underestimate the risks (89), and we may not actually know the risks (91, 92, 105). As the author correctly notes (99-105), there likely are market failures that should be corrected by government action, such as data privacy laws. The author mentions the European Union GDPR (100); I think that it is also worth mentioning the less known, but more widely adopted, Council of Europe Convention (108). And I would have preferred an even more robust criticism of jurisdictions that allow data brokers to operate secretively (104).

    Chapter 6 explains how market failures have resulted in inadequate security in today’s Internet. In particular users cannot know if a product has an adequate level of security (information asymmetry) and one user’s lack of security may not affect him or her, but may affect others (negative externalities). As the author says, there is a need to develop security standards (e.g. devices should not ship with default administrator passwords) and to impose liability for companies that market insecure products (120, 186).

    Chapter 7 explains well the economic concepts of economies of scale and network effect (see also 23), how they apply to the Internet, and why (122-129) they facilitated the emergence of the current dominant platforms (such as Amazon, Facebook, Google, and their Chinese equivalents). This results in a winner-takes-all situation: the best company becomes the only significant player (133-137). At present, competition policy (140-142) has not dealt with this issue satisfactorily and innovative approaches that recognize the central role and value of data may be needed. I would have appreciated an economic discussion of how much (or at least some) of the gig economy is not based on actual innovation (122), but on violating labor laws or housing and consumer protection laws. I would also have expected a more extensive discussion of two-sided markets (135): while the topic is technical, I believe that the author has the skills to explain it clearly for laypeople. It is a pity that the author didn’t explore, at least briefly, the economic issues relating to the lack of standardization, and interoperability, of key widely used services, such as teleconferencing: nobody would accept having to learn to use a plethora of systems in order to make telephone calls; why do we accept that for video calls?

    The chapter correctly notes that data is the key (143-145) and notes that data sharing (145-147, 187, 197) may help to reintroduce competition. While it is true that data is in principle non-rivalrous (194), in practice at present it is hoarded and treated as private property by those who collect it. It would have been nice if the author had explored methods for ensuring the equitable distribution of the value added of data, but that would no doubt have required an extensive discussion of equity. It is a pity that the author didn’t discuss the economic implications, and possible justification, of providing certain base services (e.g. e-mail, search) as public services: after all, if physical mail is a public service, why shouldn’t e-mail also be a public service?

    Chapter 8 documents the digital divide: access to Internet is much less affordable, and widespread, in developing countries than it is in developed countries. As the author points out, this is not a desirable situation, and he outlines solutions (including infrastructure sharing and universal service funds (157)), as have others (for example here, here, here, and here). It would have been nice if the author had explored how peering (48) may disadvantage developing countries (in particular because much of their content is hosted abroad (60, 162)); and evaluated the economics of relying on large (and hence efficient and low-cost) data centers in hubs as opposed to local hosting (which has lower transmission costs but higher operating costs); but perhaps those topics would have strayed from the main theme of the book. The author correctly identifies the lack of payment systems as a significant hindrance to greater adoption of the e-commerce in developing countries (164); and, of course, the relative disadvantage with respect to data of companies in developing countries (170, 195).

    Chapter 9 explains why security and trust on the Internet must be improved, and correctly notes that increasing privacy will not necessarily increase trust (183). The Chapter reiterates some of the points outlined above, and rightly concludes: “There is good reason to raise the issue [of lack of trust] when seeing the market failures taking place today with cybersecurity, sometimes based on the most easily avoidable mistakes, and the lack of efforts to fix them. If we cannot protect ourselves today, what about tomorrow?” (189)

    Chapter 10 correctly argues that change is needed, and outlines the key points: “data is the basis for market power; lack of data is the hidden danger of the digital divide; and data will train the algorithms of the future AI” (192). Even when things go virtual, there is a role for governments: “who but governments could address market power and privacy violations and respond to state-sponsored attacks against their citizens or institutions?” (193) Data governance will be a key topic for the future: “how to leverage the unique features of data and avoid the costs: how to generate positive good while protecting privacy and security for personal data; how to maintain appropriate property rights to reward innovation and investment while checking market power; how to enable machine learning while allowing new companies strong on innovation and short on data to flourish; how to ensure that the digital divide is not replaced by a data divide.” (195)

    Chapters 1 through 4 purport to explain how certain technical features of the Internet condition its economics. The chapters will undoubtedly be useful for people who don’t have much knowledge of telecommunication and computer networks, but they are unfortunately grounded in an Internet-centric view that does not, in my view, accord sufficient weight to the long history of telecommunications, and, consequently, considers as inevitable things that were actually design choices. It is important to recall that the Internet was originally designed as a national (US) non-public military and research network (27-28). As such, it originally provided only for 7-bit ASCII character sets (thus excluding characters with accents), it did not provide for usage-based billing, and it assumed that end-to-end encryption could be used to provide adequate security (108). It was not designed to allow insecure end-user devices (such as personal computers) to interconnect on a global scale.

    The Internet was originally funded by governments, so when it was privatized, some method of funding other than conventional usage charges had to be invented (such as receiver pays (53)– and advertising). It is correct (39, 44) that differences in pricing are due to differences in technology, but only because the Internet technologies were not designed to facilitate consumption/volume-based pricing. I would have expected an economics-based discussion of how this makes it difficult to optimize networks, which always have choke points (54-55). For example, I am connected by DSL, and I pay for a set bandwidth, which is restricted by my ISP. While the fiber can carry higher bandwidth (I just have to pay more for it), at any given time (as the author correctly notes) my actual bandwidth depends on what my neighbours that share the same multiplexor are doing. If one of my neighbours is streaming full-HD movies all day long, my performance will degrade, yet they may or may not be paying the same price as me (55). This is not economically efficient. Thus, contrary to what the author posits (46), best-effort packet switching (the Internet model) is not always more efficient than circuit-switching: if guaranteed quality of service is needed, circuit-switching can be more efficient than paying for more bandwidth, even if, in case of overload, service is denied rather than being “merely” degraded (those of us who have had to abandon an Internet teleconference because of poor quality will appreciate that degradation can equal service denial; and musicians who have tried to perform virtually during the pandemic would have appreciated a guaranteed quality of service that would have ensured synchronization between performers and between video and sound).

    As the author correctly notes (59), some form of charging is necessary when resources are scarce; and (42, 46, 61) it is important to allocate scarcity efficiently. It’s a pity that the author didn’t explore the economics of usage-based billing, and dedicated circuits, as methods for the efficient allocation of scarcity (again, in the end there is always a scarce resource somewhere in the system). And it’s a pity that he didn’t dig into the details of the economic factors that result in video traffic being about 70% of all traffic (159): is that due to commercial video-on-demand services (such as Netflix), or to user file sharing (such as YouTube) or to free pornography (such as PornHub)? In addition, I would have appreciated a discussion of the implications of the receiver-pays model, considering that receivers pay not only for the content they requested (e.g. Wikipedia pages), but also for content that they don’t want (e.g. spam) or didn’t explicitly request (e.g. advertising).

    The mention in passing of the effects of Internet on democracy (6) fails to recognize the very deleterious indirect effects resulting from the decline of traditional media. Contrary to what the book implies (7, 132), breaking companies up would not necessarily be deleterious, and making platforms responsible for content would not necessarily stifle innovation, even if such measures could have downsides.

    It is true (8) that anything can be connected to the Internet (albeit with a bit more configuration than the book implies), but it is also true that this facilitates phishing, malware attacks, spoofing, abuse of social networks, and so forth.

    Contrary to what the author implies (22), ICT standards have always been free to use (with some exceptions relating to intellectual property rights; further, the exceptions allowed by IETF are the same as those allowed by ITU and most other standards-making bodies (34)). Core Internet standards have always been free to access online, whereas that was not the case in the past for telecommunications standards; however, that has changed, and ITU telecommunications standards are also freely available online. While it is correct (24) that access to traditional telecommunication networks was tightly controlled, and that early data networks were proprietary, traditional telecommunications networks and later data networks were based on publicly-available standards. While it is correct (31) that anybody can contribute to Internet standards-making, in practice the discussions are dominated by people who are employed by companies that have a vested interest in the standards (see for example pp. 149-152 of the book reviewed here, and Chapters 5 and 6 of the book reviewed here); further, W3C (32) and IEEE (33) are membership organizations, as are the more traditional standardization bodies. While users of standards (in particular manufacturers) have a role in making Internet standards, that is the case for most standard-making; end-users do not have a role in making Internet standards (32). Regarding standards (33), the author fails to mention the key role of ITU-R with respect to the availability of WiFi spectrum and of ITU-T with respect to xDSL (51) and compression.

    The OSI Model (26) was a joint effort of CCITT/ITU, IEC, and ISO. Contrary to what the author implies (29), e-mail existed in some form long before the Internet, albeit as proprietary systems, and there were other efforts to standardize e-mail; it is a pity that the author didn’t provide an economic analysis of why SMTP prevailed over more secure e-mail protocols, and how its lack of billing features facilitates spam (I have been told that the “simple” in SMTP refers to absence of the security and billing features that encumbered other e-mail protocols).

While much of the Internet is decentralized (30), so is much of the current telephone system. On the other hand, the Internet’s naming and addressing are far more centralized than those of telephony.

    However, these criticisms of specific bits of Chapters 1 through 4 do not in any way detract from the value of the rest of the book which, as already mentioned, should be required reading for anyone who wishes to engage in discussions of Internet-related matters.

    _____

Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.

    Back to the essay

  • Richard Hill — Multistakeholder Internet Governance Still Doesn’t Live Up to Its PR (Review of Palladino and Santaniello, Legitimacy, Power, and Inequalities in the Multistakeholder Internet Governance)

    Richard Hill — Multistakeholder Internet Governance Still Doesn’t Live Up to Its PR (Review of Palladino and Santaniello, Legitimacy, Power, and Inequalities in the Multistakeholder Internet Governance)

    a review of Nicola Palladino and Mauro Santaniello, Legitimacy, Power, and Inequalities in the Multistakeholder Internet Governance: Analyzing IANA Transition (Palgrave MacMillan, 2020)

    by Richard Hill

    ~

While multistakeholder processes have long existed (see the Annex of this submission to an ITU group), they have recently been promoted as a better alternative to traditional governance mechanisms, in particular at the international level; and Internet governance has been put forward as an example of how multistakeholder processes work well, and better than traditional governmental processes. Thus it is very appropriate that a detailed analysis be made of a recent, highly visible, allegedly multistakeholder process: the process by which the US government relinquished its formal control over the administration of Internet names and addresses. That process was labelled the “IANA transition.”

The authors are researchers at, respectively, the School of Law and Governance, Dublin City University; and the Internet & Communication Policy Center, Department of Political and Social Studies, University of Salerno, Italy. They have taken part in several national and international research projects on Internet Governance, Internet Policy and Digital Constitutionalism processes. They have methodically examined various aspects of the IANA (Internet Assigned Numbers Authority) transition, and collected and analysed an impressive body of data regarding who actually participated in, and influenced, the transition process. Their research confirms what others have stated, namely that the process was dominated by insiders with vested interests, that the outcome did not resolve long-standing political issues, and that the process cannot by any means be seen as an example of an ideal multistakeholder process, despite claims to the contrary by the architects of the IANA transition.

As the authors put the matter: “For those who believe that the IANA is a business concerning exclusively or primarily ICANN [Internet Corporation for Assigned Names and Numbers], the IETF [Internet Engineering Task Force], the NRO [Numbering Resource Organization], and their respective communities, the IANA transition process could be considered inclusive and fair enough, and its outcome effectively transferring the stewardship over IANA functions to the global stakeholder’s community of reference. For those who believe that the IANA stakeholders extend far beyond the organizations mentioned above, the assessment can only have a negative result” (146). Because “in the end, rather than transferring the stewardship of IANA functions to a new multistakeholder body that controls the IANA operator (ICANN), the transition process allowed the ICANN multistakeholder community to perform the oversight role that once belonged to the NTIA [the US government]” (146). Indeed “in the end, the novel governance arrangements strengthened the position of the registries and the technical community” (148). And the US government could still exercise ultimate control, because “ICANN, the PTI [Post-Transition IANA], and most of the root server organizations remain on US territory, and therefore under US jurisdiction” (149).

That is, the transition failed to address the key political issue: “the IANA functions are at the heart of the DNS [Domain Name System] and the Internet as we know it. Thus, their governance and performance affect a vast range of actors [other than the technical and business communities involved in the operation of the DNS] that should be considered legitimate stakeholders” (147). Instead, it was one more example of “the rhetorical use of the multistakeholder discourse. In particular, … through a neoliberal discourse, the key organizations already involved in the DNS regime were able to use the ambiguity of the concept of a ‘global multistakeholder community’ as a strategic power resource.” The process thus failed to fully ensure that discussions “take place through an open process with the participation of all stakeholders extending beyond the ICANN community.” While the call for participation in the process was formally open, “its addressees were already identified as specific organizations. It is worth noting that these organizations did not involve external actors in the set-up phase. Rather, they only allowed other interested parties to take part in the discussion according to their rules and with minor participatory rights [speaking, but non-voting, observers]” (148).

    Thus, the authors’ “analysis suggests that the transition did not result in, nor did it lead to, a higher form of multistakeholderism filling the gap between reality and the ideal-type of what multistakeholderism ought to be, according to normative standards of legitimacy. Nor was it able to fix the well-known limitations in inclusiveness, fairness of the decision-making process, and accountability of the entire DNS regime. … Instead, the transition seems to have solidified previous dominant positions and ratified the ownership of an essential public function by a private corporation, led by interwoven economic and technical interests” (149). In particular, “the transition process showed the irrelevance of civil society, little and badly represented in the stakeholder structure before and after the transition” (150). And “multistakeholderism [in this case] seems to have resulted in misleading rhetoric legitimizing power asymmetries embedded within the institutional design of DNS management, rather than in a new governance model capable of ensuring the meaningful participation of all the interested parties.”

    In summary, the IANA transition is one more example of the failure of multistakeholder processes to achieve their desired goal. As the authors correctly note: “Initiatives supposed to be multistakeholder have often been criticized for not complying with their premises, resulting in ‘de-politicization mechanisms that limit political expression and struggle’” (153). Indeed, “While multistakeholderism is used as a rhetoric to solidify and legitimize power positions within some policy-making arena, without any mechanisms giving up power to weaker stakeholders and without making concrete efforts to include different discourses, it will continue to produce ambiguous compromises without decisions, or make decisions affected by a poor degree of pluralism” (153). As others have stated, “‘multistakeholderism reinforces existing power dynamics that have been ‘baked in’ to the model from the beginning. It privileges north-western governments, particularly the US, as well as the US private sector.’ Similarly, … multistakeholderism [can be defined] as a discursive tool employed to create consensus around the hegemony of a power élite” (12). As the authors starkly put the matter, “multistakeholder discourse could result in misleading rhetoric that solidifies power asymmetries and masks domination, manipulation, and hegemonic practices” (26). In particular because “election and engagement procedures often tend to favor an already like-minded set of collective and individual actors even if they belong to different stakeholder categories” (30).

    The above conclusions are supported by detailed, well referenced, descriptions and analyses. Chapters One and Two explain the basic context of the IANA transition, Internet governance and their relation to multistakeholder processes. Chapter One “points out how multistakeholderism is a fuzzy concept that has led to ambiguous practices and disappointing results. Further, it highlights the discursive and legitimizing nature of multistakeholderism, which can serve both as a performing narrative capable of democratizing the Internet governance domain, as well as a misleading rhetoric solidifying the dominant position of the most powerful actors in different Internet policy-making arenas” (1). It traces the history of multistakeholder governance in the Internet context, which started in 2003 (however, a broader historical context would have been useful, see the Annex of this submission to an ITU group). It discusses the conflict between developed and developing countries regarding the management and administration of domain names and addresses that dominated the discussions at the World Summit on the Information Society (WSIS) (Mueller’s Networks and States gives a more detailed account, explaining how development issues – which were supposed to be the focus of the WSIS – got pushed aside, thus resulting in the focus on Internet governance). As the authors correctly state, “the outcomes of the WSIS left the tensions surrounding Internet governance unresolved, giving rise to contestation in subsequent years and to the cyclical recurrence of political conflicts challenging the consensus around the multistakeholder model” (5). The IANA transition was seen as a way of resolving these tensions, but it relied “on the conflation of the multistakeholder approach with the privatization of Internet governance” (8).

As the authors posit (citing the well-known scholar Hoffmann), “multistakeholderism is a narrative based on three main promises: the promise of achieving global representation on an issue putting together all the affected parties; the promise of overcoming the traditional democratic deficit at the transnational level, ‘establishing communities of interest as a digitally enabled equivalent to territorial constituencies’; and the promise of higher and enforced outcomes since incorporating global views on the matter through a consensual approach should ensure more complete solutions and their smooth implementation” (10).

Chapter Three provides a thorough introduction to the management of Internet domain names and addresses and of the issues related to it and to the IANA function, in particular the role of the US government and of US academic and business organizations; the seminal work of the International Ad Hoc Committee (IAHC); the creation and evolution of ICANN; and various criticisms of ICANN, in particular regarding its accountability. (The chapter inexplicably fails to mention the key role of Mockapetris in the creation of the DNS.)

    Chapter Four describes the institutional setup of the IANA transition, and the constraints unilaterally imposed by the US government (see also 104) and the various parties that dominate discussions of the issues involved. As the authors note, the call for the creation of the key group went out “without having before voted on the proposed scheme [of the group], neither within the ICANN community nor outside through a further round of public comments” (67). The structure of that group heavily influenced the discussions and the outcome.

Chapter Five evaluates the IANA transition in terms of the first of three types of legitimacy: input legitimacy, that is, whether all affected parties could meaningfully participate in the process (the other two types of legitimacy are discussed in subsequent chapters, see below). By analysing in detail the profiles and affiliations of the participants with decision-making power, the authors find that “a vast majority (56) of the people who have taken part in the drafting of the IANA transition proposal are bearers of technical and operative interests” (87); “Regarding nationality, Western countries appear to be over-represented within the drafting and decisional organism involved in the IANA transition process. In particular, US citizens constitute the most remarkable group, occupying 20 seats over 90 available” (89); and “IANA transition voting members experienced multiple and trans-sectoral affiliations, blurring the boundaries among stakeholder categories” (151). In summary, “the results of this stakeholder analysis seem to indicate that the adopted categorization and appointment procedures have reproduced within the IANA transition process well-known power relationships and imbalances already existing in the DNS management, overrepresenting Western, technical, and business interests while marginalizing developing countries and civil society participation” (90).

Chapter Six evaluates the transition with respect to process legitimacy: whether all participants could meaningfully affect the outcome. As the authors correctly note, “Stakeholders not belonging to the organizations at the core of the operational communities were called to join the process according to rules and procedures that they had not contributed to creating, and with minor participatory rights” (107). The decision-making process was complex and undermined the inputs from weaker parties – thus funded, dedicated participants were more influential. Further, key participants were concerned about how the US government would view the outcome, and whether it would approve it (116). And discussions appear to have been restricted to a neo-liberal and technical framework (120, 121). As the authors state: “Ultimately, this narrow technical frame prevented the acknowledgment of the public good nature of the IANA functions, and, even more, of their essence as public policy issues” (121). Further, “most members and participants at the CWG-Stewardship had been socialized to the ICANN system, belonging to one of its structures or attending its meetings” and “the long-standing neoliberal plan of the US government and the NTIA to ‘privatize’ the DNS placed the IANA transition within a precise system of definitions, concepts, references, and assumptions that constrained the development of alternative policy discourses and limited the political action of sovereignist and constitutional coalitions” (122).

    Thus, it is not surprising that the authors find that “a single discourse shaped the deliberation. These results contradict the assumptions at the basis of the multistakeholder model of governance, which is supposed to reach a higher and more complete understanding of a particular matter through deliberation among different categories of actors, with different backgrounds, views, and perspectives. Instead, the set of IANA transition voting members in many regards resembled what has been defined as a ‘club governance’ model, which refers to an ‘elite community where the members are motivated by peer recognition and a common goal in line with values, they consider honourable’” (151).

Chapter Seven evaluates the transition with respect to output legitimacy: whether the result achieved its goals of transferring oversight of the IANA function to a global multistakeholder community. As the authors state, “the institutional effectiveness of the IANA transition cannot be evaluated as satisfying from a normative point of view in terms of inclusiveness, balanced representation, and accountability. As a consequence, the ICANN board remains the expression of interwoven business and technical interests and is unlikely to be truly constrained by an independent entity” (135). Further, as shown in detail, “the political problems connected to the IANA functions have been left unresolved, … it did not take a long time before they re-emerged” (153).

Indeed, “IANA was, first of all, a political matter. Indeed, the transition was settled as a consequence of a political fact – the widespread loss of trust in the USA as the caretaker of the Internet after the Snowden disclosures. Further, the IANA transition process aimed to achieve eminently political goals, such as establishing a novel governance setting and strengthening the DNS’s accountability and legitimacy” (152). However, as the authors explain in detail, the IANA transition was turned into a technical discussion, and “The problem here is that governance settings, such as those described as club governance, base their legitimacy from professional expertise and reputation. They are well-suited to performing some form of ‘technocratic’ governance, addressing an issue with a problem-solving approach based on an already given understanding of the nature of the problem and of the goals to be reached. Sharing a set of overlapping and compatible views is the cue that puts together these networks of experts. Nevertheless, they are ill-suited for tackling political problems, which, by definition, deal with pluralism” (152).

    Chapter Seven could have benefitted from a discussion of ICANN’s new Independent Review Process, and the length of time it has taken to put into place the process to name the panellists.

    Chapter Eight, already summarized above, presents overall conclusions.

    In summary, this is a timely and important book that provides objective data and analyses of a particular process that has been put forward as a model for multistakeholder governance, which itself has been put forth as a better alternative to conventional governance. While there is no doubt that ICANN, and the IANA function, are performing their intended functions, the book shows that the IANA transition was not a model multistakeholder process: on the contrary, it exhibited many of the well-known flaws of multistakeholder processes. Thus it should not be used as a model for future governance.

    _____

Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.

    Back to the essay

  • Zachary Loeb — General Ludd in the Long Seventies (Review of Matt Tierney, Dismantlings)

    Zachary Loeb — General Ludd in the Long Seventies (Review of Matt Tierney, Dismantlings)

    a review of Matt Tierney, Dismantlings: Words Against Machines in the American Long Seventies (Cornell University Press, 2019)

    by Zachary Loeb

    ~

    The guy said, “If machinery
    makes you so happy
    go buy yourself
    a Happiness Machine.”
    Then he realized:
    They were trying to do
    exactly that.

    – Kenneth Burke, “Routine for a Stand-Up Comedian” (15)

A sledgehammer is a fairly versatile tool. You can use it to destroy things, you can use it to build things, and in some cases you can use it to destroy things so that you can build things. Granted, it remains a rather heavy and fairly blunt tool; it is not particularly well suited for fine detail work requiring a high degree of precision. Which is, likely, one of the reasons why those who are famed for wielding sledgehammers often wind up being characterized as just as blunt and unsubtle as the heavy instruments they swung.

And, perhaps, no group has been more closely associated with sledgehammers than the Luddites: those early 19th-century skilled craft workers who took up arms to defend their communities and their livelihoods from the “obnoxious machines” being introduced by their employers. Though the tactic of machine breaking as a form of protest has a lengthy history that predates (and post-dates) the Luddites, it is a tactic that has come to be bound up with the name of the followers of the mysterious General Ludd. Despite the efforts of writers and thinkers to rescue the Luddites’ legacy from “the enormous condescension of posterity” (Thompson, 12), the term “Luddite” today generally has less to do with a specific historical group and has instead largely become an epithet to be hurled at anyone who dares question the gospel of technological progress. Yet, as the second decade of the twenty-first century comes to a close, it may well be that “Luddite” has lost some of its insulting sting against the backdrop of metastasizing tech giants, growing mountains of toxic e-waste, and an ecological crisis that owes much to an unquestioned faith in the benefits of technology.

    General Ludd may well get the last laugh.

That the Luddites have lingered so fiercely in the public imagination is a testament to the fact that the Luddites, and the actions for which they are remembered, are good to think with. Insofar as one can talk about Luddism, it represents less a coherent body of thought created by the Luddites themselves, and more the attempt by later scholars, critics, artists, and activists to try to make sense of what is usable from the Luddite legacy. And it is this effort to think through and think with that Matt Tierney explores in his phenomenal book Dismantlings: Words Against Machines in the American Long Seventies. While the focus of Dismantlings, as its title makes clear, is on the “long seventies” (the years from 1965 to 1980), the book represents an important intervention in current discussions and debates around the impacts of technology on society. Just as the various figures Tierney discusses turned their thinking (to varying extents) back to the Luddites, so too, the book argues, is it worth revisiting the thinking and writing on the matter from the long seventies. This is not a book on the historical Luddites; instead, it is a vital contribution to attempts to theorize what Luddism might mean, and how we are to confront the various technological challenges facing us today.

Largely remembered for occurrences including the Vietnam War, the Civil Rights movement, the space race, and a general tone of social upheaval – the long seventies also represented a period when technological questions were gaining prominence, with thinkers such as Marshall McLuhan, Buckminster Fuller, Norbert Wiener, and Stewart Brand all putting forth visions of the way that the new consumer technologies would remake society: creating “global villages” or giving rise to a perception of all of humanity as passengers on “spaceship earth.” Yet they were hardly the only figures contemplating technology in that period, and many of the other visions that emerged aimed to directly challenge some of the assumptions and optimism of the likes of McLuhan and Fuller. In the long seventies, the question of what would come next was closely entwined with an evaluation of what had come before; indeed, “the breaking of retrogressive notions of technology coupled with the breaking of retrogressive technologies…undergoes a period of vital activity during the Long Seventies in the poems, fictions, and activist speech of what was then called cyberculture” (15). Granted, this was a “breaking” that generally had more to do with theorizing than with actual machine smashing. Instead, it could more accurately be seen as “dismantling,” the careful taking apart so that the functioning can be more fully understood and evaluated. Yet it is a thinking that, importantly, occurred against a recognition that the world was, as Norbert Wiener observed, “the world of Belsen and Hiroshima” (8). To make sense of the resistant narratives towards technology in the long seventies it is necessary to engage critically with the terminology of the period, and thus Tierney’s book represents a sort of conceptual “counterlexicon” to do just that.

As anyone who knows about the historical Luddites can attest, they did not hate technology (as such). Rather, they were opposed to particular machines being used in a particular way at a particular place and time. And it is a similar attitude towards Luddism (not as an opposition to all technology, but as an understanding that technology has social implications) that Tierney discusses in the long seventies. Luddism here comes to represent “a gradual relinquishing of machines whose continued use would contravene ethical principles” (30), and this attitude is found in Langdon Winner’s concept of “epistemological Luddism” (as discussed in his book Autonomous Technology) and in the poetry of Audre Lorde. While Lorde’s line “for the master’s tools will never dismantle the master’s house” continues to be well known by activists, the question of “tools” can also be engaged with quite literally. Approached with a mind towards Luddism, Lorde’s remarks can be seen as indicating that it is not only that “the master’s house” must be dismantled but “the master’s tools” as well – and Lorde’s writing suggests poetry as a key tool for the dismantler. The version of Luddism that emerges in the long seventies represents a “sort of relinquishing”; it “is not about machine-smashing at all” (47). Instead, it entails the careful work of examining machines to determine which are worth keeping.

The attitudes towards technology of the long seventies were closely entwined with a sense of the world as made seemingly smaller and more connected thanks to the new technologies of the era. A certain strand of thinking in this period, exemplified by McLuhan’s “global village” or Fuller’s “Spaceship Earth,” achieved great popular success even as reactionary racist and nativist notions lurked just below the surface of the seeming technological optimism of those concepts. Contrary to the “fatalistic acceptance of new technological constraints on life” (48), works by science fiction authors like Ursula Le Guin and Samuel R. Delany presented a notion of “communion, as a collaborative process of making do” (51). Works like The Dispossessed (Le Guin) and Triton (Delany) presented readers with visions, and questions, of “real coexistence…not the passage but the sharing of a moment” (63). In contrast to the “technological Messianism” (74) of the likes of Fuller and McLuhan, the “communion”-based works by the likes of Le Guin and Delany focused less on exuberance for the machines themselves and instead sought to critically engage with what types of coexistence such machines would and could genuinely facilitate.

Coined by Alice Mary Hilton in 1963, the idea of “cyberculture” did not originally connote the sort of blissed-out-techno-optimism that the term evokes today. Rather it was meant to be “an alternative to the global village and the one-town world, and an insistence on collective action in a world not only of Belsen and Hiroshima but also of ongoing struggles toward decolonization, sexual and gender autonomy, and racial justice” (12). Thus, “cyberculture” (and cybernetics more generally) may represent one of the alternative pathways along which technological society could have developed. What “cyberculture” represented was not an exuberant embrace of all things “cyber,” but an attempt to name and thereby open a space for protest, not “against thinking machines” but which would “interrupt the advancing consensus that such machines had shrunk the globe” (81). These concepts achieved further maturation in the Ad Hoc Committee’s “Triple Revolution Manifesto” (from 1964), which sought to link an emancipatory political program to advances in new technology, linking “cybernation to a decrease in capitalist, racist, and militarist violence” (85). Seizing upon an earnest belief that technological ethics could guide new technological developments towards just ends, “cyberculture” also imagined that such tools could supplant scarcity with abundance.

What “cyberculture”-based thinking consists of is a sort of theoretical imagining, which is why a document like a manifesto represents such an excellent example of “cyberculture” in practice. It is a sort of “distortion” that recognizes how “the fates of militarism, racism, and cybernation have only ever been knotted together” and “thus calls for imaginative practices, whether literary or activist, for cutting through the knot” (95). This is the sort of theorizing that can be seen in Martin Luther King, Jr.’s commentary on how science and technology had made of “this world a neighborhood” without yet making “of it a brotherhood” (96). The technological ethics of the advocates of “cyberculture” could be the tools with which to make “it a brotherhood” without discarding all of the tools that had made it first “a neighborhood.” The risks and opportunities of new technological forms were also commented upon in works like Shulamith Firestone’s The Dialectic of Sex, wherein she argued that women needed to seize and guide these technologies. Blending analysis of what is with a program for what could be, Firestone’s work shows “that if other technologies are possible, then other social practices, even practices that are rarely considered in relation to new technology, may be possible too” (105).

For some in the long seventies, challenging machinery still took on a destructive form, though this often entailed a sort of “revolutionary suicide”: an attempt to “prevent the becoming-machine of subjugated human bodies and selves” (113). A refusal to become a machine oneself, and a refusal to allow oneself to become fodder for the machine. Such a self-destructive act flows from the Pynchon-esque tragic recognition of a growing consensus “that nothing can be done to oppose” the new machines (122). Such woebegone dejection is in contrast to other attitudes that sought to not only imagine but to also construct new tools that would put the people and community first. John Mohawk, of the Haudenosaunee Confederacy of Mohawk, Oneida, Onondaga, Cayuga, and Seneca people, gave voice to this in his theorizing of “liberation technology.” As Mohawk explained at a UN session, “Decentralized technologies that meet the needs of the people those technologies serve will necessarily give life to a different kind of political structure, and it is safe to predict that the political structure that results will be anticolonial in nature” (127). The search for such alternative technologies suggested a framework in which what was needed was “machines to suit the community, or else no machines at all” (129) – a position that countered the technological abundance hoped for by “cyberculture” with an appeal for technologies of subsistence. After all, this was the world of Belsen and Hiroshima, “a world of new and barely understood technologies” (149), in such a world “where the very skin of the planet is a ledger of technological misapplications” (154) it is wise to proceed with caution and humility.

    The long seventies present a fascinating kaleidoscope of visions of technologies, how to live with them, how to select them, and how to think about them. What makes the long seventies so worthy of revisiting is that they and the present moment are both “seized with a critical discourse about technology, and by a popular social upheaval in which new social movements emerge, grow, and proliferate” (5). Luddism may be routinely held up as a foolish reaction, but “by breaking apart certain machines, we can learn to use them better, or never use them again. By dissecting certain technocentric cultural logics, we can likewise challenge or reject them” (162). That the Luddites are so constantly vilified may ultimately be a signal of their dangerous power, insofar as they show that people need not passively sit and accept everything that is sold to them as technological progress. Dismantling represents a politics “not as machine hating, but as a way to protect life against a large-scale regimentation and policing of security, labor, time, and community” (166).

    To engage in the fraught work of technological critique is to open oneself up to being labeled a Luddite (with the term being hurled as an epithet), to accusations of complicity in the very systems you are critiquing, and to the realization that many people simply don’t want to hear their smartphone habits criticized. Yet the various conceptual frameworks that can be derived from a consideration of “words against machines in the American long seventies” provide “tactics that might be repeated or emulated, if nostalgia and cynicism do not bar the way” (172). Such concepts present a method of pushing back at the “yes, but” logic which riddles so many discussions of technology today – conversations in which the downsides are acknowledged (the “yes”), yet where the counter is always offered that perhaps there’s still a way to use those technologies correctly (the “but”).

    In contrast to the comfortable rut of “yes, but” Tierney’s book argues for dismantling, wherein “to dismantle is to set aside the dithering of yes, but and to try instead the hard work of critique” (175).

    Running through many of the thinkers, writers, and activists detailed in Dismantlings is a genuine attempt to come to terms with the ways in which new technological forces are changing society. Though many of these individuals responded to such changes not by picking up hammers, but by turning to writing, this activity was always couched in a sense that the shifts afoot truly mattered. Agitated by the roaring clangor of the machines of their day, these figures from the long seventies were looking at the machines of their moment in order to consider what would need to be done to construct a different future. And they did this while looking askance at the more popular techno-utopian visions of the future being promulgated in their day. Writing of the historic Luddites, the historian David Noble commented that “the Luddites were perhaps the last people in the West to perceive technology in the present tense and to act upon that perception” (Noble, 7), and it may be tempting to suggest that the various figures cataloged in Dismantlings were too focused on the future to have acted upon technology in their present. Nevertheless, as Tierney notes, “the present does not precede the future; rather the future (like its past) distorts and neighbors the present” (173) – the Luddites may have acted in the present, but their eyes were also on the future. It is worth remembering that we do not make sense of the technologies around us solely by what they mean now, but by what we think they will mean for the future.

    While Dismantlings provides a “counterlexicon” drawn from the writing/thinking/acting of a range of individuals in the long seventies, there is something rather tragic about reading these thoughts two decades into the twenty-first century. After all, readers of Dismantlings find themselves in what would have been the future to these long seventies thinkers. And, to be blunt, the world of today seems more in line with those thinkers’ fears for the future than with their hopes. An “epistemological Luddism” has not been used to carefully evaluate which tools to keep and which to discard, “communion” has not become a guiding principle, and “cyberculture” has drifted away from Hilton’s initial meaning to become a stand-in for a sort of uncritical techno-utopianism. The “master’s tools” have expanded to encompass ever more powerful tools, and the “master’s house” appears sturdier than ever – worse still, many of us may have become so enamored by some of “the master’s tools” that we have started to entertain delusions that these are actually our tools. To a certain extent, Dismantlings stands as a reminder of a range of individuals who tried to warn us that we would wind up in the mess in which we find ourselves. Those who are equipped with such powers of perception are often mocked and derided in their own time, but looking back at them with hindsight one can get a discomforting sense of just how prescient they truly were.

    Matt Tierney’s Dismantlings: Words Against Machines in the American Long Seventies is a remarkable book. It is also a difficult book. Difficult not because of impenetrable theoretical prose (the writing is clear and crisp), but because it is always challenging to go back and confront the warnings that were ignored. At a moment when headlines are filled with sordid tales of the malfeasance of the tech behemoths, and increasingly terrifying news of the state of the planet, it is both reassuring and infuriating to recognize that it did not have to be this way. True, these long seventies figures did not specifically warn about Facebook, and climate change was not the term they used to speak of environmental degradation – but it’s doubtful that many of these figures would be particularly surprised by either occurrence.

    As a contribution to scholarship, Dismantlings represents a much-needed addition to the literature on the long seventies – particularly the literature that considers technology in that period. While much of the present literature (much of it excellent) dealing with those years has tended to focus on the hippies who fell in love with their computers, Tierney’s book is a reminder of those who never composed poems of praise for their machines. After all, not everyone believed that the computer would be an emancipatory technology. This book brings together a wide assortment of figures and draws useful connections between them that will hopefully rescue many a name from obscurity. And even those names that can hardly be called obscure appear in a new light when viewed through the lenses that Tierney develops in this book. While readers may be familiar with names like Lorde, Le Guin, Delany, and Pynchon – Tierney makes it clear that there is much to be gained by reading Hilton, Mohawk, Firestone, and revisiting the “Triple Revolution Manifesto.”

    Tierney also offers a vital intervention into ongoing discussions over the meaning of Luddism. While it may be fair to say that such discussions are occurring amongst a rather small group of people, it is a passionate debate nevertheless. Tierney avoids re-litigating the history of the original Luddites, and his timeline cuts off before the emergence of the Neo-Luddites, but his book provides valuable insight into the transformations the idea of Luddism went through in the long seventies. Granted, Luddism does not always appear to be a term that was being embraced by the figures in Tierney’s history. Certainly, Winner developed the concept of “epistemological Luddism,” and Pynchon is still remembered for his “Is it O.K. to Be a Luddite?” op-ed, but many of those who spoke about dismantling did not don the mask, or pick up the hammer, of General Ludd. Thus, this book is a clear attempt not to restate others’ views on Luddism, but to freshly theorize the idea. Drawing on his long seventies sources, Tierney writes that:

    Luddism is not the destruction of all machines. And neither is it the hatred of machines as such. Like cyberculture, it is another word for dismantling. Luddism is the performative breaking of machines that limit species expression and impede planetary survival. (13)

    This is a robust and loaded definition of Luddism. While it clearly moves Luddism towards a practice instead of simply a descriptor for particular historical actors, it also presents Luddism as a constructive (as opposed to destructive) process. There are several aspects of Tierney’s definition that deserve particular attention. First, by also evoking “cyberculture” (referring to Hilton’s ethically grounded notion when she coined the term), Tierney demonstrates that Luddism is not the only word or tactic for dismantling. Second, by evoking “the performative breaking,” Tierney moves Luddism away from the blunt force of hammers and towards the more difficult work of critical evaluation. Lastly, by linking Luddism to “species expression and…planetary survival,” Tierney highlights that even if this Luddism is not “the hatred of machines as such” it still entails the recognition that there are some machines that should be hated – and that should be taken apart. It’s the sort of message that you can imagine many people getting behind, even as one can anticipate the choruses of “yes, but” that would be sure to greet this.

    Granted, even though Tierney considers a fair number of manifestos of a revolutionary sort, Dismantlings is not a new Luddite manifesto (though it might be a Luddite lexicon). While Tierney writes of the various figures he analyzes with empathy and affection, he also writes with a certain weariness. After all, as was noted earlier, we are currently living in the world about which these critics tried to warn us. And therefore Tierney can note, “if no political overturning followed the literary politics of cyberculture and Luddism in their own moment, then certainly none will follow them now” (25). Nevertheless, Tierney couches these dour comments in the observation that, “even as a revolution fails, its failure fuels common feeling without which subsequent revolutions cannot succeed” (25). At the very least the assorted thinkers and works described in Dismantlings provide a rich resource to those in the present who are concerned about “species expression” and “planetary survival.” Indeed, those advocating to break up the tech companies or pushing for the Green New Deal can learn a great deal by revisiting the works discussed in Dismantlings.

    Nevertheless, it feels as though there are some key characters missing from Dismantlings. To be clear, this point is not meant to detract from Tierney’s excellent and worthwhile book. Furthermore, it must be noted that devotees of particular theorists and social critics tend to have a strong “why isn’t [the theorist/social critic I am devoted to] discussed more in here!?” reaction to works. Nevertheless, there were certain figures who seemed to be oddly missing from Dismantlings. Reflecting on the types of machines against which figures in the long seventies were reacting, Tierney writes that “the war machine, the industrial machine, the computer, and the machines of state are all connected” (4). And it was the dangerous connection of all of these that the social critic Lewis Mumford sought to describe in his theorizing of “the megamachine” – theorizing which he largely did in his two volume Myth of the Machine (which was published in the long seventies). Though Mumford’s idea of “technic” eras is briefly mentioned early in Dismantlings, his broader thinking, which touches directly on the core concerns of Dismantlings, is not remarked on. Several figures who were heavily influenced by Mumford’s work appear in Dismantlings (notably Bookchin and Roszak), and Mumford’s thought could have certainly bolstered some of the book’s arguments. Mumford, after all, saw himself as a bit of an anti-McLuhan – and in evaluating thinkers who were concerned with what technology meant for “species expression” and “planetary survival” Mumford deserves more attention. Given the overall thrust of Dismantlings it also might have been interesting to see Erich Fromm’s The Revolution of Hope: Toward a Humanized Technology and Ivan Illich’s Tools for Conviviality discussed. Granted, these comments are not meant as attacks on Tierney’s excellent book – they are simply observations from an avowed Mumford partisan.

    To fully appreciate why the thoughts from the long seventies still matter today it may be useful to consider a line from one of Mumford’s early works. As Mumford wrote, in 1931, “every generation revolts against its fathers and makes friends with its grandfathers” (Mumford, 1). To a certain extent, Dismantlings is an argument for those currently invested in debates around technology to revisit “and make friends” with earlier generations of critics. There is much to be gained from such a move. Notable here is a shift in an evaluation of dangers. Throughout Dismantlings Tierney returns frequently to Wiener’s line that “this is the world of Belsen and Hiroshima” – and without meaning to be crass this is an understanding of the world that has somewhat receded into the past as the memory of those events becomes enshrined in history books. Yet for the likes of Wiener and many of the other individuals discussed in Dismantlings, “Belsen and Hiroshima” were not abstractions or distant memories – they were not the crimes that could be consigned to the past. Rather they were bleak reminders of the depths to which humanity could sink, and the way in which science and technology could act as a weight to drag humanity even deeper. Today’s world is the world of climate change, border walls, and surveillance capitalism – but it is still “the world of Belsen and Hiroshima.”

    There is much that needs to be dismantled, and not much time in which to do that work.

    The lessons from the long seventies are those that we are still struggling to reckon with today, including the recognition that in order to fully make sense of the machines around us it may be necessary to dismantle many of them. Of course, “not everything should be dismantled, but many things should be and some things must be, even if we don’t know where to begin” (163).

    Tierney’s book does not provide an easy answer, but it does show where we should begin.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2 Review Digital Studies section.


    _____

    Works Cited

    • Lewis Mumford. The Brown Decades. New York: Dover Books, 1971.
    • David F. Noble. Progress Without People. Toronto: Between the Lines, 1995.
    • E.P. Thompson. The Making of the English Working Class. New York: Vintage Books, 1966.
  • Zachary Loeb — Flamethrowers and Fire Extinguishers (Review of Jeff Orlowski, dir., The Social Dilemma)


    a review of Jeff Orlowski, dir., The Social Dilemma (Netflix/Exposure Labs/Argent Pictures, 2020)

    by Zachary Loeb


    The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But, in fact, there are actors!

    – Joseph Weizenbaum (1976)

    Why did you last look at your smartphone? Did you need to check the time? Was picking it up a conscious decision driven by the need to do something very particular, or were you just bored? Did you turn to your phone because its buzzing and ringing prompted you to pay attention to it? Regardless of the particular reasons, do you sometimes find yourself thinking that you are staring at your phone (or other computerized screens) more often than you truly want? And do you ever feel, even if you dare not speak this suspicion aloud, that your gadgets are manipulating you?

    The good news is that you aren’t just being paranoid, your gadgets were designed in such a way as to keep you constantly engaging with them. The bad news is that you aren’t just being paranoid, your gadgets were designed in such a way as to keep you constantly engaging with them. What’s more, on the bad news front, these devices (and the platforms they run) are constantly sucking up information on you and are now pushing and prodding you down particular paths. Furthermore, alas more bad news, these gadgets and platforms are not only wreaking havoc on your attention span they are also undermining the stability of your society. Nevertheless, even though there is ample cause to worry, the new film The Social Dilemma ultimately has good news for you: a collection of former tech-insiders is starting to speak out! Sure, many of these individuals are the exact people responsible for building the platforms that are currently causing so much havoc—but they meant well, they’re very sorry, and (did you hear?) they meant well.

    Directed by Jeff Orlowski, and released to Netflix in early September 2020, The Social Dilemma is a docudrama that claims to provide an unsparing portrait of what social media platforms have wrought. While the film is made up of a hodgepodge of elements, at the core of the work are a series of interviews with Silicon Valley alumni who are concerned with the direction in which their former companies are pushing the world. Most notable amongst these, the film’s central character to the extent that it has one, is Tristan Harris (formerly a design ethicist at Google, and one of the cofounders of The Center for Humane Technology) who is not only repeatedly interviewed but is also shown testifying before the Senate and delivering a TED-style address to a room filled with tech luminaries. This cast of remorseful insiders is bolstered by a smattering of academics and non-profit leaders, who provide some additional context and theoretical heft to the insiders’ recollections. And beyond these interviews the film incorporates a fictional quasi-narrative element depicting the members of a family (particularly its three teenage children) as they navigate their Internet-addled world – with this narrative providing the film an opportunity to strikingly dramatize how social media “works.”

    The Social Dilemma makes some important points about the way that social media works, and the insiders interviewed in the film bring a noteworthy perspective. Yet beyond the sad eyes, disturbing animations, and ominous music The Social Dilemma is a piece of manipulative filmmaking on par with the social media platforms it critiques. While presenting itself as a clear-eyed expose of Silicon Valley, the film is ultimately a redemption tour for a gaggle of supposedly reformed techies wrapped in an account that is so desperate to appeal to “both sides” that it is unwilling to speak hard truths.

    The film warns that the social media companies are not your friends, and that is certainly true, but The Social Dilemma is not your friend either.

    The Social Dilemma

    As the film begins the insiders introduce themselves, naming the companies where they had worked, and identifying some of the particular elements (such as the “like” button) with which they were involved. Their introductions are peppered with expressions of concern intermingled with earnest comments about how “Nobody, I deeply believe, ever intended any of these consequences,” and that “There’s no one bad guy.” As the film transitions to Tristan Harris rehearsing for the talk that will feature later in the film, he comments that “there’s a problem happening in the tech industry, and it doesn’t have a name.” After recounting his personal awakening, whilst working at Google, and his attempt to spark a serious debate about these issues with his coworkers, the film finds “a name” for the “problem” Harris had alluded to: “surveillance capitalism.” The thinker who coined that term, Shoshana Zuboff, appears to discuss this concept which captures the way in which Silicon Valley thrives not off of users’ labor but off of every detail that can be sucked up about those users and then sold off to advertisers.

    After being named, “surveillance capitalism” hovers in the explanatory background as the film considers how social media companies constantly pursue three goals: engagement (to keep you coming back), growth (to get you to bring in more users), and advertising (to get better at putting the right ad in front of your eyes, which is how the platforms make money). The algorithms behind these platforms are constantly being tweaked through A/B testing, with every small improvement being focused around keeping users more engaged. Numerous problems emerge: designed to be addictive, these platforms and devices claw at users’ attention; teenagers (especially young ones) struggle as their sense of self-worth becomes tied to “likes;” misinformation spreads rapidly in an information ecosystem wherein the incendiary gets more attention than the true; and the slow processes of democracy struggle to keep up with the speed of technology. Though the problems are grave, and the interviewees clearly concerned, the tone is still one of hopefulness; the problem here is not really social media, but “surveillance capitalism,” and if “surveillance capitalism” can be thwarted then the true potential of social media can be attained. And the people leading that charge against “surveillance capitalism”? Why, none other than the reformed insiders in the film.

    While the bulk of the film consists of interviews, and news clips, the film is periodically interrupted by a narrative in which a family with three teenage children is shown. The Mother (Barbara Gehring) and Step-Father (Chris Grundy) are concerned with their children’s social media usage, even as they are glued to their own devices. As for the children: the oldest, Cassandra (Kara Hayward), is presented as skeptical towards social media; the youngest, Isla (Sophia Hammons), is eager for online popularity; and the middle child, Ben (Skyler Gisondo), eventually falls down the rabbit hole of recommended conspiratorial content. As the insiders, and academics, talk about the various dangers of social media the film shifts to the narrative to dramatize these moments – thus a discussion of social media’s impact on young teenagers, particularly girls, cuts to Isla being distraught after an insulting comment is added to one of the images she uploads. Cassandra (that name choice can’t be a coincidence) is presented as most in line with the general message of the film, and the character refers to Jaron Lanier as a “genius” and in another sequence is shown reading Zuboff’s The Age of Surveillance Capitalism. Yet the member of the family the film dwells on the most is almost certainly Ben. For the purposes of dramatizing how an algorithm works, the film repeatedly returns to a creepy depiction of the Advertising, Engagement, and Growth AIs (all played by Vincent Kartheiser) as they scheme to get Ben to stay glued to his phone. Beyond the screens, the world in the narrative is being rocked by a strange protest movement calling itself “The Extreme Center” – whose argument seems to be that both sides can’t be trusted – and Ben eventually gets wrapped up in their message. The family’s narrative concludes with Ben and Cassandra getting arrested at a raucous rally held by “The Extreme Center,” sitting handcuffed on the ground and wondering how it is that this could have happened.

    To the extent that The Social Dilemma builds towards a conclusion, it is the speech that Harris gives (before an audience that includes many of the other interviewees in the film). And in that speech, and the other comments made around it, the point that is emphasized is that Silicon Valley must get away from “surveillance capitalism.” It must embrace “humane technology” that seeks to empower users, not entangle them. Emphasizing that, despite how things have turned out, “I don’t think these guys set out to be evil,” the various insiders double down on their belief in high-tech’s liberatory potential. Contrasting rather unflattering imagery of Mark Zuckerberg testifying (without genuinely calling him out) with images of Steve Jobs in his iconic turtleneck, the film claims “the idea of humane technology, that’s where Silicon Valley got its start.” And before the credits roll, Harris seems to speak for his fellow insiders as he notes “we built these things, and we have a responsibility to change it.” For those who found the film unsettling, and who are confused by exactly what they are meant to do if they are not part of Harris’s “we,” the film offers some straightforward advice. Drawing on their own digital habits, the insiders recommend: turning off notifications, never watching a recommended video, opting for a less-invasive search engine, trying to escape your content bubble, keeping your devices out of your bedroom, and being a critical consumer of information.

    It is a disturbing film, and it is constructed so as to unsettle the viewer, but it still ends on a hopeful note: reform is possible, and the people in this film are leading that charge. The problem is not social media as such, but the ways in which “surveillance capitalism” has thwarted what social media could really be. If, after watching The Social Dilemma, you feel concerned about what “surveillance capitalism” has done to social media (and you feel prepared to make some tweaks in your social media use) but ultimately trust that Silicon Valley insiders are on the case – then the film has succeeded in its mission. After all, the film may be telling you to turn off Facebook notifications, but it doesn’t recommend deleting your account.

    Yet one of the points the film makes is that you should not accept the information that social media presents to you at face value. And in the same spirit, you should not accept the comments made by oh-so-remorseful Silicon Valley insiders at face value either. To be absolutely clear: we should be concerned about the impacts of social media, we need to work to rein in the power of these tech companies, we need to be willing to have the difficult discussion about what kind of society we want to live in…but we should not believe that the people who got us into this mess – who lacked the foresight to see the possible downsides in what they were building – will get us out of this mess. If these insiders genuinely did not see the possible downsides of what they were building, then they are fools who should not be trusted. And if these insiders did see the possible downsides, continued building these things anyway, and are now pretending that they did not see the downsides, then they are liars who definitely should not be trusted.

    It’s true, arsonists know a lot about setting fires, and a reformed arsonist might be able to give you some useful fire safety tips—but they are still arsonists.

    There is much to be said about The Social Dilemma. Indeed, anyone who cares about these issues (unfortunately) needs to engage with The Social Dilemma if for no other reason than the fact that this film will be widely watched, and will thus set much of the ground on which these discussions take place. Therefore, it is important to dissect certain elements of the film. To be clear, there is a lot to explore in The Social Dilemma—a book or journal issue could easily be published in which the docudrama is cut into five minute segments with academics and activists being each assigned one segment to comment on. While there is not the space here to offer a frame by frame analysis of the entire film, there are nevertheless a few key segments in the film which deserve to be considered. Especially because these key moments capture many of the film’s larger problems.

    “when bicycles showed up”

    A moment in The Social Dilemma that perfectly, if unintentionally, sums up many of the major flaws with the film occurs when Tristan Harris opines on the history of bicycles. There are several problems in these comments, but taken together these lines provide you with almost everything you need to know about the film. As Harris puts it:

    No one got upset when bicycles showed up. Right? Like, if everyone’s starting to go around on bicycles, no one said, ‘Oh, my God, we’ve just ruined society. [chuckles] Like, bicycles are affecting people. They’re pulling people away from their kids. They’re ruining the fabric of democracy. People can’t tell what’s true.’ Like we never said any of that stuff about a bicycle.

    Here’s the problem: Harris’s comments about bicycles are wrong.

    They are simply historically inaccurate. Even basic research into how people reacted when bicycles were introduced would reveal that many people were in fact quite “upset when bicycles showed up.” People absolutely were concerned that bicycles were “affecting people,” and there were certainly some who were anxious about what these new technologies meant for “the fabric of democracy.” Granted, that there were such adverse reactions to the introduction of bicycles should not be seen as particularly surprising, because even a fairly surface-level reading of the history of technology reveals that when new technologies are introduced they tend to be met not only with excitement, but also with dread.

    Yet, what makes Harris’s point so interesting is not just that he is wrong, but that he is so confident while being so wrong. Smiling before the camera, in what is obviously supposed to be a humorous moment, Harris makes a point about bicycles that is surely one that will stick with many viewers—and what he is really revealing is that he needs to take some history classes (or at least do some reading). It is genuinely rather remarkable that this sequence made it into the final cut of the film. This was clearly an expensive production, but they couldn’t have hired a graduate student to watch the film and point out “hey, you should really cut this part about bicycles, it’s wrong”? It is hard to put much stock in Harris, and friends, as emissaries of technological truth when they can’t be bothered to do basic research.

    That Harris speaks so assuredly about something which he is so wrong about gets at one of the central problems with the reformed insiders of The Social Dilemma. Though these are clearly intelligent people (lots of emphasis is placed on the fancy schools they attended), they know considerably less than they would like the viewers to believe. Of course, one of the ways that they get around this is by confidently pretending they know what they’re talking about, which manifests itself by making grandiose claims about things like bicycles that just don’t hold up. The point is not to mock Harris for this mistake (though it really is extraordinary that the segment did not get cut), but to make the following point: if Harris, and his friends, had known a bit more about the history of technology, and perhaps if they had a bit more humility about what they don’t know, perhaps they would not have gotten all of us into this mess.

    A point that is made by many of the former insiders interviewed for the film is that they didn’t know what the impacts would be. Over and over again we hear some variation of “we meant well” or “we really thought we were doing something great.” It is easy to take such comments as expressions of remorse, but it is more important to see such comments as confessions of that dangerous mixture of hubris and historical/social ignorance that is so common in Silicon Valley. Or, to put it slightly differently, these insiders really needed to take some more courses in the humanities. You know how you could have known that technologies often have unforeseen consequences? Study the history of technology. You know how you could have known that new media technologies have jarring political implications? Read some scholarship from media studies. A point that comes up over and over again in such scholarly work, particularly works that focus on the American context, is that optimism and enthusiasm for new technology often keeps people (including inventors) from seeing the fairly obvious risks—and all of these woebegone insiders could have known that…if they had only been willing to do the reading. Alas, as anyone who has spent time in a classroom knows, a time honored way of covering up for the fact that you haven’t done the reading is just to speak very confidently and hope that your confidence will successfully distract from the fact that you didn’t do the reading.

    It would be an exaggeration to claim “all of these problems could have been prevented if these people had just studied history!” And yet, these insiders (and society at large) would likely be better able to make sense of these various technological problems if more people had an understanding of that history. At the very least, such historical knowledge can provide warnings about how societies often struggle to adjust to new technologies, can teach how technological progress and social progress are not synonymous, can demonstrate how technologies have a nasty habit of biting back, and can make clear the many ways in which the initial liberatory hopes that are attached to a technology tend to fade as it becomes clear that the new technology has largely reinscribed a fairly conservative status quo.

    At the very least, knowing a bit more about the history of technology can keep you from embarrassing yourself by confidently claiming that “we never said any of that stuff about a bicycle.”

    “to destabilize”

    While The Social Dilemma expresses concern over how digital technologies impact a person’s body, the film is even more concerned about the way these technologies impact the body politic. A worry that is captured by Harris’s comment that:

    We in the tech industry have created the tools to destabilize and erode the fabric of society.

    That’s quite the damning claim, even if it is one of the claims in the film that probably isn’t all that controversial these days. Though many of the insiders in the film pine nostalgically for those idyllic days from ten years ago when much of the media and the public looked so warmly towards Silicon Valley, this film is being released at a moment when much of that enthusiasm has soured. One of the odd things about The Social Dilemma is that politics are simultaneously all over the film, and yet politics in the film are very slippery. When the film warns of looming authoritarianism, Bolsonaro gets some screen time and Putin gets some ominous screen time—but though Trump looms in the background of the film he’s pretty much unseen and unnamed. And when US politicians do make appearances, we get Marco Rubio and Jeff Flake talking about how people have become too polarized and Jon Tester reacting with discomfort to Harris’s testimony. Of course, in the clip that is shown, Rubio speaks some pleasant platitudes about the virtues of coming together…but what does his voting record look like?

    The treatment of politics in The Social Dilemma comes across most clearly in the narrative segment, wherein much attention is paid to a group that calls itself “The Extreme Center.” Though the ideology of this group is never made quite clear, it seems to be a conspiratorial group that takes as its position that “both sides are corrupt” – rejecting left and right, it therefore places itself in “the extreme center.” It is into this group, and the political rabbit hole of its content, that Ben falls in the narrative – and the raucous rally (that ends in arrests) in the narrative segment is one put on by the “extreme center.” It may appear that “the extreme center” is just a simple storytelling technique, but more than anything else the creation of this fictional protest movement feels like a way for the film to get around actually having to deal with real-world politics.

    The film includes clips from a number of protests (though it does not bother to explain who these people are and why they are protesting), and there are some moments when various people can be heard specifically criticizing Democrats or Republicans. But even as the film warns of “the rabbit hole” it doesn’t really spend much time on examples. Heck, the first time the words “surveillance capitalism” are spoken in the film is in a clip of Tucker Carlson. Some points are made about “pizzagate” but the documentary avoids commenting on the rapidly spreading QAnon conspiracy theory. And to the extent that any specific conspiracy receives significant attention it is the “flat earth” conspiracy. Granted, it’s pretty easy to deride the flat earthers, and in focusing on them the film makes a very conscious decision not to focus on white supremacist content and QAnon. Ben falls down the “extreme center” rabbit hole, and it may well be that the filmmakers have him fall down this fictional rabbit hole so that they don’t have to talk about the likelihood that (in the real world) he would fall down a far-right rabbit hole. But The Social Dilemma doesn’t want to make that point; after all, in the political vision it puts forth the problem is that there is too much polarization and extremism on both sides.

    The Social Dilemma clearly wants to avoid taking sides. And in so doing demonstrates the ways in which Silicon Valley has taken sides. After all, to focus so heavily on polarization and the extremism of “both sides” just serves to create a false equivalency where none exists. But, the view that “the Trump administration has mismanaged the pandemic” and the view that “the pandemic is a hoax” – are not equivalent. The view that “climate change is real” and “climate change is a hoax” – are not equivalent. People organizing for racial justice and people organizing because they believe that Democrats are satanic cannibal pedophiles – are not equivalent. The view that “there is too much money in politics” and the view that “the Jews are pulling the strings” – are not equivalent. Of course, to say that these things “are not equivalent” is to make a political judgment, but by refusing to make such a judgment The Social Dilemma presents both sides as being equivalent. There are people online who are organizing for the cause of racial justice, and there are white-supremacists organizing online who are trying to start a race war—those causes may look the same to an algorithm, and they may look the same to the people who created those algorithms, but they are not the same.

    You cannot address the fact that Facebook and YouTube have become hubs of violent xenophobic conspiratorial content unless you are willing to recognize that Facebook and YouTube actively push violent xenophobic conspiratorial content.

    It is certainly true that there are activist movements from the left and the right organizing online at the moment, but when you watch a movie trailer on YouTube the next recommended video isn’t going to be a talk by Angela Davis.

    “it’s the critics”

    Much of the content of The Social Dilemma is unsettling, and the film makes it clear that change is necessary. Nevertheless, the film ends on a positive note. Pivoting away from gloominess, the film shows the rapt audience nodding as Harris speaks of the need for “humane technology,” and this assembled cast of reformed insiders is presented as proof that Silicon Valley is waking up to the need to take responsibility. Near the film’s end, Jaron Lanier hopefully comments that:

    it’s the critics that drive improvement. It’s the critics who are the true optimists.

    Thus, the sense that is conveyed at the film’s close is that despite the various worries that had been expressed—the critics are working on it, and the critics are feeling good.

    But, who are the critics?

    The people interviewed in the film, obviously.

    And that is precisely the problem. “Critic” is something of a challenging term to wrestle with as it doesn’t necessarily take much to be able to call yourself, or someone else, a critic. Thus, the various insiders who are interviewed in the film can all be held up as “critics” and can all claim to be “critics” thanks to the simple fact that they’re willing to say some critical things about Silicon Valley and social media. But what is the real content of the criticisms being made? Some critics are going to be more critical than others, so how critical are these critics? Not very.

    The Social Dilemma is a redemption tour that allows a bunch of remorseful Silicon Valley insiders to rebrand themselves as critics. Based on the information provided in the film it seems fairly obvious that a lot of these individuals are responsible for causing a great deal of suffering and destruction, but the film does not argue that these men (and they are almost entirely men) should be held accountable for their deeds. The insiders have harsh things to say about algorithms, they too have been buffeted about by nonstop nudging, they are also concerned about the rabbit hole, they are outraged at how “surveillance capitalism” has warped technological possibilities—but remember, they meant well, and they are very sorry.

    One of the fascinating things about The Social Dilemma is that in one scene a person will proudly note that they are responsible for creating a certain thing, and then in the next scene they will say that nobody is really to blame for that thing. Certainly not them, they thought they were making something great! The insiders simultaneously want to enjoy the cultural clout and authority that comes from being the one who created the like button, while also wanting to escape any accountability for being the person who created the like button. They are willing to be critical of Silicon Valley, they are willing to be critical of the tools they created, but when it comes to their own culpability they are desperate to hide behind a shield of “I meant well.” The insiders do a good job of saying remorseful words, and the camera catches them looking appropriately pensive, but it’s no surprise that these “critics” should feel optimistic, they’ve made fortunes utterly screwing up society, and they’ve done such a great job of getting away with it that now they’re getting to elevate themselves once again by rebranding themselves as “critics.”

    To be a critic of technology, to be a social critic more broadly, is rarely a particularly enjoyable or a particularly profitable undertaking. Most of the time, if you say anything critical about technology you are mocked as a Luddite, laughed at as a “prophet of doom,” derided as a technophobe, accused of wanting everybody to go live in caves, and banished from the public discourse. That is the history of many of the twentieth century’s notable social critics who raised the alarm about the dangers of computers decades before most of the insiders in The Social Dilemma were born. Indeed, if you’re looking for a thorough retort to The Social Dilemma you cannot really do better than reading Joseph Weizenbaum’s Computer Power and Human Reason—a book which came out in 1976. That a film like The Social Dilemma is being made may be a testament to some shifting attitudes towards certain types of technology, but it was not that long ago that if you dared suggest that Facebook was a problem you were denounced as an enemy of progress.

    There are many phenomenal critics speaking out about technology these days. To name only a few: Safiya Noble has written at length about the ways that the algorithms built by companies like Google and Facebook reinforce racism and sexism; Virginia Eubanks has exposed the ways in which high-tech tools of surveillance and control are first deployed against society’s most vulnerable members; Wendy Hui Kyong Chun has explored how our usage of social media becomes habitual; Jen Schradie has shown the ways in which, despite the hype to the contrary, online activism tends to favor right-wing activists and causes; Sarah Roberts has pulled back the screen on content moderation to show how much of the work supposedly being done by AI is really being done by overworked and under-supported laborers; Ruha Benjamin has made clear the ways in which discriminatory designs get embedded in and reified by technical systems; Christina Dunbar-Hester has investigated the ways in which communities oriented around technology fail to overcome issues of inequality; Sasha Costanza-Chock has highlighted the need for an approach to design that treats challenging structural inequalities as the core objective, not an afterthought; Morgan Ames expounds upon the “charisma” that develops around certain technologies; and Meredith Broussard has brilliantly inveighed against the sort of “technochauvinist” thinking—the belief that technology is the solution to every problem—that is so clearly visible in The Social Dilemma. To be clear, this list of critics is far from all-inclusive. There are numerous other scholars who certainly could have had their names added here, and there are many past critics who deserve to be named for their disturbing prescience.

    But you won’t hear from any of those contemporary critics in The Social Dilemma. Instead, viewers of the documentary are provided with a steady set of mostly male, mostly white, reformed insiders who were unable to predict that the high-tech toys they built might wind up having negative implications.

    It is not only that The Social Dilemma ignores most of the figures who truly deserve to be seen as critics, but that by doing so The Social Dilemma sets the boundaries for who gets to be a critic and what that criticism can look like. The world of criticism that The Social Dilemma sets up is one wherein a person achieves legitimacy as a critic of technology as a result of having once been a tech insider. Thus what the film does is lay out, and then set about policing the borders of, what can pass for acceptable criticism of technology. This not only limits the cast of critics to a narrow slice of mostly white, mostly male insiders, it also limits what can be put forth as a solution. You can rest assured that the former insiders are not going to advocate for a response that would involve holding the people who build these tools accountable for what they’ve created. On the one hand it’s remarkable that no one in the film really goes after Mark Zuckerberg, but many of these insiders can’t go after Zuckerberg—because any vitriol they direct at him could just as easily be directed at them as well.

    It matters who gets to be deemed a legitimate critic. When news networks are looking to have a critic on, it matters whether they call Tristan Harris or one of the previously mentioned thinkers; when Facebook does something else horrendous, it matters whether a newspaper seeks out someone whose own self-image is bound up in the idea that the company means well or someone who is willing to say that Facebook is itself the problem. When there are dangerous fires blazing everywhere, it matters whether the voices that get heard are apologetic arsonists or firefighters.

    Near the film’s end, while the credits play, as Jaron Lanier speaks of Silicon Valley he notes “I don’t hate them. I don’t wanna do any harm to Google or Facebook. I just want to reform them so they don’t destroy the world. You know?” And these comments capture the core ideology of The Social Dilemma, that Google and Facebook can be reformed, and that the people who can reform them are the people who built them.

    But considering all of the tangible harm that Google and Facebook have done, it is far past time to say that it isn’t enough to “reform” them. We need to stop them.

    Conclusion: On “Humane Technology”

    The Social Dilemma is an easy film to criticize. After all, it’s a highly manipulative piece of filmmaking, filled with overly simplified claims, historical inaccuracies, conviction-lacking politics, and a cast of remorseful insiders who still believe Silicon Valley’s basic mythology. The film is designed to scare you, but it then works to direct that fear into a few banal personal lifestyle tweaks, while convincing you that Silicon Valley really does mean well. It is important to view The Social Dilemma not as a genuine warning, or as a push for a genuine solution, but as part of a desperate move by Silicon Valley to rehabilitate itself so that any push for reform and regulation can be captured and defanged by “critics” of its own choosing.

    Yet, it is too simple (even if it is accurate) to portray The Social Dilemma as an attempt by Silicon Valley to control both the sale of flamethrowers and fire extinguishers, because such a focus keeps our attention pinned to Silicon Valley. It is easy to criticize Silicon Valley, and Silicon Valley definitely needs to be criticized—but the bright-eyed faith in high-tech gadgets and platforms that these reformed insiders still cling to is not shared only by them. The people in this film blame “surveillance capitalism” for warping the liberatory potential of Internet connected technologies, and many people would respond to this by pushing back on Zuboff’s neologism to point out that “surveillance capitalism” is really just “capitalism” and that therefore the problem is really that capitalism is warping the liberatory potential of Internet connected technologies. Yes, we certainly need to have a conversation about what to do with Facebook and Google (dismantle them). But at a certain point we also need to recognize that the problem is deeper than Facebook and Google; at a certain point we need to be willing to talk about computers.

    The question that occupied many past critics of technology was the matter of what kinds of technology do we really need? And they were clear that this was a question that was far too important to be left to machine-worshippers.

    The Social Dilemma responds to the question of “what kind of technology do we really need?” by saying “humane technology.” After all, the organization The Center for Humane Technology is at the core of the film, and Harris speaks repeatedly of “humane technology.” At the surface level it is hard to imagine anyone saying that they disapprove of the idea of “humane technology,” but what the film means by this (and what the organization means by this) is fairly vacuous. When the Center for Humane Technology launched in 2018, to a decent amount of praise and fanfare, it was clear from the outset that its goal had more to do with rehabilitating Silicon Valley’s image than truly pushing for a significant shift in technological forms. Insofar as “humane technology” means anything, it stands for platforms and devices that are designed to be a little less intrusive, that are designed to try to help you be your best self (whatever that means), that try to inform you instead of misinform you, and that make it so that you can think nice thoughts about the people who designed these products. The purpose of “humane technology” isn’t to stop you from being “the product,” it’s to make sure that you’re a happy product. “Humane technology” isn’t about deleting Facebook, it’s about renewing your faith in Facebook so that you keep clicking on the “like” button. And, of course, “humane technology” doesn’t seem to be particularly concerned with all of the inhumanity that goes into making these gadgets possible (from mining, to conditions in assembly plants, to e-waste). “Humane technology” isn’t about getting Ben or Isla off their phones, it’s about making them feel happy when they click on them instead of anxious. In a world of empowered arsonists, “humane technology” seeks to give everyone a pair of asbestos socks.

    Many past critics also argued that what was needed was to place a new word before technology – they argued for “democratic” technologies, or “holistic” technologies, or “convivial” technologies, or “appropriate” technologies, and this list could go on. Yet at the core of those critiques was not an attempt to salvage the status quo but a recognition that what was necessary in order to obtain a different sort of technology was to have a different sort of society. Or, to put it another way, the matter at hand is not to ask “what kind of computers do we want?” but to ask “what kind of society do we want?” and to then have the bravery to ask how (or if) computers really fit into that world—and if they do fit, how ubiquitous they will be, and who will be responsible for the mining/assembling/disposing that are part of those devices’ lifecycles. Certainly, these are not easy questions to ask, and they are not pleasant questions to mull over, which is why it is so tempting to just trust that the Center for Humane Technology will fix everything, or to just say that the problem is Silicon Valley.

    Thus as the film ends we are left squirming unhappily as Netflix (which has, of course, noted the fact that we watched The Social Dilemma) asks us to give the film a thumbs up or a thumbs down – before it begins auto-playing something else.

    The Social Dilemma is right in at least one regard, we are facing a social dilemma. But as far as the film is concerned, your role in resolving this dilemma is to sit patiently on the couch and stare at the screen until a remorseful tech insider tells you what to do.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2 Review Digital Studies section.


    _____

    Works Cited

    • Weizenbaum, Joseph. 1976. Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W. H. Freeman and Company.
  • Moira Weigel — Palantir Goes to the Frankfurt School


    Moira Weigel

    This essay has been peer-reviewed by “The New Extremism” special issue editors (Adrienne Massanari and David Golumbia), and the b2o: An Online Journal editorial board.

    Since the election of Donald Trump, a growing body of research has examined the role of digital technologies in new right-wing movements (Lewis 2018; Hawley 2017; Neiwert 2017; Nagle 2017). This article will explore a distinct, but related, subject: new right-wing tendencies within the tech industry itself. Our point of entry will be an improbable document: a German-language dissertation submitted by an American to the faculty of social sciences at J. W. Goethe University of Frankfurt in 2002. Entitled Aggression in the Life-World, the dissertation aims to describe the role that aggression plays in social integration, or the set of processes that lead individuals in a given society to feel bound to one another. To that end, it offers a “systematic” reinterpretation of Theodor Adorno’s Jargon of Authenticity (1973). It is of interest primarily because of its author: Alexander C. Karp.[1]

    Karp, as some readers may know, did not pursue a career in academia. Instead, he became the CEO of the powerful and secretive data analytics company, Palantir Technologies. His dissertation has inspired speculation for years, but no journalist or scholar has yet analyzed it. Doing so, I will argue that it offers insight into the intellectual formation of an influential network of actors in and around Silicon Valley, a network articulating ideas and developing business practices that challenge longstanding beliefs about how Silicon Valley thinks and works.

    For decades, a view prevailed that the politics of both digital technologies and most digital technologists were liberal, or neoliberal, depending on how critically the author in question saw them. Liberalism and neoliberalism are complex and contested concepts. But broadly speaking, digital networks have been seen as embodying liberal or neoliberal logics insofar as they treated individuals as abstractly equal, rendering social aspects of embodiment like race and gender irrelevant, and allowing users to engage directly in free expression and free market competition (Kolko and Nakamura, 2000; Chun 2005, 2011, 2016). The ascendance of the Bay Area tech industry over competitors in Boston or in Europe was explained as a result of its early adoption of new forms of industrial organization, built on flexible, short-term contracts and a strong emotional identification between workers and their jobs (Hayes 1989; Saxenian 1994).

    Technologists themselves were said to embrace a new set of values that the British media theorists Richard Barbrook and Andy Cameron dubbed the “Californian Ideology.” This “anti-statist gospel of cybernetic libertarianism… promiscuously combine[d] the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies,” they wrote; it answered the challenge posed by the social liberalism of the New Left by “resurrecting economic liberalism” (1996, 42 & 47). Fred Turner attributed this synthesis to the “New Communalists,” members of the counterculture who “turn[ed] away from questions of gender, race, and class, and toward a rhetoric of individual and small group empowerment” (2006, 97). Nonetheless, he reinforced the broad outlines that Barbrook and Cameron had sketched. Turner further showed that midcentury critiques of mass media, and their alleged tendency to produce authoritarian subjects, inspired faith that digital media could offer salutary alternatives—that “democratic surrounds” would sustain democracy by facilitating the self-formation of democratic subjects (2013). 

    Silicon Valley has long supported Democratic Party candidates in national politics and many tech CEOs still subscribe to the “hybrid” values of the Californian Ideology (Brookman et al. 2019). However, in recent years, tensions and contradictions within Silicon Valley liberalism, particularly between commitments to social and economic liberalism, have become more pronounced. In the wake of the 2016 presidential election, several software engineers emerged as prominent figures on the “alt-right,” and newly visible white nationalist media entrepreneurs reported that they were drawing large audiences from within the tech industry.[2] The leaking of information from internal meetings at Google to digital outlets like Breitbart and Vox Popoli suggests that there was at least some truth to their claims (Tiku 2018). Individual engineers from Google, YouTube, and Facebook have received national media attention after publicly criticizing the liberal culture of their (former) workplaces and in some cases filing lawsuits against them.[3] And Republican politicians, including Trump (2019a, 2019b), have cited these figures as evidence of “liberal bias” at tech firms and the need for stronger government regulation (Trump 2019a; Kantrowitz 2019).

    Karp’s Palantir cofounder (and erstwhile roommate) Peter Thiel looms large in an emerging constellation of technologists, investors, and politicians challenging what they describe as hegemonic social liberalism in Silicon Valley. Thiel has been assembling a network of influential “contrarians” since he founded the Stanford Review as an undergraduate in the late 1980s (Granato 2017). In 2016, Thiel became a highly visible supporter of Donald Trump, speaking at the Republican National Convention, donating $1.25 million in the final weeks of Trump’s campaign for president (Streitfeld 2016a), and serving as his “tech liaison” during the transition period (Streitfeld 2016b). (Earlier in the campaign, Thiel had donated $1 million to the Defeat Crooked Hillary Super PAC backed by Robert Mercer, and overseen by Steve Bannon and Kellyanne Conway; see Green 2017, 200.) Since 2016, he has met with prominent figures associated with the alt-right and “neoreaction”[4] and donated at least $250,000 to support Trump’s reelection in 2020 (Federal Election Commission 2018). He has also given to Trump allies including Missouri Senator Josh Hawley, who has repeatedly attacked Google and Facebook and sponsored multiple bills to regulate tech platforms, citing the threat that they pose to conservative speech.[5]

    Thiel’s affinity with Trumpism is not merely personal or cultural; it aligns with Palantir’s business interests. According to a 2019 report by Mijente, since Trump came into office in 2017, Palantir contracts with the United States government have increased by over a billion dollars per year. These include multiyear contracts with the US military (Judson 2019; Hatmaker 2019) and with Immigrations and Customs Enforcement (ICE) (MacMillan and Dwoskin 2019); Palantir has also worked with police departments in New York, New Orleans, and Los Angeles (Alden 2017; Winston 2018; Harris 2018).

    Karp and Thiel have both described these controversial contracts using the language of “nation” and “civilization.” Confronted by critical journalistic coverage (Woodman 2017, Winston 2018, Ahmed 2018) and protests  (Burr 2017, Wiener 2017), as well as internal actions by concerned employees (MacMillan and Dwoskin, 2019), Thiel and Karp have doubled down, characterizing the company as “patriotic,” in contrast to its competitors. In an interview conducted at Davos in January 2019, Karp said that Silicon Valley companies that refuse to work with the US government are “borderline craven” (2019b). At a speech at the National Conservatism Conference in July 2019, Thiel called Google “seemingly treasonous” for doing business with China, suggested that the company had been infiltrated by Chinese agents, and called for a government investigation (Thiel 2019a). Soon after, he published an Op Ed in the New York Times that restated this case (Thiel 2019b).

    However, Karp has cultivated a very different public image from Thiel’s, supporting Hillary Clinton in 2016, saying that he would vote for any Democratic presidential candidate against Trump in 2020 (Chafkin 2019), and—most surprisingly—identifying himself as a Marxist or “neo-Marxist” (Waldman et al. 2018, Mac 2017, Greenberg 2013). He also refers to himself as a “socialist” (Chafkin 2019) and according to at least one journalist, regularly addresses his employees on Marxian thought (Greenberg 2013). On one level, Karp’s dissertation clarifies what he means by this: For a time, he engaged deeply with the work of several neo-Marxist thinkers affiliated with the Institute for Social Research in Frankfurt. On another level, however, Karp’s dissertation invites further perplexity, because right wing movements, including Trump’s, evince special antipathy for precisely that tradition.

    Starting in the early 1990s, right-wing think tanks in both Germany and the United States began promoting conspiratorial narratives about critical theory. The conspiracies allege that, ever since the failure of “economic Marxism” in World War I, “neo-“ or “cultural Marxists” have infiltrated academia, media, and government. From inside, they have carried out a longstanding plan to overthrow Western civilization by criticizing Western culture and imposing “political correctness.” To the extent that it attaches to real historical figures, the story typically begins with Antonio Gramsci and György Lukács, goes through Max Horkheimer, Theodor Adorno, and other wartime émigrés to the United States, particularly those involved in state-sponsored mass media research, and ends abruptly with Herbert Marcuse and his influence on student movements of the 1960s (Moyn 2018; Huyssen 2017; Jay 2011; Berkowitz 2003).

    The term “Cultural Marxism” directly echoes the Nazi theory of “Cultural Bolshevism”; the early proponents of the Cultural Marxism conspiracy theory were more or less overt antisemites and white nationalists (Berkowitz 2003). However, in the 2000s and 2010s, right wing politicians and media personalities helped popularize it well beyond that sphere.[6] During the same time, it has gained traction in Silicon Valley, too.  In recent years, several employees at prominent tech firms have publicly decried the influence of Cultural Marxists, while making complaints about “political correctness” or lack of “viewpoint diversity.”[7]

    Thiel has long expressed similar frustrations.[8] So how is it that this prominent opponent of “cultural Marxism” works with a self-described neo-Marxist CEO? Aggression in the Life-World casts light on the core beliefs that animate their partnership. The idiosyncratic adaptation of Western Marxism that it advances does not in fact place Karp at odds with the nationalist projects that Thiel has advocated and that Palantir helps enact. On the contrary, by attempting to render critical theoretical concepts “systematic,” Karp reinterprets them in a way that legitimates the work he would go on to do. Shortly before Palantir began developing its infrastructure for identification and authentication, Aggression in the Life-World articulated an ideology of these processes.

    Freud Returns to Frankfurt

    Tech industry legend has it that Karp wrote his dissertation under Jürgen Habermas (Silicon Review 2018; Metcalf 2016; Greenberg 2013). In fact, he earned his doctorate from a different part of Goethe University than the one in which Habermas taught: not at the Institute for Social Research but in the Division of Social Sciences. Karp’s primary reader was the social psychologist Karola Brede, who then held a joint appointment at Goethe University’s Sociology Department and at the Sigmund Freud Institute; she and her younger colleague Hans-Joachim Busch are listed as supervisors on the front page. The confusion is significant, and not only because it suggests an exaggeration. It also obscures important differences of emphasis and orientation between Karp’s advisors and Habermas. These differences directly shaped Karp’s graduate work.

    Habermas did engage with psychoanalysis early in his career. In the spring and summer of 1959, he attended every one of a series of lectures organized by the Institute for Social Research to mark the centenary of Freud’s birth (Müller-Doohm 2016, 79; Brede and Mitscherlich-Nielsen 1996, 391). He went on to become close friends with one of the organizers and speakers of that series, Alexander Mitscherlich, and even occasionally co-taught with him (Brede and Mitscherlich-Nielsen 1996, 395). Mitscherlich had long campaigned with Frankfurt School founder Max Horkheimer for the funds to establish the Sigmund Freud Institute, and he became its first director when it opened the following year. In 1968, shortly after Mitscherlich and his wife, Margarete, published their influential book, The Inability to Mourn, Habermas developed his first systematic critical social theory in Knowledge and Human Interests (1972). Nearly one third of that book is devoted to psychoanalysis, which Habermas treats as exemplary of knowledge constituted by the “critical” or “emancipatory interest”—that is, the species interest in engaging in critical reflection in order to overcome domination. However, in the 1970s, Habermas turned away from that book’s focus on philosophical anthropology toward the ideas about linguistic competence that culminated in his Theory of Communicative Action; in 1994, Margarete Mitscherlich recounted that Habermas had “gotten over” psychoanalysis in the process of writing that book (1996, 399). Karp’s interest in the theory of the drives, and in aggression in particular, was drawn not from Habermas but from scholars at the Freud Institute, where it was a major focus of research and public debate for decades.

    Freud himself never definitively decided whether he believed that a death drive existed. The historian Dagmar Herzog has shown that the question of aggression—and particularly the question of whether human beings are innately driven to commit destructive acts—dominated discussions of psychoanalysis in West Germany in the 1960s and 1970s. “In no other national context would the attempt to make sense of aggression become such a core preoccupation,” Herzog writes (2016, 124). After fascism, this subject was highly politicized. For some, the claim that aggression was a primary drive helped to explain the Nazi past: if all humans had an innate drive to commit violence, Nazi crimes could be understood as an extreme example of a general rule. For others, this interpretation risked naturalizing and normalizing Nazi atrocities. “Sex-radicals” inspired by the work of Wilhelm Reich pointed out that Freud had cited the libido as the explanation for most phenomena in life. According to this camp, Nazi aggression had been the result not of human nature but of repressive authoritarian socialization. In his own work, Mitscherlich attempted to elaborate a series of compromises between the conservative position (that hierarchy and aggression were natural) and the radical one (that new norms of anti-authoritarian socialization could eliminate hierarchy entirely; Herzog 2016, 128-131). Klaus Horn, the long-time director of the division of social psychology at the Freud Institute, whose collected writings Karp’s supervisor Hans-Joachim Busch edited, contested the terms of the disagreement. The entire point of sophisticated psychoanalysis, Horn argued, was that culture and biology were mutually constitutive and interacted continuously; to name one or the other as the source of human behavior was nonsensical (Herzog 2016, 135).

    Karp’s primary advisor, Karola Brede, who joined the Sigmund Freud Institute in 1967, began her career in the midst of these debates (Bareuther et al. 1989, 713). In her first book, published in 1972, Brede argued that “psychosomatic” disturbances had to be understood in the context of socialization processes. Not only did neurotic conflicts play a role in somatic illness; such illness constituted “socio-pathological” expressions of an increase in the forms of repression required to integrate individuals into society (Brede 1972). In 1976, Brede published in the journal Psyche a critique of Konrad Lorenz (“Der Trieb als humanspezifische Kategorie”; see Herzog 2016, 125-7), whose bestselling work, On Aggression, had triggered much of the initial debate with Alexander Mitscherlich and others at the Institute. Since the 1980s, her monographs have focused on work and workplace sociology, and on the role that psychoanalysis should play in critical social theory. Individual and Work (1986) explored the “psychoanalytic costs involved in developing one’s own labor power.” The Adventures of Adjusting to Everyday Work (1995) drew on empirical studies of German workplaces to demonstrate that psychodynamic processes played a key role in professional life, shaping processes of identity formation, authoritarian behavior, and gendered self-identity in the workplace. In that book, Brede criticizes Habermas for undervaluing psychoanalytic concepts—and unconscious aggression in particular—as social forces. Brede argues that the importance that Habermas assigned to “intention” in Theory of Communicative Action prevented him from recognizing the central role that the unconscious played in constituting identity, action, and subjectivity (1995, 223, 225).
At the same time, she edited multiple volumes on psychoanalytic theory, including feminist perspectives in psychoanalysis, and, in a series of journal articles in the 1990s, developed a focus on antisemitism and Germany’s relationship to its troubled history (Brede 1995, 1997, 2000).

    During his time as a PhD student, Karp seems to have worked very closely with Brede. The sole academic journal article that he published was co-authored with her in 1997. (An analysis of Daniel Goldhagen’s bestselling 1996 study, Hitler’s Willing Executioners, the article attempted to build on Goldhagen’s thesis by characterizing a specific, “eliminationist” form of antisemitism that Karp and Brede argued could only be understood from the perspective of Freudian psychoanalytic theory; see Brede and Karp 1997, 621-6.) Karp also wrote the introduction for a volume of the Proceedings of the Freud Institute, which Brede edited (Brede et al. 1999, 5-7). The chapter that Karp contributed to that volume would appear, in almost identical form, in his dissertation three years later. Karp’s dissertation itself also closely followed the themes of Brede’s research.

    Aggression in the Life-World

    The full title of Karp’s dissertation captures its patchwork quality: Aggression in the Life-World: Expanding Parsons’ Concept of Aggression Through a Description of the Connection Between Jargon, Aggression, and Culture. “This work began,” the opening sentences recall, “with the observation that many statements have the effect of relieving unconscious drives, not in spite, but because, of the fact that they are blatantly irrational” (Karp 2002, 2). Karp proposes that such statements provide relief by allowing a speaker to have things both ways: to acknowledge the existence of a social order, and indeed to demonstrate specific knowledge of that order, while at the same time expressing taboo wishes that contravene social norms. As a result, rather than destroying social order, such irrational statements integrate the speaker into society while also providing compensation for the pains of being integrated. To describe these kinds of statements, Karp indicates that he will borrow a concept from the late work of Adorno: “jargon.” However, Karp announces that he will critique Adorno for depending too much on the very phenomenological tradition that his Jargon of Authenticity is meant to criticize. Adorno’s concept is not a concept at all, Karp alleges, but a “reservoir for collecting Adorno-poetry” (Sammelbecken Adornoscher Dichtung) (2002, 58). Karp’s own goal is to clarify jargon into an analytical concept that could then be incorporated into a classical sociological framework. As a synecdoche for classical sociology, Karp takes the work of Talcott Parsons.

    The second chapter of Karp’s dissertation, a reading and critique of Parsons, had appeared in the Freud Institute publication, Cases for the Theory of the Drives. In his editor’s introduction to that volume, Karp had stated that the goal of their group had been to integrate psychoanalytic concepts in general and Freud’s theory of the drives in particular into frameworks provided by classical sociology. The volume begins with an essay by Brede on the failure of sociology as a discipline to account for the role that aggression plays in social integration. (Brede 1999, 11-45, credits Georg Simmel with having developed an account of the active role that aggression played in creating social cohesion; more on that below.) Karp reiterates Brede’s complaint, directing it against Parsons, whose account of aggression he calls “incomplete” or “watered down” (2002, 11). In the version that appears in his dissertation, several sections of literature review establish background assumptions and describe what Karp takes to be Parsons’ achievement: integrating the insights of Émile Durkheim and Sigmund Freud. Taking, from Durkheim, a theory of how societies develop systems of norms, and from Freud, how individuals internalize them, Parsons developed an account of culture as the site where the integration of personality and society takes place.

    For Parsons, on Karp’s reading, culture itself is best understood as a system constituted through “interactions.” Karp credits Parsons with shifting the paradigm from a subject of consciousness to a subject in communication—translating the Freudian superego into sociological form, so that it appears not as a moral enforcer but as a psychic structure communicating cultural norms to the conscious subject. Yet Karp protests that there are, in fact, parts of personality not determined by culture, and not visible to fellow members of a culture so long as an individual does not deviate from established norms of interaction. Parsons’ theory of aggression remains incomplete on at least two counts, then. First, Karp argues, Parsons fails to recognize aggression as a primary drive, treating it only as a secondary result that follows when the pleasure principle finds itself thwarted. Karp, by contrast, adopts the position that a drive toward death or destruction is at least as fundamental as the pleasure principle. Second, because Parsons defines aggression in terms of harms to social norms, he cannot explain how aggression itself can become a social norm, as it did in Nazi Germany. For an explanation of how aggressive impulses come to be integrated into society, Karp turns instead to Adorno.

    In Adorno’s Jargon of Authenticity, Karp found an account of how aggression constitutes itself in language and, through language, mediates social integration (2002, 57). Adorno’s lengthy essay, which he had originally intended to constitute one part of Negative Dialectics, resists easy summary. The essay begins by identifying theological overtones that, Adorno says, emanate from the language used by German existentialists—and by Martin Heidegger in particular. Adorno cites not only “authenticity” but also terms like “existential,” “in the decision,” “commission,” “appeal,” and “encounter” as exemplary (3). While the existentialists claim that such language constitutes a form of resistance to conformity, Adorno argues that it has in fact become highly standardized: “Their unmediated language they receive from a distributor” (14). Making fetishes of these particular terms, the existentialists decontextualize language in several respects. They do so at the level of the sentence—snatching certain favored words out of the dialectical progression of thought, as if meaning could exist without it. At the same time, the existentialist presents “words like ‘being’ as if they were the most concrete terms” and could obviate abstraction, the dialectical movement within language. The function of this rhetorical practice is to make reality seem simply present, and to give the subject an illusion of self-presence—replacing consciousness of historical conditions with an illusion of immediate self-experience. The “authenticity” generated by jargon therefore depends on forgetting or repressing the historically objective realities of social domination.

    Beyond simply obscuring the realities of domination, Adorno continues, the jargon of authenticity spiritualizes them. For instance, Martin Heidegger turns the real precarity of people who might at any time lose their jobs and homes into a defining condition of Dasein: “The true need for residence consists in the fact that mortals must first learn to reside” (26). The power of such jargon—which transforms the risk of homelessness into an essential trait of Dasein—comes from the fact that it expresses human need, even as it disavows it. To this extent, jargon has an a- or even anti-political character: it disguises current and contingent effects of social domination as eternal and unchangeable characteristics of human existence. “The categories of jargon are gladly brought forward, as though they were not abstracted from generated and transitory situations but rather belonged to the essence of man,” Adorno writes. “Man is the ideology of dehumanization” (48). Jargon turns fascist insofar as it leads the person who uses it to perceive historical conditions of domination—including their own domination—as the very source of their identity. “Identification with that which is inevitable remains the only consolation of this philosophy of consolation,” Adorno writes. “Its dignified mannerism is a reactionary response to the secularization of death” (143, 144).

    Karp says at the outset that his goal is to make Adorno’s collection of observations about jargon “systematic.” In order to do so, he approaches the subject from a different perspective than Adorno did: focused on the question of what psychological needs jargon fulfills. For Karp, the achievement of jargon lies in its “double function” (Doppelfunktion). Jargon both acknowledges the objective forces that oppress people and allows people to adapt or accommodate themselves to those same forces by eternalizing them—removing them from the context of the social relations where they originate, and treating them as features of human existence in general. Jargon addresses needs that cannot be satisfied, because they reflect the realities of living in a society characterized by domination, but also cannot be acted upon, because they are taboo. For Karp, insofar as jargon is a kind of speech that designates speakers as belonging to an in-group, it also expresses an unconscious drive toward aggression. In jargon we see the aggression that drives individuals to exclude others from the social world doing its binding work. It is on these grounds that Karp argues that aggression is a constitutive part of jargon—its ever-present, if unacknowledged, obverse.

    Karp grants that Adorno is concerned with social life. The Jargon of Authenticity investigates precisely the social function of ontology, or how it turns “authenticity” into a cultural form, circulated within mass culture. Adorno also alludes to the specifically German inheritance of jargon—the resemblance between Heidegger’s celebration of völkisch rural life and Nazi celebration of the same (1973, 3). Yet, Karp argues, Adorno does not provide an account of how a deception or illusion of authenticity came to be a structure in the life-world. Even as he criticizes phenomenological ontology, Adorno relies on a concept of language that is itself phenomenological. Echoing critiques by Axel Honneth (1991) of Horkheimer and Adorno’s failures to account for the unique domain of “the social,” Karp turns to the same thinkers Karola Brede used in her article on “Social Integration and Aggression”: Sigmund Freud and Georg Simmel.

    In that article, Brede develops a reading that joins Freud’s and Simmel’s accounts of the role of the figure of “the stranger” in modern societies. In Civilization and its Discontents, Brede argues, Freud described “strangers” in terms that initially appear incompatible with the account Simmel had put forth in his famous 1908 “Excursus on the Stranger.” Simmel described the mechanisms whereby social groups exclude strangers in order to eliminate danger—thereby controlling the “monstrous reservoir of aggressivity” (the quote is from Parsons) that would otherwise threaten social structure. Freud wrote that, despite the Biblical commandment to love our neighbors and the ban on killing, we experience a hatred of strangers, because they make us experience what is strange in us, and because we fear what in them cannot be fit into our cultural models. Brede concludes that it is only by combining Freudian psychodynamics with Simmel’s account of the role of exclusion in social formation that critical social theory could account for the forms of violence that dominated the history of the twentieth century (Brede 1999, 43).

    Karp contrasts Adorno with both Freud and Simmel, and finds Adorno to be more pessimistic than either of these predecessors. Compared to Freud, who argued that culture successfully repressed both libidinal and destructive drives in the name of moral principles, Karp writes that Adorno regarded culture as fundamentally amoral. Rather than successfully repressing antisocial drives, Karp writes, late capitalist culture sates its members with “false satisfactions.” People look for opportunities to express their needs for self-preservation. However, since they know that their needs cannot be fully satisfied, they simultaneously fall over themselves to destroy the memory of the false fulfillment they have had. Repressed awareness of the false nature of their own satisfaction produces the ambient aggression that people take out on strangers.

    For Simmel, the stranger is part of all modern societies, Karp writes. For Adorno, the stranger extends an invitation to violence. Jargon gains its power from the fact that those who speak and hear it really are searching for a lost community. The very presence of the stranger demonstrates that such community cannot be simply given; jargon is powerful precisely in proportion to how much the shared context of life has been destroyed. It therefore offers a “dishonest answer to an honest longing” for intersubjectivity, gaining strength in proportion to the intensity of the need that has been thwarted (Karp 2002, 85). Wishes that contradict social norms are brought into the web of social relations (Geflecht der Lebenswelt) in such a way that they do not need to be sanctioned or punished for violating those norms (91). On the contrary, they serve to bind members of social groups to one another.

    Testing Jargon

    As a case study to demonstrate the usefulness of his modified concept of jargon, Karp takes up a notorious episode in post-Wall German intellectual history: a speech that the celebrated novelist Martin Walser gave in October 1998, at St. Paul’s Church in Frankfurt. The occasion was Walser’s acceptance of the 1998 Peace Prize of the German Book Trade. The novelist had traveled a complex political itinerary by the late 1990s. Documents released in 2007 would reveal that, as a teenager during the final years of the Second World War, Walser had joined the Nazi Party and fought as a member of the Wehrmacht. But he first became publicly known as a left-wing writer. In the 1950s, Walser attended meetings of the informal but influential German writers’ association Gruppe 47 and received its annual literary prize for his short story “Templones Ende”; in 1964 he attended the Frankfurt Auschwitz trials, where low-ranking officials were charged and convicted for crimes that they had perpetrated during the Holocaust. In his 1965 essay about that experience, “Our Auschwitz,” Walser insisted on the collective responsibility of Germans for the horrors of the Nazi period; indeed, he criticized the emphasis on spectacular cruelty at the trial, and in the media, to the extent that this emphasis allowed the public to maintain an imaginary distance between themselves and the Nazi past (Walser 2015, 217-56). Walser supported Social Democratic Party member Willy Brandt for Chancellor and even joined the German Communist Party during that decade. By the 1980s, however, Walser was widely perceived to have migrated back to the right. And when he gave his speech “Experiences Composing a Sermon” on the sixtieth anniversary of Kristallnacht, he used the occasion to attack the public culture of Holocaust remembrance. Walser described this culture as a “moral cudgel” or “bludgeon” (Moralkeule).

    “Experiences Composing a Sermon” adopts a stream-of-consciousness, rather than argumentative, style in order to explain why Walser refused to do what he said was expected of him: to speak about the ugliness of German history. Instead, he argued that no further collective memorialization of the Holocaust was necessary. There was no such thing, he said, as collective or shared conscience at all: conscience should be a private matter. Critics and intellectuals, whom he disparaged as “preachers,” were “instrumentalizing” and “vulgarizing” memory when they exhorted the public constantly to reflect on the crimes of the Nazi period. “There is probably such a thing as the banality of good,” Walser quipped, echoing Hannah Arendt (2015, 513). He did not spell out to what ends he thought these “preachers” aimed to instrumentalize German guilt. He concluded by abruptly calling on President Roman Herzog, who was in attendance, to free the former East German spy Rainer Rupp from prison. Walser’s speech received a standing ovation—though not, notably, from Ignatz Bubis, then the president of the Central Council of Jews in Germany, who was also in attendance. The next day, in the Frankfurter Allgemeine Zeitung, Bubis called the speech an act of “intellectual arson” (geistige Brandstiftung). The controversy that followed generated a huge amount of debate among German intellectuals and in the German and international media (Cohen 1998). Two months later, the offices of the Frankfurter Allgemeine Zeitung hosted a formal debate between the two men. It lasted for four hours. FAZ published a transcript of their conversation in a special supplement (Walser and Bubis 1999).

    In February and March 1999, Karola Brede delivered two lectures about the controversy at Harvard University, which she subsequently published in Psyche (2000, 203-33). Brede examined both the text of Walser’s original speech and the transcript of his debate with Bubis in order to determine, first, why Walser’s speech had been received so enthusiastically, and second, whether Walser, despite eschewing explicitly antisemitic language, had in fact “taken the side of anti-Semites.” In order to explain why Walser’s speech had attracted so much attention, Brede carried out a close textual analysis. She found that, although Walser had not presented a very cogent argument, he had successfully staged a “relieving rhetoric” (Entlastungsrhetorik) that freed his audience from the sense of awkwardness or self-consciousness that they felt talking about Auschwitz in public and replaced these negative feelings with a positive sense of heightened self-regard. Brede argued that Walser used jargon, in the sense of Adorno’s “jargon of authenticity,” in order to flatter listeners into thinking that they were taking part in a daring intellectual exercise, while in fact activating anti-intellectual feelings. (In a footnote, she recommended an “unpublished paper” by Karp, presumably from his dissertation, for further reading; Brede 2000, 215.) She concluded that Walser had indeed taken the side of antisemites because, in both his speech and his subsequent debate with Bubis, he constructed a point of identification for listeners (“we Germans”) that systematically excluded German Jews (203). By organizing his speech entirely around “perpetrators” and the “critics” who shamed them, Walser elided the perspective of the Nazis’ victims. Invoking Simmel’s essay on “The Stranger” again, Brede argued that Walser’s behavior during his debate with Bubis offered a model of how unconscious aggression could drive social integration through exclusion.
Regardless of what Walser said he felt, to the extent that his rhetoric excluded Bubis, as a Jew, from his definition of “we Germans,” his conduct had been antisemitic.

    In the final chapter of his dissertation, Karp also offers a reading of Walser’s prize acceptance speech, arguing that Walser made use of jargon in Adorno’s sense. Like Brede, Karp bases his argument on close textual analysis. He catalogs several specific literary strategies that, he says, enabled Walser to appeal to the unconscious or repressed emotions of his listeners without having to convince them. First, Karp tracks how Walser played with pronouns in the opening movement of the speech in order to eliminate distance and create identification between himself and his audience. Walser shifted from describing himself in the third person singular (the “one who had been chosen” for the prize) to the first-person plural (“we Germans”). At the same time, by making vague references to intellectuals who had made public remembrance and guilt compulsory, Walser created the sensation that he and the listeners he had invited to identify with his position (“we”) were only responding to attacks from outside—that “we” were the real victims. (In her article, Brede had quipped that this narrative of victimhood “could have come from a B-movie Western”; Brede 2000, 214.) Through this technique, Karp writes, Walser created the impression that if “we” were to strike back against the “Holocaust preachers,” this would only be an act of self-defense.

    Karp stresses that the content of “Experiences Composing a Sermon” was less important than the effect that these rhetorical gestures had of making listeners feel that they belonged to Walser’s side. In the controversy that followed Walser’s acceptance speech, critics often asked which “intellectuals” he had meant to criticize; these critics, Karp says, missed the point. It was not the content of the speech, but its form, that mattered. It was through form that Walser had identified and addressed the psychological needs of his audience. That form did not aim to convince listeners; it did not need to. It simply appealed to (repressed) emotions that they were already experiencing.

    For Adorno, the anti-political or fascist character of jargon was directly tied to the non-dialectical concept of language that jargon advanced. By eliminating abstraction from philosophical language, and detaching selected words from the flow of thought, jargon made absent things seem present. By using such language, existentialism attempted to construct an illusion that the subject could form itself outside of history. By raising historically contingent experiences of domination to defining features of the human, jargon presented them as unchangeable. And by identifying humanity itself with those experiences, it identified the subject with domination.

    Karp does not demonstrate that Walser’s “jargon” performed any of these functions, precisely. Rather, he focuses on the psychodynamics motivating Walser’s speech. Karp proposes that the pain (Leiden) that the speech expressed resembled the “domination” (Zwang) that Adorno recognized in jargon. While Adorno’s jargon made the absent or abstract seem present, through an act of linguistic fetishization, Walser’s jargon embodied the obverse impulse: to wish away the discomfort created by the presence of history’s victims.

    Karp is less concerned with the history of domination, that is, than with Freudian drives. For Adorno, the purpose of carrying out a determinate negation of jargon was to create the conditions of possibility for critical theory to address the real needs to which jargon constituted a false response. For Karp, the interest of the project is more technical: his goal is to uncover forms and patterns of speech that admit aggression into social life and give it a central role in consolidating identity. By combining culturally legitimated expressions with taboo ones, Karp argues, Walser created an environment in which his controversial opinion could be accepted as “obvious” or “self-evident” (selbstverständlich) by his audience. That is, Walser created a linguistic form through which aggression could be integrated into the life-world.

    Unlike Adorno (or Brede), Karp refrains from making any normative assessment of this achievement. His “systematization” of the concept of jargon empties that concept of the critical force that Adorno meant for it to carry. If anything, the tone of the final pages of Aggression in the Life-World is forgiving. Karp concludes by arguing that Walser was not necessarily aware of the meaning of his speech—indeed, that he probably was not. By allowing his audience to express their taboo wishes to be done with Holocaust remembrance, Karp writes, Walser convinced them that “these taboos should never have existed.” Then he cuts to his bibliography.

    Grand Hotel California Abyss

    The abruptness of the ending of Aggression in the Life-World is difficult to interpret. At one level, Karp’s apparent lack of interest in the ethical and political implications of his case study reflects his stated goals and methods. From the beginning, he set out to reveal that the social is constituted through acts of unconscious aggression, and that this aggression becomes legible in specific linguistic interactions, rather than to evaluate the effects of aggression itself. Reading Walser, Karp explicitly privileges form over content, treating the former as symptomatic of unstated meanings and effects. Granting the critic authority over the text he is analyzing, such an approach presumes the author under analysis to be ignorant, if not innocent, of what he really has at stake; it treats conscious attitudes and overt arguments as holding, at most, a secondary interest. At another level, the banal explanations for Karp’s tone and brevity may be the most plausible. He was writing in a non-native language; like many graduate students, he may have finished in haste.[9] In any case, his decision to eschew the kinds of judgments made by both his subject, Adorno, and his mentor, Brede, is striking—all the more so because Karp is descended from German Jews and “grew up in a Jewish family” (Karp 2019a). This choice reflects a different mode of engagement with critical theory than scholars of either digital media or digitally mediated right-wing movements have observed.

    Historians have shown that the Frankfurt School critiques of mass media helped shape the idea that digital media could constitute a more democratic alternative. Fred Turner has argued that the research Adorno conducted on the role of radio and cinema in shaping the authoritarian personality, as well as the proximity of Frankfurt School scholars to the Bauhaus and other practicing artists, generated a set of beliefs about the democratic character of interactivity (Turner 2013). Orit Halpern is more critical of the essentially liberal assumptions of media and technology critique in which she, too, places Adorno (2015, 18-19). However, like Turner, Halpern identifies the emergence of interactivity as a key epistemic shift away from the Frankfurt School paradigm that opposed “attention” and “distraction.”  Cybernetics redefined the problem of “spectatorship” by transforming the spectator from an individual into a site of perceptions and cognitions—an “interface or infrastructure for information processing.” Where radio, cinema, and television had promoted conformity and passivity, cybernetic media promised to facilitate individual choice and free expression (2015, 224-6).

More recently, critics and scholars attempting to account for the phobic fascination that new right-wing movements show for “cultural marxism” have analyzed it in a variety of ways. The least sophisticated take at face value the claims of “alt-right” figures that they are only reacting to the ludicrous and pernicious excesses of their opponents.[10] More substantial interpretations have described the far-right fixation on the Frankfurt School as a “dialectic of counter-Enlightenment” or a form of “inverted appropriation.” Martin Jay (2011) and Andreas Huyssen (2017, 2019) both argue that the attraction of critical theory for the right lies in the dynamics of projection and disavowed recognition that it sets in motion. As Huyssen puts it, “wider circles of American white supremacists and their publications… have been drawn to critique and deconstruction because, on those traditions, they project their own destructive and nihilistic tendencies” (2017).

Aggression in the Life-World does none of these things. Karp’s dissertation does not take up the critiques of mass media or the authoritarian personality that were canonized in the Anglo-American world at all, much less use them to develop democratic alternatives. Nor does it project its own penchant for destruction onto its subjects. In contrast with the “lunatic fringe” (Jay, 30), Karp does not carry out an “inverted appropriation” of critical theory so much as a partial one. He adapts Frankfurt School concepts for technical purposes, making them more instrumentally useful to the disciplines of sociology or social psychology by abstracting them from their contexts. In the process, he also abandons the Frankfurt School commitment to emancipation. It is at this level of abstraction that his neo-Marxism—from which Marx and materialism have all but disappeared—can coexist with the nationalism that he and Thiel invoke to defend Palantir.

I asked at the beginning of this paper what beliefs Karp shares with Peter Thiel and what their common commitments might reveal about the self-consciously “contrarian” or “heterodox” network of actors that they inhabit. One answer that Aggression in the Life-World makes evident is that both men regard the desire to commit violence as a constant, founding fact of human life. Both also believe that this drive expresses itself in social forms like language or group structure, even if speakers or group members remain unaware of their own motivations. These are ideas that Thiel attributes to the work of the eclectic French theorist René Girard, with whom he studied at Stanford, and whose theories of mimetic desire, scapegoating, and herd mentality he has often cited. In 2006 Thiel’s nonprofit foundation established an institute to promote the study of Girard and support the further development of mimetic theory; this organization, Imitatio, remains one of the foundation’s three major projects (Daub 2020, 97-112).

The text that Karp chose to analyze as his case study also shares a set of concerns with Thiel’s writings and statements against campus multiculturalism and political correctness; Walser’s speech became a touchstone of debates about historical memory in Germany, in which the newly imported Americanism politische Korrektheit circulated widely. In his dissertation, Karp does not celebrate Walser’s taboo speech in the same way that Thiel and his associates have sometimes celebrated violations of speech norms.[11] However, he does assert that jargon, and the unconscious aggression that it expresses, plays a role in the formation of all social groups, and he refrains from evaluating whether Walser’s jargon was particularly problematic. Of course, the term “jargon” itself became a commonplace during the U.S. culture wars of the 1980s and 1990s, used to accuse academics and university administrators who purported to be speaking for vulnerable populations of in fact deploying obscure terms to aggrandize themselves. Thiel and his co-author David O. Sacks devote a chapter of The Diversity Myth to an account of how the vagueness of the word “multiculturalism” enabled activists and administrators at Stanford to use it in this manner (1995, 23-49). The idea that such terms express ressentiment and a will to power is consistent with the theoretical framework that Karp went on to develop.

Ironically, by attempting to purge jargon of its subjective or impressionistic content, Karp renders it less materially objective. Rather than locating jargon in specific experiences of modernity, he transforms it into an expression of drives that, because they are timeless, are merely psychological. Karp makes a version of the eternalizing move that Adorno criticizes in Heidegger, in other words. Where Heidegger elevates precarity into the essence of the human, Karp makes aggressive violence the substance of the social. In the process, he empties the concept of jargon of its critical power. When he arrives at the end of Walser’s speech, a speech that Karp characterizes as consolidating community based on unspeakable aggression, he can conclude only that it was effective.

A still greater irony, in retrospect, may be how, in Karp’s telling, Adorno’s concept of jargon anticipates the software tools Palantir would develop. By tracing the rhetorical patterns that constitute jargon in literary language, Karp argues that he can reveal otherwise hidden identities and affinities—and the drive to commit violence that lies latent in them. By looking back to Adorno, he points toward a possible critique of big data analytics as a kind of authenticity jargon: that is, a way of generating and eternalizing false forms of selfhood. In data analysis, the role of the analyst is not to demystify and dispel reification. On the contrary, it is precisely to fix identity from its digital traces and to make predictions on the basis of the same. For Adorno, jargon is a form of language that seems to authenticate identity—but only seems to. The identities it makes available to the subject are based on an illusion that jargon sustains by suppressing the self-difference that historicity introduces into language. The illusion it offers is of timeless “human” experience. It covers for domination insofar as it makes the human condition—or rather, human conditions as they are at the time of speaking—appear unchangeable.

Big data analytics could be said to constitute an authenticity jargon in this sense: although they treat the data set under analysis as having something like an unconscious, they eliminate the temporal gaps and spaces of ambiguity that drive psychoanalytic interpretation. In place of interpretation, data analytics substitutes correlations that it treats simply as given. To a machine learning algorithm that has been trained on data sets that include zip codes and rates of defaulting on mortgage payments, for instance, it does not matter why mortgagees in a given zip code may have been more likely to default in the past. Nor will the algorithm that recommends rejecting a loan application necessarily explain that the zip code was the deciding factor. Like the existentialist’s illusion of immediate experience, these procedures generate an aura of incontestable self-evidence.

Here, as in Adorno, the loss of particular contexts can serve to conceal, and thus perpetuate, domination. Algorithms take the histories of oppression embedded in training data and project them into the future, via predictions that powerful institutions then act on. If the identities constituted in this way are false, the reifications they generate do real work, and can cause real harm. And yet, to read these figures historically is to recognize that they need not come true. This is not an interpretive path that Karp pursues. But for those of us concerned about the relationship between digital technologies and justice, this repressed insight of his dissertation is the most critical to follow.

    _____

    Moira Weigel is a Junior Fellow at the Harvard Society of Fellows and an editor and cofounder of Logic Magazine. She received her PhD from the combined program in Comparative Literature and Film and Media Studies at Yale University in 2017.

    Back to the essay

    _____

    Notes

    [1] Translations from German are mine unless otherwise noted.

[2] In 2017, when activists doxxed the founder of the neofascist blog the Right Stuff and the antisemitic podcasts Fash the Nation and The Daily Shoah, who went by the alias Mike Enoch, they revealed that he was in fact a programmer named Michael Peinovich (Marantz 2019, 275-9). Curtis Yarvin, who wrote a widely read blog advocating the end of democracy under the name Mencius Moldbug, also worked as a software engineer (Gray 2017). Several journalists have documented the interest that figures in or adjacent to the tech industry evince in Yarvin’s Neoreaction (NRx) or Dark Enlightenment (Gray 2017; Goldhill 2017). Prominent white nationalist media entrepreneurs also claim to have substantial followings in the tech industry. In 2017, Andrew Anglin told a Mother Jones reporter that Santa Clara County was the highest source of inbound traffic to his website, The Daily Stormer; Chuck Johnson said the same about his (now defunct) website Got News (Harkinson 2017). In response to an interview question about his “average” supporter, the white nationalist Richard Spencer claimed that “many in the Alt-Right are tech savvy or actually tech professionals” (Hawley 2017, 78).

[3] James Damore, the engineer who wrote the July 2017 memo, “Google’s Ideological Echo Chamber,” and was subsequently fired, toured the right-wing speaking circuit (Tiku 2019, 85-7). Brian Amerige, the Facebook engineer who identified himself to the New York Times in July 2018 as the creator of a conservative group on Facebook’s internal forum, Workplace, and then left the company, did the same (Conger and Frankel 2018). Shortly after, it was reported that Oculus cofounder Palmer Luckey’s departure from the company in 2017 had also been driven by conflicts with management over his support of Donald Trump (Grind and Hagey 2018); Luckey has since publicly claimed to speak on behalf of a silent majority of “tech conservatives” (Luckey 2018). Arne Wilberg, a longtime recruiter of technical employees for Google and YouTube, filed a reverse discrimination suit in 2018, alleging that he had been fired for “opposing illegal hiring practices… systematically discriminating in favor of job applicants who are Hispanic, African American, or female, against Caucasian and Asian men” (Wilberg v. Google 2018). Most recently, in August 2019, The Wall Street Journal reported that the former Google engineer Kevin Cernekee had been fired in 2017 in retaliation for expressing “conservative” viewpoints on internal listservs (Copeland 2019). Former colleagues subsequently published screenshots showing that, among other things, Cernekee had proposed raising money for a bounty for finding the masked protestor who punched Richard Spencer at the Presidential inauguration in 2017 using WeSearchr, the now-defunct fundraising platform run by Holocaust “revisionist” Chuck C. Johnson. They also shared screenshots showing that Cernekee had defended two neo-Nazi organizations, The Traditionalist Workers Party and Golden State Skinheads, suggesting that they should “rename themselves to something normie-compatible like ‘The Helpful Neighborhood Bald Guys’ or the ‘Open Society Institute’” (Wacker 2019; Tiku 2019, 84). Like Damore, Amerige, and Wilberg, Cernekee received national media coverage.

    [4] For instance, emails that BuzzFeed reporter Joe Bernstein obtained from Breitbart.com stated that Thiel invited Curtis Yarvin to watch the 2016 election results at his home in Hollywood Hills, where he had previously hosted Breitbart tech editor Milo Yiannopoulos; New Yorker writer Andrew Marantz reported running into Thiel at the “DeploraBall” that took place on the eve of Trump’s inauguration (2019, 47-9).

[5] Thiel supported Hawley’s campaign for Attorney General of Missouri in 2016 (Center for Responsive Politics); in that office, Hawley initiated an antitrust investigation of Google (Dave 2017) and a probe into Facebook’s exploitation of user data (Allen 2018). Thiel later donated to Hawley’s 2018 Senate campaign (Center for Responsive Politics); in the Senate, Hawley has sponsored multiple bills to regulate tech platforms (US Senate 2019a, 2019b, 2019c, 2019d, 2019e, 2019f, 2019g). These activities earned him praise from Trump at a White House Social Media Summit on the theme of liberal bias at tech companies, where Hawley also spoke (Trump 2019a).

[6] Pat Buchanan devoted a chapter to the subject, entitled “The Frankfurt School Comes to America,” in his 2001 Death of the West. Breitbart editor Michael Walsh published an entire book about critical theory, in which he described it as “the very essence of Satanism” (Walsh 2016, 50). Andrew Breitbart himself devoted a chapter to it in his memoir (Breitbart 2011, 113). Jordan Peterson more often rails against “postmodernism” or “political correctness.” However, he too regularly refers to “Cultural Marxism”; at the time of writing, an explainer video that he produced for the pro-Trump Epoch Times has tallied nearly 750,000 views on YouTube (Peterson 2017).

    [7] The memo that engineer James Damore circulated to his colleagues at Google presented a version of the Cultural Marxism conspiracy in its endnotes, as fact. “As it became clear that the working class of the liberal democracies wasn’t going to overthrow their ‘capitalist oppressors,’” Damore wrote, “the Marxist intellectuals transitioned from class warfare to gender and race politics” (Conger 2017). The group that Brian Amerige started on Facebook Workplace was called “Resisting Cultural Marxism” (Conger and Frankel 2018).

    [8] The Stanford Review, which Thiel founded late in his sophomore year and edited throughout his junior and senior years at the university, devoted extensive attention to questions of speech on Stanford’s campus, which became a focal point of the US culture wars and drew international media attention when the academic senate voted to (slightly) revise its core curriculum in 1988 (see Hartman 2019, 227-30). In 1995, with fellow Stanford alumnus (and later PayPal Chief Operating Officer) David O. Sacks, Thiel published The Diversity Myth, a critique of the “debilitating” effects of “political correctness” on college campuses that, among other things, compared multicultural campus activists to “the bar scene from Star Wars” (xix). In 2018 he moved to Los Angeles, saying that political correctness in San Francisco had become unbearable (Peltz and Pierson 2018; Solon 2018) and in 2019 Founders Fund, the venture capital firm where he is a partner, announced that they would be sponsoring a conference to promote “thoughtcrime” (Founders Fund 2019).

[9] Aggression in the Life-World is significantly shorter than either of the other two dissertations submitted to the sociology department at Frankfurt that year: Margaret Ann Griesese’s The Brazilian Women’s Movement Against Violence clocked in at 314 pages and Konstantinos Tsapakidis’s Collective Memory and Cultures of Resistance in Ancient Greek Music at 267; Karp’s is 129.

[10] Angela Nagle (2017) put forth an extreme version of this argument, claiming that the excesses of “social justice warrior” identity politics provoked the formation of the alt-right and that trolls like Milo Yiannopoulos were only replicating tactics of “transgression” that had been pioneered by leftist intellectuals like bell hooks and institutionalized on liberal campuses and in liberal media. Kakutani similarly argued that the Trumpist right was simply taking up tactics that the relativism of “postmodernism” had pioneered in the 1960s (2018, 18).

[11] In The Diversity Myth, Sacks and Thiel describe one instance of resistance to the Stanford speech code, which was adopted in May 1990 and revoked in March 1995, as heroic. The incident took place on the night of January 19, 1992, when three members of the Alpha Epsilon Pi fraternity, Michael Ehrman, Keith Rabois, and Bret Scher, were walking home from a party through one of Stanford’s residential dormitories. Rabois, then a first-year law student, began shouting slurs at the home of a resident tutor in the dormitory, who had been involved in the expulsion of Ehrman’s brother Ken from residential housing four years earlier, after Ken called the resident tutor assigned to him a “faggot.” “Faggot! Hope you die of AIDS!” Rabois shouted. “Can’t wait until you die, faggot.” He later confirmed and defended these statements in a letter to the Stanford Daily. “Admittedly, the comments made were not very articulate, nor very intellectual nor profound,” he wrote. “The intention was for the speech to be outrageous enough to provoke a thought of ‘Wow, if he can say that, I guess I can say a little more than I thought.’” The speech code, which had not until that point been used to punish any student, was not used to punish Rabois; however, Thiel and Sacks describe the criticism of Rabois from administrators and fellow students that followed as a “witch hunt” (1995, 162-75). Rabois subsequently transferred to Harvard but went on to work with Thiel at PayPal and, later, as a partner at Founders Fund. More recently, the blog post that Founders Fund published to announce the Hereticon conference cited in Footnote 8 described violating taboos on speech as its goal: “Imagine a conference for people banned from other conferences. Imagine a safe space for people who don’t feel safe in safe spaces. Over three nights we’ll feature many of our culture’s most important troublemakers in the fields of knowledge necessary to the progressive improvement of our civilization” (2019).

    _____

    Works Cited

  • Sareeta Amrute — Sounding the Flat Alarm (Review of Shoshana Zuboff, The Age of Surveillance Capitalism)

    Sareeta Amrute — Sounding the Flat Alarm (Review of Shoshana Zuboff, The Age of Surveillance Capitalism)

    a review of Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019)

    by Sareeta Amrute

Shoshana Zuboff’s The Age of Surveillance Capitalism begins badly: the author’s house burns down. Her home is struck by lightning, and it takes Zuboff a few minutes to realize the enormity of the conflagration happening all around her and to escape. The book, written after the fire goes out, is a warning about the enormity of the changes kindled while we slept. Zuboff describes a world in which autonomy, agency, and privacy–the walls of her house–are under threat from a corporate apparatus that records everything in order to control behavior. That act of monitoring and recording inaugurates a new era in the development of capitalism, one that Zuboff believes is destructive of both individual liberty and democratic institutions.

Surveillance Capitalism is the alarm to all of us to get out of the house, lest it burn down all around us. In making this warning, however, Zuboff discounts the long history of surveillance outside the middle-class enclaves of Europe and the United States and assumes that protecting the privacy of individuals in those same enclaves will solve the problem of surveillance for the Rest.

The house functions as a metaphor throughout the book in three ways. First, it is a warning about how difficult it is to recognize a radical remaking of our world as it is happening: the change is akin to a lightning strike. Second, it is an indicator of the kind of world we inhabit: a world that could enhance life but instead treats it as a resource to be extracted. Third, the idea of the house as protection offers a solution to the first two problems.

Zuboff contrasts an early moment of the digitally connected world, an internet of things that was on a closed circuit within one house, with the current moment, where the same devices are wired to the companies that make them. For Zuboff, that difference demonstrates the exponential changes that happened between the early promise of the internet and its current malformation. Surveillance Capitalism argues that from the connective potential of the early Internet has come the current dystopian state of affairs, where human behavior is monitored by companies in order to nudge that behavior toward predetermined ends. In this way, Surveillance Capitalism reverses an earlier moment of connectivity boosterism, exemplified by the title of Thomas Friedman’s popular 2005 book, The World is Flat, which celebrated technologically produced globalization.[1] The mid-to-late 2000s witnessed a significant critique of the flat world hypothesis, which could be summed up as an argument for both the vast unevenness of the world and for the continuous remaking of global tropes into local and varied meanings. Yet here we are again, it seems, in 2020, except instead of celebrating flatness, we are sounding the flat alarm.

The book’s very dimensions–it is a doorstop, on purpose–act as an inoculation against the thinness and flatness Zuboff diagnoses as predominant features of our world. Zuboff argues that these features are unprecedented, that they mark an extreme deviation from capitalism as it has been. They therefore require both a new name and new analytic tools. The name “surveillance capitalism” describes information-gathering enterprises that are unprecedented in human history, and that information, Zuboff writes, is used to predict “our futures for the sake of others’ gain, not ours” (11). As tech companies increasingly use our data to steer behavior towards products and advertising, our ability to experience a deep interiority where we can exercise autonomous choice shrinks. Importantly for Zuboff, these companies collect not just the data we willingly give, but the data exhaust that we often unknowingly and unintentionally emit as we move through a world mediated by our devices. Behavioral nudges mark for Zuboff the ultimate endpoint for a capitalism gone awry, a capitalism that drives humans to abandon free will in favor of being governed by corporations that use aggregate data about individual interactions to determine future human action.

Zuboff’s flat alarm usefully takes the reader through the philosophical underpinnings of behaviorism, following the work of B.F. Skinner, a psychologist working at Harvard in the mid-twentieth century who believed adjusting human behavior was a matter of changing external environments through positive and negative stimuli, or reinforcements. Zuboff argues that behaviorist attitudes toward the world, considered outré in their time, have moved to the heart of Silicon Valley philosophies of disruption, where they meet a particular mode of capital accumulation driven by the logics of venture, neutrality, and macho meritocracies. The result is an ideology of tools, and of making humans into tools, that Zuboff terms instrumentarianism: one driven at once to produce companies that are profitable for venture capitalists and investors and to treat human beings as sources of data to be turned toward profitability. Widespread surveillance is a necessary feature of this new world order because it is through that observation of every detail of human life that these companies can amass the data they need to turn a profit by predicting and ultimately controlling, or tuning, human behavior.

Zuboff identifies key figures in the development of surveillance capitalism, including the aforementioned Skinner. Her particular mode of critique tends to focus on CEOs, and Zuboff reads their pronouncements as signs of the legacy of behaviorism in the C-Suites of contemporary firms. Zuboff also spends several chapters situating the critics of these surveillance capitalists as those who need to raise the flat world alarm. She compares this need to both her personal experience with the house fire and the experience of thinkers such as Hannah Arendt writing on totalitarianism. Here, she draws an explicit parallel between totalitarianism and surveillance capital. Zuboff argues that just as totalitarianism was unthinkable as it was unfolding, so too does surveillance capitalism seem an impossible future given how we like to think about human behavior and its governance. Zuboff’s argument here is highly persuasive, since she is suggesting that the critics will always come to realize what it is they are critiquing just before it is too late to do anything about it. She also argues that behaviorism is in some sense the inverse of state-governed totalitarianism: while totalitarianism attempted to discipline humans from the inside out, surveillance capitalism is agnostic when it comes to interiority–it only deals in, and tries to engineer, surface effects. For all this ‘neutrality’ over and against belief, it is equally oppressive, because it aims at social domination.

Previous reviews have provided an overview of the chapters in this book; I will not repeat the exercise, except to say that the introduction nicely lays out her overall argument and could be used effectively to broach the topic of surveillance for many audiences. The chapters outlining B.F. Skinner’s imprint on behaviorist ideologies are also useful for providing historical context to the current age, as is the general story of Google’s turn toward profitability as told in Part I. And yet the promise of these earlier chapters–particularly the nice turn of phrase “behavioral means of production”–yields in the latter chapters to an impoverished account of our options and of the contradictions at work within tech companies. These lacunae are due at least in part to Zuboff’s choice of revolutionary subject: the middle-class consumer.

Toward the end of Surveillance Capitalism, Zuboff rebuilds her house, this time with thicker walls. She uses her house’s regeneration to argue for a philosophical concept she calls the “right to sanctuary,” based largely on the writings of Gaston Bachelard, whose Poetics of Space describes for Zuboff how the shelter of home shapes “many of our most fundamental ways of making sense of experience” (477). Zuboff believes that surveillance capitalists want to bring down all these walls, for the sake of opening up our every action to collection and our every impulse to guidance from above. One might pause here and wonder whether the breaking down of walls is not fundamental to capitalism from the beginning, rather than an aberration of the current age. In other words, does the age of surveillance mark such a radical break from the general thrust of capital’s need to open up new markets and exploit new raw materials? Or, more to the point, for whom does it signify a radical aberration? Posing this question would bring into focus the need to interrogate the complicity of the very categories of autonomy, agency, and privacy in the extension of capitalism across geographies, and to historicize the production of interiority within that same frame.

    Against the contemporary tendency toward effacing the interior life of families and individuals, Zuboff offers sanctuary as the right to protection from surveillance. In this moment, that protection needs thick walls. For Zuboff, those walls need to be built by young people–one gets the sense that she is speaking across these sections to her own children and those of her children’s generation. The problem with describing sanctuary in this way is that it narrows the scope for both understanding the stakes of surveillance and recognizing where the battles for control over data will be fought.

    As a broadside, Surveillance Capitalism works through a combination of rhetoric and evidence. Zuboff hopes that a younger generation will fight the watchers for control over their own data. Yet, by addressing largely a well-off, college-educated, and young audience, Zuboff restricts the people who are being asked to take up the cause, and fails to ask the difficult question of what it would take to build a house with thicker walls for everyone.

A persistent concern while reading this book is whether its analysis can encompass otherwheres. The populations that are most at risk under surveillance capitalism include immigrants, minorities, and workers, both within and outside the United States. The framework of data exhaust and its use to predict and govern behavior does not quite illuminate the uses of data collection to track border crossers, “predict” crime, and monitor worker movements inside warehouses. These relationships require an analysis that can get at the overlap between corporate and government surveillance, which Surveillance Capitalism studiously avoids. The book begins with an analysis of a system of exploitation based on turning data into profits, and argues that the new mode of production makes the motor of capitalism shift from products to information, a point well established by previous literature. Given this analysis, it is astonishing that the last section of the book returns to a defense of individual rights, without stopping to question whether the ‘hive’ forms of organization that Zuboff finds in the logics of surveillance capital may have been a cooptation of radical kinds of social organizing arranged against a different model of exploitation. Leaderless movements like Occupy should be considered fully when describing hives, along with contemporary initiatives like tech worker cooperatives and technical alternatives like local mesh networks. The possibility that these radical forms of social organization may be subject to cooptation by the actors Zuboff describes never appears in the book. Instead, Zuboff appears to mistranslate theories of the subject that locate agency above or below the level of the individual as political acquiescence to a program of total social control. Without taking the step of considering the political potential in ‘hive-like’ social organization, Zuboff’s corrective falls back on notions of individual rights and protections and is unable to imagine a new kind of collective action that moves beyond both individualism and behaviorism. This failure, for instance, skews Zuboff’s arguments toward the familiar ground of data protection as a solution rather than toward the more radical stances of refusal, which question data collection in the first place.

    Zuboff’s world is flat. It is a world in which there are Big Others that suck up an undifferentiated public’s data, Others whose objective is to mold our behavior and steal our free will. In this version of flatness, what was once described positively is now described negatively, as if we had collectively turned a rosy-colored smooth world flat black. Yet, how collective is this experience? How will it play out if the solutions we provide rely on bracketing out the question of what kinds of people and communities are afforded the chance to build thicker walls? This calls forth a deeper issue than simply that of a lack of inclusion of other voices in Zuboff’s account. After all, perhaps fixing the surveillance issue through the kinds of rights to sanctuary that Zuboff suggests would also fix the issue for those who are not usually conceived of as mainstream consumers.

    Except, historical examples ranging from Simone Browne’s explication of surveillance and slavery in Dark Matters to Achille Mbembe’s articulation of necropolitics teach us that consumer protection is a thin filament on which to hang protection for all from overweening surveillance apparatuses, corporate or otherwise. One could easily imagine a world where the privacy rights of well-heeled Americans are protected, but those of others continue to be violated. To reference one pertinent example, companies that are banking on monetizing data through a contractual relationship where individuals sell the data that they themselves own are simultaneously banking on those who need to sell their data to make money. In other words, as legal scholar Stacy-Ann Elvy notes (2017), in a personal data economy low-income consumers will be incentivized to sell their data without much concern for the conditions of sale, even while those who are well-off will have the means to avoid these incentives, resulting in the illusion of individual control and uneven access to privacy determined by degrees of socioeconomic vulnerability. These individuals will also be exposed to a greater degree of risk that their information will not stay secure.

    Simone Browne demonstrates that what we understand as surveillance was developed on and through black bodies, and that these populations of slaves and ex-slaves have developed strategies of avoiding detection, which she calls dark sousveillance. As Browne notes, “routing the study of contemporary surveillance” through the histories of “black enslavement and captivity opens up the possibility for fugitive acts of escape” even while it shows that the normative surveillance of white bodies was built on long histories of experimentations with black bodies (Browne 2015, 164). Achille Mbembe’s scholarship on necropolitics was developed through the insight that some life becomes killable, or in Jasbir Puar’s (2017) memorable phrasing, maimable, at the same time that other life is propagated. Mbembe proposes “necropolitics” to describe “death worlds” where “death,” not life, “is the space where freedom and negotiation happen” and where “vast populations are subjected to conditions of life conferring on them the status of living dead” (Mbembe 2003, 40). The right to sanctuary appears to short circuit the spaces where life has already been configured as available for expropriation through perpetual wounding. Crucial to both Browne and Mbembe’s arguments is the insight that the study of the uneven harms of surveillance concomitantly surfaces the tactics of opposition and the archives of the world that provide alternative models of refuge outside the contractual property relationship evoked across the pages of Surveillance Capitalism.

    All those considered outside the ambit of individualized rights, including those in territories marked by extrajudicial measures, those deemed illegal, those perennially under threat, those who while at work are unprotected, those in unseen workplaces, and those simply unable to exercise rights to privacy due to law or circumstance, have little place in Zuboff’s analysis. One only has to think of Kashmir, and the access that people with no ties to this place will now have to building houses there, to begin to grasp the contested politics of home-building.[2] Without an acknowledgement of the limits of both the critique of surveillance capitalism and the agents of its proposed solutions, it seems this otherwise promising book will reach the usual audiences and have the usual effect of shoring up some peoples’ and places’ rights even while making the rest of the world and its populations available for experiments in data appropriation.

    _____

    Sareeta Amrute is Associate Professor of Anthropology at the University of Washington. Her scholarship focuses on contemporary capitalism and ways of working, and particularly on the ways race and class are revisited and remade in sites of new economy work, such as coding and software economies. She is the author of the book Encoding Race, Encoding Class: Indian IT Workers in Berlin (Duke University Press, 2016) and recently published the article “Of Techno-Ethics and Techno-Affects” in Feminist Review.

    Back to the essay

    _____

    Notes

    [1] Friedman (2005) attributes this phrase to Nandan Nilekani, then Co-Chair of the Indian tech company Infosys (and subsequently Chair of the Unique Identification Authority of India).

    [2] Until 2019, Articles 370 and 35A of the Indian Constitution granted the territories of Jammu and Kashmir special status, which allowed the state to keep on its books laws restricting who could buy land and property in Kashmir by allowing the territories to define who counted as a permanent resident. After the abrogation of Article 370, rumors swirled that the rich from Delhi and elsewhere would now be able to purchase holiday homes in the area. See, e.g., Devansh Sharma, “All You Need to Know about Buying Property in Jammu and Kashmir”; Parvaiz Bukhari, “Myth No 1 about Article 370: It Prevents Indians from Buying Land in Kashmir.”

    _____

    Works Cited

    • Browne, Simone. 2015. Dark Matters: On the Surveillance of Blackness. Durham, NC: Duke University Press.
    • Elvy, Stacy-Ann. 2017. “Paying for Privacy and the Personal Data Economy.” Columbia Law Review 117:6 (Oct). 1369-1460.
    • Friedman, Thomas. 2005. The World Is Flat: A Brief History of the Twenty-First Century. New York: Farrar, Straus and Giroux.
    • Mbembe, Achille. 2003. “Necropolitics.” Public Culture 15:1 (Winter). 11-40.
    • Mbembe, Achille. 2019. Necropolitics. Durham, NC: Duke University Press.
    • Puar, Jasbir K. 2017. The Right to Maim: Debility, Capacity, Disability. Durham, NC: Duke University Press.

     

  • David Newhoff —  The Harms of Digital Tech and Tech Law (Review of Goldberg, Nobody’s Victim: Fighting Psychos, Stalkers, Pervs, and Trolls)

    David Newhoff — The Harms of Digital Tech and Tech Law (Review of Goldberg, Nobody’s Victim: Fighting Psychos, Stalkers, Pervs, and Trolls)

    a review of Carrie Goldberg (with Jeannine Amber), Nobody’s Victim: Fighting Psychos, Stalkers, Pervs, and Trolls (Plume, 2019)

    by David Newhoff

    ~

    During an exchange on my blog in 2014 with an individual named Anonymous—it must have been a very popular baby name at some point—I was told, “Yes, yes, David, show us on the doll where the Internet touched you, because we all know that all evil comes from there.”  That discussion was in the context of the internet industry’s anti-copyright agenda, but the smugness of the response, lurking behind a concealed identity while making an eye-rolling allusion to sexual assault, is characteristic of the tech-bro culture that dismisses any conversation about the darker aspects of digital life.  In fact, I am fairly sure it was the same Anonymous who decided that I had “failed the free speech test” because I wrote encouragingly about the prospect of making the conduct generally referred to as “revenge porn” a federal crime.

    Those old exchanges, conducted in the safety of the abstract, came rushing into the foreground while I read attorney Carrie Goldberg’s Nobody’s Victim:  Fighting Psychos, Stalkers, Pervs, and Trolls, because Goldberg and her colleagues do not address conduct like “revenge porn” in the abstract: they deal with it as a tangible and terrifying reality.  It is at her Brooklyn law firm where the victims of that crime (and other forms of harassment and abuse) arrive shattered, frightened and suicidally desperate to escape the hell their lives have become—often with the push of a button.  These are people who can show us exactly how and where the “internet touched” them, and Goldberg’s book is a harrowing tutorial in the various ways online platforms provide opportunity, motive, sanctuary, and even profit for individuals who purposely choose to destroy other human beings.

    Nobody’s Victim reads like an anthology of short thriller/horror stories but for the fact that each of the terrorized protagonists is a real person, and far too many of them are children.  These infuriating anecdotes are interwoven with the story of Goldberg’s own transformation from a young woman nearly destroyed by predatory men to become, as she puts it, the attorney she needed when she was in trouble.  The result is both an inspiring narrative of personal triumph over adversity and a rigorous critique of our inadequate legal framework, which needlessly exacerbates the suffering of people targeted by life-threatening attacks—attacks that were simply not possible before the internet as we know it.

    Covering a lot of ground—from stalking to sextortion—Goldberg tells the stories of her archetypal clients, along with her own jaw-dropping experiences, in a voice that pairs the discipline of a lawyer with the passion of a crusader. “We can be the army to take these motherfuckers down,” her introduction concludes, and “What happened to you matters,” is the mantra of her epilogue.  It is clear that the central message she wants to convey is one of empowerment for the constituency she represents, but the details are chilling to say the least.

    Anyone anywhere can have his or her life torn apart by remote control—i.e. via the web.  All the malefactor really needs is basic computer skills, a little too much time on his hands, and a profoundly broken moral compass.  Psychos, stalkers, pervs, trolls, and assholes are all specific types of criminals in the “Carrie Goldberg Taxonomy of Offenders.”  For instance, the ex-boyfriend who uploads non-consensual intimate images to a revenge-porn site is a psycho, while the site operator, profiting off the misery of others, is an asshole.

    As Goldberg notes in Chapter 6, by the year 2014, there were about 3,000 websites dedicated to hosting revenge porn.  That is a hell of a lot of guys willing to expose their ex-girlfriends to a range of potential trauma—these include public humiliation, job loss, relationship damage, sexual assault, PTSD, and suicide—simply because their partner broke off the relationship.  This volume of men engaging in revenge porn does seem to imply that the existence of the technology itself becomes a motive or rationale for the conduct, but that is perhaps a subject to explore in another post.

    One theme that comes through loud and clear for me in Nobody’s Victim—particularly in the context of the editorial scope of my blog—is that the individual conduct of the psychos, et al. is only slightly less maddening than our systemic failure to protect the victims.  As a cyber-policy matter, that means the chronic misinterpretation of Section 230 of the Communications Decency Act as a speech-right protection and a blanket liability shield for online service providers.

    Taking on Section 230

    Goldberg’s most high-profile client, Matthew Herrick, was the target of a disgruntled ex-boyfriend named Juan Carlos Gutierrez, who tried, via the gay dating app Grindr, to get Herrick at least raped, if not murdered.  By creating several Grindr accounts designed to impersonate Herrick, Gutierrez posted invitations to seek him out for rough, “rape-fantasy” sex, including messages that any protests to stop should be taken as “part of the game.”  Hundreds of men swarmed into Herrick’s life for more than a year—appearing at his home and work, often becoming verbally or physically aggressive upon discovering that he was not offering what they were looking for.

    With Goldberg’s help, Herrick succeeded in getting Gutierrez convicted on felony charges, but what they could never obtain was even the most basic form of assistance from Grindr.  You might think it would be at least common courtesy for an internet business to remove accounts that falsely claim to be you—particularly when those accounts are being used to facilitate criminal threats to your safety and livelihood.  In fact, the smaller dating app Gutierrez had been using called Scruff eagerly and sympathetically complied with Herrick’s plea for help.  But Grindr told him to fuck off by saying, “There’s nothing we can do.”

    Herrick, through Goldberg, sued Grindr for “negligence, deceptive business practices and false advertising, intentional and negligent infliction of emotional distress, failure to warn, and negligent misrepresentation.”  They lost in both the District Court and in the Second Circuit Court of Appeals, principally because most courts continue to read Section 230 of the CDA as absolute immunity for online service providers.  This cognitive dissonance, which chooses to ignore the fact that a matter like Herrick’s plight is wholly unrelated to free speech, is emphasized in an amicus brief that the Electronic Frontier Foundation (EFF) filed in the Second Circuit appeal on behalf of Grindr:

    Intermediaries allow Internet users to connect easily with family and friends, follow the news, share opinions and personal experiences, create and share art, and debate politics. Appellant’s efforts to circumvent Section 230’s protections undermine Congress’s goal of encouraging open platforms and robust online speech.

    Isn’t that pretty?  But what the fuck has any of it got to do with using internet technologies to impersonate someone; to commit libel, slander, or defamation in his/her name; to deploy violent people (or in some cases SWAT teams) against a private individual; or to get someone fired or arrested—and all for the perpetrator’s amusement, vengeance, or profit?  None of that conduct is remotely protected by the speech right, and all of it—all of it—infringes the speech rights and other civil liberties of the victims.  Perhaps most absurdly, organizations like EFF choose to overlook the fact that the first right being denied to someone in Herrick’s predicament is the right to safely access all those invaluable activities enabled by online “intermediaries.”

    No, Grindr did not commit those crimes, but let’s be real.  What was Herrick asking Grindr to do?  Remove the conduits through which crimes were being committed against him—online accounts pretending to be him.  Scruff complied, and I didn’t feel a tremor in the free speech right, did you?   If we truly cannot make a legal distinction between Herrick’s circumstances and all that frilly bullshit the EFF likes to repeat ad nauseum, then, we are clearly too stupid to reap the benefits of the internet while mitigating its harms.

    Suffice it to say, a fight over Section 230 is indeed brewing.  As it heats up, Silicon Valley will marshal its seemingly endless resources to defend the status quo, and they will carpet bomb the public with messages that any change to this law will be an existential threat to the internet as we know it.  There is some truth to that, of course, but the internet as we know it needs a lot of work.  Meanwhile, if anyone is going to win against Big Tech’s juggernaut on this issue, it will be thanks to the leadership of (mostly) women like Carrie Goldberg, her colleagues, and her clients.

    It is an unfortunate axiom that policy rarely changes without some constituency suffering harm for a period of time; and those are exactly the people whose stories Goldberg is in a position to tell—in court, in Congress, and to the public.  If you read Nobody’s Victim and still insist, like my friend Anonymous, this is all a theoretical debate about anomalous cases, largely mooted by the speech right, there’s a pretty good chance you’re an asshole—if not a psycho, stalker, perv, or troll.  And that clock you hear ticking is actually the sound of Carrie Goldberg’s signature high heels heading your way.

    _____

    David Newhoff is a filmmaker, writer, and communications consultant, and an activist for artists’ rights, especially as they pertain to the erosion of copyright by digital technology companies. He is writing a book about copyright due out in Fall 2020. He writes about these issues frequently as @illusionofmore on Twitter and on the blog The Illusion of More, on which an earlier version of this review first appeared.

    Back to the essay

  • Audrey Watters — Education Technology and The Age of Surveillance Capitalism (Review of Shoshana Zuboff, The Age of Surveillance Capitalism)

    Audrey Watters — Education Technology and The Age of Surveillance Capitalism (Review of Shoshana Zuboff, The Age of Surveillance Capitalism)

    a review of Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019)

    by Audrey Watters

    ~

    The future of education is technological. Necessarily so.

    Or that’s what the proponents of ed-tech would want you to believe. In order to prepare students for the future, the practices of teaching and learning – indeed the whole notion of “school” – must embrace tech-centered courseware and curriculum. Education must adopt not only the products but the values of the high tech industry. It must conform to the demands for efficiency, speed, scale.

    To resist technology, therefore, is to undermine students’ opportunities. To resist technology is to deny students their future.

    Or so the story goes.

    Shoshana Zuboff weaves a very different tale in her book The Age of Surveillance Capitalism. Its subtitle, The Fight for a Human Future at the New Frontier of Power, underscores her argument that the acquiescence to new digital technologies is detrimental to our futures. These technologies foreclose rather than foster future possibilities.

    And that sure seems plausible, what with our social media profiles being scrutinized to adjudicate our immigration status, our fitness trackers being monitored to determine our insurance rates, our reading and viewing habits being manipulated by black-box algorithms, our devices listening in and nudging us as the world seems to totter towards totalitarianism.

    We have known for some time now that tech companies extract massive amounts of data from us in order to run (and ostensibly improve) their services. But increasingly, Zuboff contends, these companies are now using our data for much more than that: to shape and modify and predict our behavior – “‘treatments’ or ‘data pellets’ that select good behaviors,” as one ed-tech executive described it to Zuboff. She calls this “behavioral surplus,” a concept that is fundamental to surveillance capitalism, which she argues is a new form of political, economic, and social power that has emerged from the “internet of everything.”

    Zuboff draws in part on the work of B. F. Skinner to make her case – his work on behavioral modification of animals, obviously, but also his larger theories about behavioral and social engineering, best articulated perhaps in his novel Walden Two and in his most controversial book Beyond Freedom and Dignity. By shaping our behaviors – through nudges, rewards, “data pellets,” and the like – technologies circumscribe our ability to make decisions. They impede our “right to the future tense,” Zuboff contends.

    Google and Facebook are paradigmatic here, and Zuboff argues that the former was instrumental in discovering the value of behavioral surplus when it began, circa 2003, using user data to fine-tune ad targeting and to make predictions about which ads users would click on. More clicks, of course, led to more revenue, and behavioral surplus became a new and dominant business model, at first for digital advertisers like Google and Facebook but shortly thereafter for all sorts of companies in all sorts of industries.

    And that includes ed-tech, of course – most obviously in predictive analytics software that promises to identify struggling students (such as Civitas Learning) and in behavior management software that’s aimed at fostering “a positive school culture” (like ClassDojo).

    Google and Facebook, whose executives are clearly the villains of Zuboff’s book, have keen interests in the education market too. The former is much more overt, no doubt, with its Google Suite product offerings and its ubiquitous corporate evangelism. But the latter shouldn’t be ignored, even if it’s seen as simply a consumer-facing product. Mark Zuckerberg is an active education technology investor; Facebook has “learning communities” called Facebook Education; and the company’s engineers helped to build the personalized learning platform for the charter school chain Summit Schools. The kinds of data extraction and behavioral modification that Zuboff identifies as central to surveillance capitalism are part of Google and Facebook’s education efforts, even if laws like COPPA prevent these firms from monetizing the products directly through advertising.

    Despite these companies’ influence in education, despite Zuboff’s reliance on B. F. Skinner’s behaviorist theories, and despite her insistence that surveillance capitalists are poised to dominate the future of work – not as a division of labor but as a division of learning – Zuboff has nothing much to say about how education technologies specifically might operate as a key lever in this new form of social and political power that she has identified. (The quotation above from the “data pellet” fellow notwithstanding.)

    Of course, I never expect people to write about ed-tech, despite the importance of the field historically to the development of computing and Internet technologies or the theories underpinning them. (B. F. Skinner is certainly a case in point.) Intertwined with the notion that “the future of education is necessarily technological” is the idea that the past and present of education are utterly pre-industrial, and that digital technologies must be used to reshape education (and education technologies) – this rather than recognizing the long, long history of education technologies and the ways in which these have shaped what today’s digital technologies generally have become.

    As Zuboff relates the history of surveillance capitalism, she contends that it constitutes a break from previous forms of capitalism (forms that Zuboff seems to suggest were actually quite benign). I don’t buy it. She claims she can pinpoint this break to a specific moment and a particular set of actors, positing that the origin of this new system was Google’s development of AdSense. She does describe a number of other factors at play in the early 2000s that led to the rise of surveillance capitalism: notably, a post–9/11 climate in which the US government was willing to overlook growing privacy concerns about digital technologies and to use them instead to surveil the population in order to predict and prevent terrorism. And there are other threads she traces as well: neoliberalism and the pressures to privatize public institutions and deregulate private ones; individualization and the demands (socially and economically) of consumerism; and behaviorism and Skinner’s theories of operant conditioning and social engineering. While Zuboff does talk at length about how we got here, the “here” of surveillance capitalism, she argues, is a radically new place with new markets and new socioeconomic arrangements:

    the competitive dynamics of these new markets drive surveillance capitalists to acquire ever-more-predictive sources of behavioral surplus: our voices, personalities, and emotions. Eventually, surveillance capitalists discovered that the most-predictive behavioral data come from intervening in the state of play in order to nudge, coax, tune, and herd behavior toward profitable outcomes. Competitive pressures produced this shift, in which automated machine processes not only know our behavior but also shape our behavior at scale. With this reorientation from knowledge to power, it is no longer enough to automate information flows about us; the goal now is to automate us. In this phase of surveillance capitalism’s evolution, the means of production are subordinated to an increasingly complex and comprehensive ‘means of behavioral modification.’ In this way, surveillance capitalism births a new species of power that I call instrumentarianism. Instrumentarian power knows and shapes human behavior toward others’ ends. Instead of armaments and armies, it works its will through the automated medium of an increasingly ubiquitous computational architecture of ‘smart’ networked devices, things, and spaces.

    As this passage indicates, Zuboff believes (but never states outright) that a Marxist analysis of capitalism is no longer sufficient. And this is incredibly important as it means, for example, that her framework does not address how labor has changed under surveillance capitalism. Because even with the centrality of data extraction and analysis to this new system, there is still work. There are still workers. There is still class and plenty of room for an analysis of class, digital work, and high tech consumerism. Labor – digital or otherwise – remains in conflict with capital. The Age of Surveillance Capitalism, as Evgeny Morozov’s lengthy review in The Baffler puts it, might succeed as “a warning against ‘surveillance dataism,’” but largely fails as a theory of capitalism.

    Yet the book, while ignoring education technology, might be at its most useful in helping further a criticism of education technology in just those terms: as surveillance technologies, relying on data extraction and behavior modification. (That’s not to say that education technology criticism shouldn’t develop a much more rigorous analysis of labor. Good grief.)

    As Zuboff points out, B. F. Skinner “imagined a pervasive ‘technology of behavior’” that would transform all of society but that, at the very least he hoped, would transform education. Today’s corporations might be better equipped to deliver technologies of behavior at scale, but this was already a big business in the 1950s and 1960s. Skinner’s ideas did not only exist in the fantasy of Walden Two. Nor did they operate solely in the psych lab. Behavioral engineering was central to the development of teaching machines; and despite the story that somehow, after Chomsky denounced Skinner in the pages of The New York Review of Books, no one “did behaviorism” any longer, it remained integral to much of educational computing on into the 1970s and 1980s.

    And on and on and on – a more solid through line than the all-of-a-suddenness that Zuboff narrates for the birth of surveillance capitalism. Personalized learning – the kind hyped these days by Mark Zuckerberg and many others in Silicon Valley – is just the latest version of Skinner’s behavioral technology. Personalized learning relies on data extraction and analysis; it urges and rewards students and promises everyone will reach “mastery.” It gives the illusion of freedom and autonomy perhaps – at least in its name; but personalized learning is fundamentally about conditioning and control.

    “I suggest that we now face the moment in history,” Zuboff writes, “when the elemental right to the future tense is endangered by a panvasive digital architecture of behavior modification owned and operated by surveillance capital, necessitated by its economic imperatives, and driven by its laws of motion, all for the sake of its guaranteed outcomes.” I’m not so sure that surveillance capitalists are assured of guaranteed outcomes. The manipulation of platforms like Google and Facebook by white supremacists demonstrates that it’s not just the tech companies who are wielding this architecture to their own ends.

    Nevertheless, those who work in and work with education technology need to confront and resist this architecture – the “surveillance dataism,” to borrow Morozov’s phrase – even if (especially if) the outcomes promised are purportedly “for the good of the student.”

    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines, forthcoming from The MIT Press. She maintains the widely-read Hack Education blog, on which an earlier version of this piece first appeared, and writes frequently for The b2o Review Digital Studies section on digital technology and education.

    Back to the essay