Reviews and analysis of scholarly books about digital technology and culture, as well as of articles, legal proceedings, videos, social media, digital humanities projects, and other emerging digital forms, offered from a humanist perspective, in which our primary intellectual commitment is to the deeply embedded texts, figures, themes, and politics that constitute human culture, regardless of the medium in which they occur.

  • Zachary Loeb — Specters of Ludd (Review of Gavin Mueller, Breaking Things at Work)

    a review of Gavin Mueller, Breaking Things at Work: The Luddites Were Right about Why You Hate Your Job (Verso, 2021)

    by Zachary Loeb

    A specter is haunting technological society—the specter of Luddism.

    Granted, as is so often the case with hauntings, reactions to this specter are divided: there are some who are frightened, others who scoff at the very idea of it, quite a few dream about designing high-tech gadgets with which to conclusively bust this ghost so that it can bother us no more, and still others are convinced that this specter is trying to tell us something important if only we are willing to listen. And though there are plenty of people who have taken to scoffing derisively whenever the presence of General Ludd is felt, there would be no need to issue those epithetic guffaws if they were truly directed at nothing. The dominant forces of technological society have been trying to exorcize this spirit, but instead of banishing this ghost they only seem to be summoning it.

    The problem with spectral Luddism is that one can feel its presence without necessarily understanding what it means. When one encounters Luddism in the world today it still tends to be as either a term of self-deprecation used to describe why someone has an old smartphone, or as an insult that is hurled at anyone who dares question “the good news” presented by the high priests of technology. With Breaking Things at Work: The Luddites Were Right About Why You Hate Your Job, Gavin Mueller challenges those prevailing attitudes and ideas about Luddism, instead articulating a perspective on Luddism that finds in it a vital analysis with which to respond to techno-capitalism. Luddism, in Mueller’s argument, is not simply a term to describe a specific group of workers at the turn of the 19th century, rather Luddism can be seen in workers’ struggles across centuries.

    At core, Breaking Things at Work is less of a history of Luddism, and more of a manifesto. Historic movements and theorists are thoughtfully engaged with throughout the volume, but this is consistently in service of making an argument about how we should be responding to technology in the present. While contemporary books about technology (even ones that advance a critical attitude) have a tendency to carefully couch any criticism in neatly worded expressions of love for technology, Mueller’s book is refreshing in the forthrightness with which he expresses the view that “technology often plays a detrimental role in working life, and in struggles for a better one” (4). In clearly setting out the particular politics of his book, Mueller makes his goal clear: “to make Marxists into Luddites” and “to turn people critical of technology into Marxists” (5). This is no small challenge, as Mueller notes that “historically Marxists have not been critical of technology” (4) on the one hand, and that “much of contemporary technological criticism comes from a place of romantic humanism” (6) on the other hand. For Mueller “the problem of technology is its role in capitalism” (7), but the way in which many of these technologies have been designed to advance capitalism’s goals makes it questionable whether all of these machines can necessarily be repurposed. Basing his analysis on a history of class struggle, Mueller is not so much setting out to tell workers what to do, as much as he is putting a name on something that workers are already doing.

    Mueller begins the first chapter of his book by explaining who the actual Luddites were and detailing the tactics for which they became legendary. As skilled craft workers in early 19th century England, the historic Luddites saw firsthand how the introduction of new machines resulted in their own impoverishment. Though the Luddites would become famous for breaking machines, it was a tactic they turned to only after their appeals to parliament to protect their trades went ignored. With broad popular support, the Luddites donned the anonymizing mask of General Ludd, and took up arms in their own defense. Contrary to the popular myth in which the Luddites smashed every machine out of a fit of wild hatred, the historic record shows that the Luddites were quite focused in their targets, picking workshops and factories where the new machines had been used as an excuse to lower wages. Luddism did not die out in its moment because the tactics were seen as pointless, rather the movement came to an end at the muzzle of a gun, as troops were deployed to quell the uprising—with many of the captured Luddites being either hanged or transported. Nevertheless, this was certainly not the last time that machine-breaking was taken up as a tactic: not long after the Luddite risings the Swing Riots were even more effective in their targeting of machinery. And, furthermore, as Mueller makes clear throughout his book, the tactic of seeing the machine as a site for resistance continues to this day.

    Perhaps the key takeaway from the historic Luddites is not that they smashed machines, but that they identified machinery as a site of political struggle. They did not take hammers to stocking frames out of a particular hatred for these contraptions; rather they took hammers to stocking frames as a way of targeting the owners of those stocking frames. These struggles, in which groups of workers came together with community support, demonstrate how the Luddites’ various tactics served as “practices of political composition” (16, italics in original text) whereby the Luddites came to see themselves as workers with shared interests that were in opposition to the interests of their employers. The Luddites were not to be assuaged by appeals to the idea of progress or by lurid fantasies of a high-tech utopia; they could see the technological changes playing out in real time in front of them, and what they could see there was not a distant future of plenty, but an immediate future of immiseration. The Luddites were not fools, quite the contrary: they saw exactly what the new machines meant for themselves and their communities, and so they decided to do something about it.

    Despite the popular support the Luddites enjoyed in their own communities, and the extent to which machine-breaking remained a common tactic even after the Luddite risings had been repressed, already in the 19th century more optimistic attitudes towards technology were ascendant. Mueller detects some of this optimism in Karl Marx, noting that “there is evidence for a technophilic Marx” (19), yet Mueller pushes back against the common assumption that Marx was a technological determinist. While recognizing that Marx (and Engels) had made some less than generous comments about the Luddites, Mueller emphasizes Marx’s attention to the real struggles of workers against capitalism and notes that “the struggles against machines were the struggles against the society that utilized them” (24, italics in original text). And the frequency with which machines were becoming targets of workers’ ire in the 19th century demonstrates the way in which workers saw the machines not as neutral tools but as instruments of the factory owners’ power. While defenders of mass machinery may point to the abundance such machines create, figures like William Morris pushed back on these promises of abundance by noting that such machinery sapped any pleasure out of the act of laboring while the abundance was just a share in shoddy goods. In Marx and Morris, as well as in the actual struggles of workers, Mueller points to the importance of technology becoming recognized as a site of political struggle—emphasizing that in workers’ resistance to technology can be found “a more liberatory politics of work and technology” (29).

    That the 19th century was home to the most renowned fight against technology does not mean that these struggles (be they physical or philosophical) ended with the arrival of the 20th century. While much is often made of the “scientific management” of Frederick W. Taylor, less is often said of the ways in which workers resisted this system that turned them into living cogs—and even less is usually said of the strike at the Watertown Arsenal wherein (quite unlike the case of the Luddites) Congress sided with the workers (and their union). Nevertheless, the Taylorist viewpoint that “capitalist technologies like scientific management” were “an objective way to improve productivity and therefore the condition of workers” (35) was a viewpoint shared by a not inconsiderable number of socialists in those years. Within the international left of the early 20th century, debates about the meaning of machinery were heated: some like Karl Kautsky took a deterministic stance that developments in capitalist production methods were paving the way for communism; others like the IWW activist Elizabeth Gurley Flynn cheered the tactic of workers sabotaging their machines; still others like Thorstein Veblen dreamed of a technocratic society overseen by benevolent engineers; various Bolsheviks argued about the deployment of Taylorist techniques in the new Soviet state; and standing at the edge of the fascist abyss Walter Benjamin gestured towards a politics that does not praise speed but searches desperately for an emergency brake.

    While the direction of debates about technology in the early 20th century was significantly disrupted by the Second World War (just as it had been upended by the First World War), in the aftermath of Auschwitz and Hiroshima debates about technology and work only intensified. Automation represented a new hope to business owners even as it represented a new threat to workers, as automation could sap the power of agitated workers while centralizing further control in the hands of management. Importantly, automation was not simply accepted by workers, and Mueller notes “on the vanguard of opposing automation were those often marginalized by the official workers’ movement—women and African Americans” (63). Opposition to automation often took the form of “wildcat strikes” with union leaders failing to keep pace with the radicalism and fury of their members. In this period of post-war tumult, left-wing thinkers ranging from Raya Dunayevskaya to Herbert Marcuse to Shulamith Firestone articulated a spectrum of different responses to the promises and perils of automation—yet even as they theorized, workers in mines, factories, and at the docks continued to strike against what the introduction of automation meant for their lives. Simultaneously, automation became a topic of interest, and debate, within the social movements of the time, with automation being viewed by those movements as both threat and hope.

    Lurking in the background of many of the discussions around automation was the spread of computers. As increasing numbers of people became aware of them, computers quickly conjured both adoration and dread—they were a frequent target of student activists in the 1960s and 1970s, even as elements of the counterculture (such as Stewart Brand’s Whole Earth Catalog) were enthusiastic about computers. Businesses were quick to adopt computers, and these machines often accelerated the automation of workplaces (while opening up new types of work to the threat of being automated). Yet the rise of the computer also gave rise to a new sort of figure, “the hacker,” whose very technological expertise positioned them to challenge computerized capitalism. Though the “politics of hackers are complicated,” Mueller emphasizes that they are often some of technology’s “most critical users, and they regularly deploy their skills to subvert measures by corporations to rationalize and control computer user behavior. They are often Luddites to the core” (105). Far from being uncritical celebrants of technology, many hackers turn their intimate knowledge of computers into a way of knowing where best to strike—even as they champion initiatives such as free software, peer-to-peer sharing, and tools for avoiding surveillance.

    Yet as computers have infiltrated nearly every space and moment, it is not only hackers who find themselves regularly interacting with these machines. The omnipresence of computers creates a situation wherein “work seeps into every nook and cranny of human existence via capitalist technologies, accompanied by the erosion of wages and free time” (119). As more and more of our activities become fodder for corporate recommendation algorithms, we find ourselves endlessly working for Facebook and Google even as we respond to work emails at 1 a.m. Despite the promises of digital plenty, computing technologies (broadly defined) seem to be giving rise to an increasing sense of frustration, and though there are some who advocate for an anodyne “tech humanism,” it may well be that “the strategy of refusal pursued by the industrial workers of old might be a more promising technique against the depression engines of social media” (122).

    Breaking Things at Work concludes with a call for the radical left to “put forth a decelerationist politics: a politics of slowing down change, undermining technological progress, and limiting capital’s rapacity, while developing organization and cultivating militancy” (127-128). Such a politics entails not a rejection of progress, but a critical reexamination of what it is that is actually meant when the word “progress” is bandied about, as too often what progress stands for is “the progress of elites at the expense of the rest of us” (128). Putting forth such a politics does not require creating something entirely new, but rather recognizing that the elements of just such a politics can be seen repeatedly in workers’ movements and social movements.

    In putting forth a clear definition of “Luddism,” Mueller highlights that Luddism “emphasizes autonomy” by seeking to put control back into the hands of the people actually doing the work, “views technology not as neutral but as a site of struggle,” “rejects production for production’s sake,” “can generalize” into a strategy for mass action, and is “antagonistic,” taking a firm stance in clear opposition to capitalism and capitalist technology. In the increasing frustration with social media, in the growing environmental calls for “degrowth,” and in the cracks showing in the golden calf of technology, the space is opening for a politics that takes up the hammer of Luddism. Recognizing, as it does so, that a hammer can be used not just to smash things that need to be broken, but also to build something different.

    *

    One of the factors that makes Luddism so appealing more than two centuries later is that it is an ideology that still calls out to be developed. The historic Luddites were undoubtedly real people, with real worries, and real thoughts on the tactics that they were deploying—and yet the historic Luddites did not leave any manifestoes or books of their own writing behind. What remains from the Luddites are primarily the letters they sent and snatches of songs in which they were immortalized (which have been helpfully collected in Kevin Binfield’s 2015 Writings of the Luddites). And though one can begin to cobble together a philosophy of technology from reading through those letters, the work of explaining exactly what it is that Luddism means has been a task that has largely fallen to others. Granted, part of what made the Luddites successful in their time was that the mask of General Ludd could be picked up and worn by many individuals, all of whom could claim to be General Ludd (or his representative).

    With Breaking Things at Work, Gavin Mueller has crafted a vital contribution to Luddism, and what makes this book especially important is the way in which it furthers Luddism in a variety of ways. On one level, Mueller’s book provides a solid introduction to, and overview of, Luddite thinking and tactics throughout the ages, which makes the book a useful retort to those who act as though the historic Luddites were the only workers who ever dared oppose machinery. Yet Mueller makes it clear from the outset of his book that he is not primarily interested in writing a history, rather his book has a clear political goal as well—he wishes to raise the banner of General Ludd and encourage others to march behind this standard. Thus, Mueller’s book is simultaneously an account of Luddism’s past, while also an appeal for Luddism’s future. And while Mueller provides a thoughtful consideration of many past figures and movements that have dallied with Luddism, his book concludes with a clear articulation of what a present day Luddism might look like. For those who call themselves Luddites, or those who would call themselves Luddites, Mueller provides a historically grounded but present-focused account of what it meant, and what it can mean, to be a Luddite.

    The clarity with which Mueller defines Luddism in Breaking Things at Work places the book into a genuine debate as to how exactly Luddism should be defined. And this is a debate that Mueller’s book engages with in a particularly provocative way considering how his book is both a scholarly account and an activist manifesto. Writing about the Luddites tends to fall into several camps: works that provide a fairly straightforward historical account of who the original Luddites were and what they literally did (this genre includes works like E.P. Thompson’s The Making of the English Working Class, and Kevin Binfield’s Writings of the Luddites); works that treat Luddism as an idea and a philosophy that is not exclusive to the historic Luddites (this genre includes works like Nicols Fox’s Against the Machine, and Matt Tierney’s Dismantlings); works that emphasize that the tactic of machine-breaking was not practiced exclusively by the Luddites (this genre includes works like Eric Hobsbawm and George Rudé’s Captain Swing, and David Noble’s Progress Without People); and works that draw lines (good or bad) from Luddism to later activist practices (this genre includes approving works like Kirkpatrick Sale’s Rebels Against the Future, and disapproving works like Steven Jones’s Against Technology). Mueller’s Breaking Things at Work does not fit neatly into any single one of those categories: the Marxist analysis makes the book pair nicely with Thompson’s book, the engagement with radical theorists makes the book pair nicely with Tierney’s book, the treatment of machine-breaking as a common tactic makes the book pair nicely with Noble’s book, and the call to arms places the book into debate with books by the likes of Sale and Jones.

    All of which is to say, the meaning of Luddism remains contested terrain. And even though many of technology’s celebrants remain content to use Luddite as an insult, those who would proudly wear the mask of General Ludd are not themselves all in agreement about exactly what this means.

    Mueller has written a wonderfully provocative book, and it is one in which he does not attempt to hide his own opinion behind two dozen carefully composed distractions. Instead, Mueller is quite clear: “to be a good Marxist is to also be a Luddite” (5), and this is a point that leads directly into his goal of turning Marxists into Luddites and making Marxists out of those who are critical of technology. And in his engagement with Marx, Mueller tangles with the perceptions of Marx as technophilic and engages with a variety of Marxist thinkers who fall into a range of camps, all while trying “to be faithful to Marxism’s heretical side, its unofficial channels and para-academic spaces” (vii). And all the while Mueller endeavors to keep his book grounded as a contribution to real struggles around technology in the world today. Considering Mueller’s clear statement of his own position it is likely that some will level their critiques at the book’s Marxism, and still others might critique the book for not being sufficiently Marxist. And as is always the case with books that situate their critique within a particular radical tradition it seems inevitable that some will wonder why their favorite thinker is not included (or does not receive more attention), even as others will wonder why other branches from the tree of the radical left are missing. (Mueller does not spend much time on anarchist thinkers.)

    Overall, the question of whether this book will turn its Marxist readers into Luddites, and its technologically critical readers into Marxists, is one that can only be answered by each reader themselves. For what Mueller’s book presents is an argument, and the way in which a reader nods along or argues back is likely to be heavily influenced by the way they personally define Luddism. And Mueller is not the first to try to rally people beneath the Luddite’s standard.

    In 1990, Chellis Glendinning published her “Notes Towards a Neo-Luddite Manifesto” in the pages of the Utne Reader. Furiously lamenting the ways in which societies were struggling under the onslaught of new technologies, her manifesto was a call to take up oppositional arms. While taking on the mantle of “Neo-Luddite,” the manifesto articulated a Luddism (or Neo-Luddism) that was defined by three principles: “1. Neo-Luddites are not anti-technology,” “2. All technologies are political,” and “3. The personal view of technology is dangerously limited.” Based on these principles, Glendinning’s manifesto laid out a program that included the dismantling of a range of “destructive” technologies (including genetic engineering technologies and computer technologies) and pushed for the search for “new technological forms” that would be “for the benefit of life on Earth”; this in turn was couched in a call for “Western technological societies” to develop a “life-enhancing worldview.” The manifesto drew on the technological criticism of Lewis Mumford, on Langdon Winner’s call for “epistemological Luddism,” and on the uncompromising stance towards technologies deemed destructive typified by Jerry Mander’s Four Arguments for the Elimination of Television.

    The Neo-Luddites are more noteworthy for their attempt to reclaim and redefine Luddism than they are for their success in actually creating a movement. Indeed, the lasting legacy of Neo-Luddism is not that of a vital social movement that fought for (and continues to fight for) the principles Glendinning put forth, but instead that of about half a bookshelf worth of books with “Neo-Luddite” somewhere in their title. There are certainly critiques to be leveled at the Neo-Luddites, but when revisiting Glendinning’s manifesto it is also worth placing it in the moment at which it emerged. The backdrop for Breaking Things at Work is one in which most readers will be accustomed to seemingly omnipresent computing technologies, climate-exacerbated disasters, and a world in which the wealth of tech billionaires grows massively by the minute. By contrast, the backdrop for Glendinning’s manifesto was a moment in which personal computers had not yet achieved ubiquity (no one was carrying the Internet around in their pocket), climate change still seemed like a distant threat, and Mark Zuckerberg was still a child. It is impossible to say whether or not Glendinning’s manifesto, had it been heeded, could have prevented us from getting into our present morass, but preventing us from winding up where we are now certainly seems to have been one of Glendinning’s goals. At the very least, Glendinning and the Neo-Luddites (as well as the thinkers upon whom they drew) are a reminder that the spirit of General Ludd was circulating before you could Google “Luddism.”

    There are many parallels between the stances outlined by Glendinning and those outlined by Mueller, though the key space of conflict between the two is the question of dismantling. Glendinning and the Neo-Luddites were not subtle in their calls for dismantling certain technologies, whereas Mueller is considerably more nuanced in this respect. Here attempts to define Luddism find themselves butting against the degree to which Luddism is destined to always be associated (for better or worse) with the actual breaking of machines. The naming of entire classes of technology that need to be dismantled may appear like indiscriminate smashing, while calls for careful reevaluation of technologies may appear more like thoughtful disassembly. Yet the underlying question for Luddism remains: are certain technologies irredeemable? Are there technologies that we can remake in a different image, or will those technologies only reshape us in their own image? And if the answer is that these technologies cannot be reshaped, then are there some technologies that we need to break before they can finish breaking us, even if we often find ourselves enjoying some of the benefits of those technologies?

    Writing of the reactions from a range of 1960s social movements to the technological changes they were seeing playing out, Mueller notes that the particular technology that evoked “both fear and fascination” was none other than “the computer” (91). This point leads into what is perhaps the most troubling and challenging element of Mueller’s account, as he goes on to argue that hackers and some of their projects (like free software) fit within the legacy of Luddism. I imagine that many hackers will not be too pleased to see themselves described as Luddites, just as I imagine that many self-professed Luddites will scoff at the idea that using bitcoins to buy drugs on the dark web is a Luddite pursuit. Yet the idea that those most familiar with a technology may know exactly where to strike certainly has some noteworthy resonances with the historic Luddites.

    And yet the matter of hackers and “high tech Luddism” raises a much broader question, one that the left has been trying to answer for quite some time, and perhaps the key question for any attempt to formulate a Luddite politics in this moment: what are we to make of the computer? Is the computer (and computing technologies, broadly defined) the offspring of the military-industrial-academic complex with logics of control, surveillance, and dominance so deeply ingrained that it ultimately winds up bending all users to that logic? Despite those origins, are computing technologies something which can be seized upon to allow us to reconfigure ourselves into new sorts of beings (cyborgs, perhaps) to break out of the very categories that capitalism tries to sort us into? Have computers fundamentally altered what it means to be human? Is the computer (and the Internet) simply something that has become so big and so widespread that the best we can hope for is to increase our knowledge of it so that we can perform sabotage strikes while playing in the dark corners? Are computers the “master’s tools”?

    Considering that computer technologies were amongst those that the Neo-Luddites called to be dismantled, it seems pretty clear where they came down on this question. Yet the contemporary discussion on the left around computers, a discussion in which Breaking Things at Work is certainly making an intervention, is quite a bit more divided as to what is to be done with and about computers. At several junctures in his book, Mueller notes that attitudes of technological optimism are starting to break down, yet if you survey the books dealing with technology published by the left-wing publisher Verso Books (which is the publisher of Breaking Things at Work) it is clear that a hopeful attitude towards technology is still present in much of the left. Certainly, there are arguments about the way that tech companies are screwing things up, commentary on the environmental costs of the hunger for high-tech gadgets, and paeans for how the Internet could be different—but it often feels that leftist commentaries blast Silicon Valley for what it has done to computers and the Internet so that the readers of such books can continue believing that the problems with computers and the Internet are what capitalism has done to them, rather than suggesting that these are capitalist tools through and through.

    Is the problem that the train we are on is taking us somewhere we don’t want to go, so we need to slow down so that we can switch tracks? Or is the problem the train itself and we need to hit the emergency brake so that we can get off? To those who have grown accustomed to the comforts of being on board the train, the idea of getting off of it might be a scary thought; it might feel preferable to fight for a more equitable distribution of resources aboard the train, or to fight to seize control of the engine car. Besides, the idea of actually getting off the train seems like little more than a fantasy—it will be hard enough just to get it to reduce its speed. Yet the question remains as to whether the problem is the direction we’re going in, or if the problem is the direction we’re going in and the technology that is taking us in that direction.

    Here it is essential to return to an important fact about the historic Luddites: they were waging their campaign against the introduction of machinery in the moment of those machines’ newness. The machines they attacked had not yet become common, and the moment of negotiation as to what these machines would mean and how they would be deployed was still in flux. When technologies are new they provide a fertile space for resistance, in their moment of freshness they have not yet become taken for granted, previous lifeways have not been forgotten, the skills that were necessary prior to the introduction of the new machine remain vital, and the broader society has not become pleasantly accustomed to their share of machine generated plenitude. Unfortunately, once a technology has become fully incorporated into a workplace (or a society) resistance becomes more and more challenging. While Mueller evocatively captures the long history of workers resisting the introduction of new technologies, these cases show a consistent tendency for this resistance to take place most strongly at the point of the new technology’s introduction. The major challenge becomes what to do when the technology has ceased being new, and when the reliance on that technology has become so total that it becomes almost impossible to imagine turning it off.

    After all, it’s easy to say that “computers are the problem” but at this point it’s easier to imagine the end of capitalism than it is to imagine the end of computers. And besides, many of those who would be quite happy to see capitalism come to an end quite like their computerized doodads and would be distressed if they couldn’t scroll social media on the subway, stream music, go shopping at 2 a.m., play video games, have video calls with distant family, or write overly lengthy book reviews and then post them online. One of the major challenges for technological criticism today is the simple fact that the critics are also reliant on these gadgets, and many of the critics quite like some things about some of those gadgets. In this technological climate, where the idea of truly banishing certain technologies seems fantastical, feelings of dissatisfaction often wind up getting channeled in the direction of appeals to personal responsibility. As though an individual deciding that they will abstain from going on social media on the weekend will somehow be a sufficient response to social media eating the world. This is the way in which a massive social problem winds up being reduced to telling people that they really just need to turn off notifications on their phones.

    What makes Breaking Things at Work, and its definition of Luddism, vital is the way in which Mueller eschews such appeals to minor lifestyle tweaks. As Mueller makes clear the significance of the Luddites is not that they broke machines, but that they saw machines as a site of political struggle, and the thing we need to learn from them today is that machinery still must be a site of political struggle. Turning off notifications, following people with different politics, trying to spend a day a week offline—while these actions can be useful on an individual level, they are not a sufficient response to the ways that technology challenges us today. In a moment wherein so many of the proclamations from Silicon Valley are treated as though they are inevitable, Luddism functions as a powerful retort and as a useful reminder that the people most invested in the belief that you cannot resist capitalist technologies are the people who are most terrified that people might resist those technologies.

    In one of the most infamous of the surviving Luddite letters, Ned Ludd, signing as “the General of the Army of Redressers,” writes: “We will never lay down our Arms. The House of Commons passes an Act to put down all Machinery hurtful to Commonality, and repeal that to hang Frame Breakers. But We. We petition no more – that won’t do – fighting must.” These were militant words from a militant movement, but the idea that there is such a thing as “Machinery hurtful to Commonality,” and that such machinery needs to be opposed, remains clear two hundred years later.

    There is a specter haunting technological society—the specter of Luddism. And as Mueller makes clear in Breaking Things at Work, that specter is becoming more corporeal by the moment.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2o Review Digital Studies section.


    _____


  • Tamara Kneese — Our Silicon Valley, Ourselves


    a review of Anna Wiener, Uncanny Valley; Joanne McNeil, Lurking; Ellen Ullman, Life in Code; Wendy Liu, Abolish Silicon Valley; Ben Tarnoff and Moira Weigel, eds., Voices from the Valley; Mary Beth Meehan and Fred Turner, Seeing Silicon Valley

    by Tamara Kneese

    “Fuck all that. I have no theory. I’ve only got a story to tell.”
    – Elizabeth Freeman, “Without You, I’m Not Necessarily Nothing”

    ~

    Everyone’s eager to mine Silicon Valley for its hidden stories. In the past several years, women in or adjacent to the tech industry have published memoirs about their time there, ensconcing macrolevel critiques of Big Tech within intimate storytelling. Examples include Anna Wiener’s Uncanny Valley, Joanne McNeil’s Lurking, Ellen Ullman’s Life in Code, Susan Fowler’s Whistleblower, and Wendy Liu’s Abolish Silicon Valley, to name just a handful.[1] At the same time, recent edited volumes curate workers’ everyday lives in the ideological and geographical space that is Silicon Valley, seeking to expose the deep structural inequalities embedded in the tech industry and its reaches in the surrounding region. Examples of this trend include Ben Tarnoff and Moira Weigel’s Voices from the Valley and Mary Beth Meehan and Fred Turner’s Seeing Silicon Valley, along with tech journalists’ reporting on unfair labor practices and subsequent labor organizing efforts. In both cases, personal accounts of the tech industry’s effects constitute their own form of currency.

    What’s interesting about the juxtaposition of women’s first-hand accounts and collected worker interviews is how the former could fit within the much derided and feminized “personal essay” genre while the latter is more explicitly tied to the Marxist tradition of using workers’ perspectives as an organizing catalyst, i.e. through the process of empirical cataloging and self-reflection known as workers’ inquiry.[2] In this review essay, I consider these two seemingly unrelated trends in tandem. What role can personal stories play in sparking collective movements, and does presentation matter?

    *

    Memoirs of life with tech provide a glimpse of the ways that personal experiences—the good, the bad, and the ugly—are mediated by information technologies themselves as well as through their cascading effects on workplaces and social worlds. They provide an antidote to early cyberlibertarian screeds, imbued with dreams of escaping fleshly, earthly drudgery, like John Perry Barlow’s “A Declaration of the Independence of Cyberspace”: “Our identities have no bodies, so, unlike you, we cannot obtain order by physical coercion.” But in femme accounts of life in code, embodiment is inescapable. As much as the sterile efficiencies of automation would do away with the body’s messiness, the body rears its head with a vengeance. In a short post, one startup co-founder, Tracy Young, recounts attempting to neutralize her feminine-coded body with plain clothes and a stoic demeanor, persevering through pregnancy, childbirth, and painful breastfeeding, and eventually hiding her miscarriage from her colleagues. Young reveals these details to point to the need for structural changes within the tech industry, which is still male-dominated, especially in the upper rungs. But for Young, capitalism is not the problem. Tech is redeemable through DEI initiatives that might better accommodate women’s bodies and needs. On the other end of the spectrum, pregnant Amazon warehouse workers suffer miscarriages when their managers refuse to follow doctors’ recommendations and compel pregnant workers to lift heavy boxes or prevent them from taking bathroom and water breaks. These experiences lie on disparate ends of the scale, but reflect the larger problems of patriarchy and racial capitalism in tech and beyond. It is unclear if this sliver of common ground can hope to bridge such a gulf of privilege.

    Sexual harassment, workplace misogyny, pregnancy discrimination: these grievances come up again and again within femme tech memoirs, even the ones that don’t at face value seem political. At first glance, Joanne McNeil’s Lurking: How a Person Became a User is not at all about labor. Her memoir is to some extent a celebration of the early internet, at times falling into the trap of nostalgia—the pleasure of the internet being “a place,” and the greater degree of flexibility and play afforded by usernames as opposed to real names policies. “Once I spoke freely and shared my dreams with strangers. Then the real world fastened itself to my digital life…My idle youth online largely—thankfully—evaporated in the sun, but more recent-ish old posts breeze along, colliding with and confusing new image of myself that I try to construct” (McNeil 2020, 8-9). Building on earlier feminist critiques of techno-utopian libertarianism, such as Paulina Borsook’s Cyberselfish (2000), McNeil argues that the early web allowed people to be lurkers, rather than users, even if the disembodied libertarian imaginaries attached to cyberspace never panned out. With coerced participation and the alignment of actual identities with online profiles, the shift to “the user” reflects the enclosure of the web and the growth of tech corporations, monetization, and ad tech. The beauty of being a lurker was the space to work out the self in relation to communities and to bear witness to these experimental relationships. As McNeil puts it, in her discussion of Friendster, “What happened between <form> and </form> was self-portraiture” (McNeil 2020, 90). McNeil references the many early internet communities, like Echo, LatinoLink, and Café los Negroes, which helped queer, Black, and Latinx relationships flourish in connection with locally situated subcultures.

    In a brief moment, while reflecting on the New York media world built around websites like Gawker, McNeil ties platformization to her experiences as a journalist, a producer of knowledge about the tech industry: “A few years ago, when I was a contractor at a traffic-driven online magazine, I complained to a technologist friend about the pressure I was under to deliver page view above a certain threshold” (McNeil 2020, 138). McNeil, who comes from a working class background, has had in adulthood the kind of work experiences Silicon Valley tends to make invisible, including call center work and work as a receptionist. As a journalist, even as a contractor, she was expected to amass thousands of Twitter followers. Because she lacked a large following, she relied on the publication itself to promote her work. She was eventually let go from the job. “My influence, or lack thereof, impacted my livelihood” (McNeil 2020, 139). This simply stated phrase reveals how McNeil’s critique of Big Tech is ultimately not only about users’ free labor and the extraction of profit from social relationships, but about how platform metrics are making people’s jobs worse.

    Labor practices emerge in McNeil’s narrative at several other points, in reference to Google’s internal caste system and the endemic problem of sexual harassment within the industry. In a discussion of Andrew Norman Wilson’s influential Workers Leaving the Googleplex video (2011), which made clear to viewers the sharp divisions within the Google workforce, McNeil notes that Google still needs these blue-collar workers, like janitors, security guards, and cafeteria staff, even if the company has rendered them largely invisible. But what is the purpose of making these so-called hidden laborers of tech visible, and for whom are they being rendered visible in the first place?[3] If you have ever been on a tech campus, you can’t miss ‘em. They’re right fucking there! If the hierarchies within tech are now more popularly acknowledged, then what? And are McNeil’s experiences as a white-collar tech journalist at all related to these other people’s stories, which often provide the scaffolding for tech reporters’ narratives?

    *

    Other tech memoirs more concretely focus on navigating tech workplaces from a femme perspective. Long-form attention to the matter creates more space for self-reflection and recognition on the part of the reader. In 2016, Anna Wiener’s n+1 essay, “Uncanny Valley,” went viral because it hit a nerve. Wiener presented an overtly gendered story—about body anxiety and tenuous friendship—told through one woman’s time in the world of startups before the majority of the public had caught wind of the downside of digital platforms and their stranglehold on life, work, and politics. Later, Wiener would write a monograph-length version of the story with the same title, detailing her experiences as a “non-technical” woman in tech: “I’d never been in a room with so few women, so much money, and so many people chomping at the bit to get a taste” (Wiener 2020, 61). In conversation with computer science academics and engineers, her skepticism about the feasibility of self-driving cars isn’t taken seriously because she is a woman who works in customer support. Wiener describes herself as being taken in by the promises and material culture of the industry: a certain cashmere sweater and overall look, wellness tinctures, EDM, and Burning Man, even as she navigates taxicab gropings on work trips and inappropriate comments about “sensual” Jewish women at the office. Given the Protestant Work Ethic-tinged individualism of her workplace, she offers little in the way of solidarity. When her friend Noah is fired after writing a terse memo, she and the rest of the workers at the startup fail to stand up to their boss. She laments, “Maybe we never were a family. We knew we had never been a family,” questioning the common myth that corporations are like kin (Wiener 2020, 113). Near the end of her memoir, Wiener wrestles with the fact that GamerGate, and later the election of Trump, do not bring the reckoning she once thought was coming. The tech industry continues on as before.

    Wiener is in many respects reminiscent of another erudite, Jewish, New York City-to-San Francisco transplant, Ellen Ullman. Ullman published an account of her life as a woman programmer, Close to the Machine: Technophilia and Its Discontents, in 1997, amid the dotcom boom, when tech criticism was less fashionable. Ullman writes about “tantric, algorithmic” (1997, 49) sex with a fellow programmer and the erotics of coding itself, flirting with the romance novel genre. She critiques the sexism and user-disregard in tech (she is building a system for AIDS patients and their providers, but the programmers are rarely confronted with the fleshly existence of their end-users). Her background as a communist, along with her guilt about her awkward class position as an owner and landlord of a building in the Wall Street district, also comes through in the memoir: At one point, she quips “And who was Karl Marx but the original technophile?” (Ullman 1997, 29). Ullman presciently sees remote, contracted tech workers, including globally situated call center workers, as canaries in the coal mine. As she puts it, “In this sense, we virtual workers are everyone’s future. We wander from job to job, and now it’s hard for anyone to stay put anymore. Our job commitments are contractual, contingent, impermanent, and this model of insecure life is spreading outward from us” (Ullman 1997, 146). Even for a privileged techie like Ullman, the supposedly hidden global underclass of tech was not so hidden after all.

    Ullman’s Life in Code: A Personal History of Technology, a collection of essays published twenty years later in 2017, reflects a growing desire to view the world of startups, major tech companies, and life in the Bay Area through the lens of women’s unique experiences. A 1998 essay included in Life in Code reveals Ullman’s distrust of what the internet might become: “I fear for the world the internet is creating. Before the advent of the Web, if you wanted to sustain a belief in far-fetched ideas, you had to go out into the desert, or live on a compound in the mountains, or move from one badly furnished room to another in a series of safe houses” (Ullman 2017, 89). Ullman at various points refers to the toxic dynamics of technoculture, the way that engineers make offhand sexist, racist remarks during their workplace interactions. In other words, critics like Ullman had been around for decades, but her voice, and voices like hers, carried more weight in 2017 than in 1997. Following in Ullman’s footsteps, Wiener’s contribution came at just the right time.

    I appreciate Sharrona Pearl’s excellent review of Wiener’s Uncanny Valley in this publication, and her critique of the book’s political intentions (or lack thereof) and privileged perspective. When it comes to accounts of the self as political forces, Emma Goldman’s Living My Life it is not. But some larger questions remain: why did so many readers find Wiener’s personal narrative compelling, and how might we relate its popularity to a larger cultural shift in how stories about technology are told?

    Another woman’s memoir of a life in tech offers one possible answer. Wendy Liu started as a computer science major at a prestigious university, worked as a Google intern, and co-founded a startup, not an uncommon trajectory for a particular class of tech worker. Her candid memoir of her transformation from tech evangelist to socialist tech critic, Abolish Silicon Valley, references Wiener’s “Uncanny Valley” essay. Wiener’s account resonated with Liu, even as a software engineer who viewed herself as separate from the non-technical women around her— the marketers, program managers, and technical writers. Liu is open about the ways that ideologies around meritocracy and individual success color her trajectory: she viewed Gamergate as an opportunity to test out her company’s tech capabilities and idolized men like Elon Musk and Paul Graham. Hard work always pays off and working 80 hours a week is a means to an end. Sometimes you have to dance with the devil: for example, Liu’s startup at one point considers working for the Republican Party. Despite her seeming belief in the tech industry’s alignment with the social good, Liu has doubts. When Liu first encounters Wiener’s essay, she wryly notes that she thought n+1 might be a tech magazine, given its math-y name. Once she reads it, “The words cut like a knife through my gradually waning hopes, and I wanted to sink into an ocean of this writing” (Liu 2020, 111). Liu goes on to read hundreds of leftist books and undergo a political awakening in London. While Wiener’s memoir is intensely personal, not overtly about a collective politics, it still ignites something in Liu’s consciousness, becoming enfolded into her own account of her disillusionment with the tech industry and capitalism as a whole. Liu also refers to Tech Against Trump, published by Logic Magazine in 2017, which featured “stories from fellow tech workers who were startled into caring about politics because of Trump” (Liu 2020, 150). 
Liu was not alone in her awakening, and it was first-hand accounts by fellow tech workers that got her and many others to question their relationship to the system.

    Indeed, before Liu published her abolitionist memoir, she published a short essay for a UK-based Marxist publication, Notes from Below, titled “Silicon Inquiry,” applying the time-honored Marxist practice of workers’ inquiry to her own experiences as a white-collar coder. She writes, “I’ve lost my faith in the industry, and with it, any desire to remain within it. All the perks in the world can’t make up for what tech has become: morally destitute, mired in egotism and self-delusion, an aborted promise of what it could have been. Now that I realise this, I can’t go back.” She describes her trajectory from 12-year-old tinkerer, to computer science major, to Google intern, where she begins to sense that something is wrong and unfulfilling about her work: “In Marxist terms, I was alienated from my labour: forced to think about a problem I didn’t personally have a stake in, in a very typically corporate environment that drained all the motivation out of me.” When she turns away from Google to enter the world of startups, she is trapped by the ideology of faking it until you make it. She and her co-founders work long hours, technically for themselves, but without achieving anything tangible. Liu begins to notice the marginalized workers who comprise a major part of the tech industry, not only ride-hail drivers and delivery workers, but the cafeteria staff and janitors who work on tech campuses. The bifurcated workforce makes it difficult for workers to organize; the ones at the top are loyal to management, while those at the bottom of the hierarchy are afraid of losing their jobs if they speak out.

    Towards the end of her memoir, Liu describes joining a picket line of largely Chinese-American women who are cleaners for Marriott Hotels. This action is happening at the same time as the 2018 Google Walkout, during which white-collar tech workers organized against sexual harassment and subsequent retaliation at the company. Liu draws a connection between both kinds of workers, protesting in the same general place: “On the surface, you would think Google engineers and Marriott hotel cleaners couldn’t be more different. And yet, one key component of the hotel workers’ union dispute was the prevalence of sexual harassment in the workplace…The specifics might be different, but the same underlying problems existed at both companies” (Liu 2020, 158). She sees that TVCs (temps, vendors, and contractors) share grievances with their full-time counterparts, especially when it comes to issues over visas, sexual harassment, and entrenched racism. The trick for organizers is to inspire a sense of solidarity and connection among workers who, on the surface, have little in common. Liu explicitly connects the experiences of more white-collar tech workers like herself and marginalized workers within the tech industry and beyond. Her memoir is not merely a personal reflection, but a call to action: individual refusal, like deleting Facebook or Uber, is not sufficient, and transforming the tech industry is necessarily a collective endeavor. Her abolitionist memoir connects tech journalism’s use of workplace grievances and a first-hand account from the coder class, finding common ground in the hopes of sparking structural change. Memoirs like these may act as a kind of connective tissue, bridging disparate experiences of life in and through technology.

    *

    Another approach to personal accounts of tech takes a different tack: Rather than one long-form, first-hand account, cobble together many perspectives to get a sense of contrasts and potential spaces of overlap. Collections of workers’ perspectives have a long leftist history. For decades, anarchists, socialists, and other social reformers have gathered oral histories and published these personal accounts as part of a larger political project (see: Avrich 1995; Buhle and Kelley 1989; Kaplan and Shapiro 1998; Lynd and Lynd 1973). Two new edited collections focus on aggregated workers’ stories to highlight the diversity of people who live and work in Silicon Valley, from Iranian-American Google engineers to Mexican-American food truck owners. The concept of “Silicon Valley,” like “tech industry,” tends to obscure the lived experiences of ordinary individuals, reflecting more of a fantasy than a real place.

    Mary Beth Meehan and Fred Turner’s Seeing Silicon Valley follows the leftist photography tradition (think Lewis Hine or Dorothea Lange) of capturing working-class people in their everyday struggles. Based on a six-week Airbnb stay in the area, Meehan’s images are arresting, spotlighting the disparity within Santa Clara Valley through a humanistic lens, while Turner’s historically informed introduction and short essays provide a narrative through which to read the images. Silicon Valley is “a mirror of America itself. In that sense, it really is a city on a hill for our time” (Meehan and Turner 2021, 8). Through their presentation of life and work in Silicon Valley, Turner and Meehan push back against stereotypical, ahistorical visions of what Silicon Valley is. As Turner puts it, “The workers of Silicon Valley rarely look like the men idealized in its lore” (Meehan and Turner 2021, 7). Turner’s introduction critiques the rampant economic and racial inequality that exists in the Valley, and the United States as a whole, which bears out in the later vignettes. Unhoused people, some of whom work for major tech companies in Mountain View, live in vans despite having degrees from Stanford. People are living with the repercussions of Superfund sites, hazardous jobs, and displacement. Several interviewees reference union campaigns, such as organizing around workplace injuries at the Tesla plant or contract security guards unionizing at Facebook, and their stories are accompanied by images of Silicon Valley Rising protest signs from an action in San Jose. Aside from an occasional direct quote, the narratives about the workers are truncated and editorialized. As the title would indicate, the book is above all a visual representation of life in Silicon Valley as a window into contemporary life in the US. Saturated colors and glossy pages make for a perfect coffee table object, and one can imagine the images and text at home in a gallery space. To some degree, it is a stealth operation, and the book’s aesthetic qualities belie the sometimes difficult stories contained within, but the book’s intended audience is more academic than revolutionary. Who at this point doesn’t believe that there are poor people in “Silicon Valley,” or that “tech labor” obscures what is more often than not racialized, gendered, embodied, and precarious forms of work?

    A second volume takes a different approach, focusing instead on the stories of individual tech workers. Ben Tarnoff and Moira Weigel, co-founders of Logic Magazine, co-edited Voices from the Valley as part of their larger Logic brand’s partnership series with FSG Originals. The sharply packaged volume includes anonymous accounts from venture capitalist bros as well as from subcontracted massage workers, rendering visible the “people behind the platform” in a secretive industry full of NDAs (Tarnoff and Weigel 2020, 3). As the book’s title suggests, the interviews are edited back-and-forths with a wide range of workers within the industry, emphasizing their unique perspectives. The subtitle promises “Tech Workers Talk About What They Do—And How They Do It.” This is a clear nod to Studs Terkel’s 1974 epic collection of over one hundred workers’ stories, Working: People Talk About What They Do All Day and How They Feel About What They Do, in which he similarly categorizes them according to job description, from gravedigger to flight attendant. Terkel frames each interview and provides a description of their living conditions or other personal details, but for the most part, the workers speak on their own terms. In Tarnoff and Weigel’s contribution, we as readers hear from workers directly, although we do catch a glimpse of the interview prompts that drove the conversations. The editors also provide short essays introducing each “voice,” contextualizing their position. Workers’ voices are there, to be sure, but they are also trimmed to match Logic’s aesthetic. Reviews of the book, even in leftist magazines like Jacobin, tend to focus as much on the (admittedly formidable) husband and wife editor duo as they do on the stories of the workers themselves. Even so, Tarnoff and Weigel emphasize the political salience of their project in their introduction, arguing that “Silicon Valley is now everywhere” (2020, 7) as “tech is a layer of every industry” (2020, 8). 
They end their introduction with a call to the reader to “Speak, whoever you are. Your voice is in the Valley, too” (Tarnoff and Weigel 2020, 8).

    As in Meehan and Turner’s visually oriented book, Tarnoff and Weigel’s interviews point to the ways that badge color as a class marker, along with gender, immigration status, disability, and race, affects people’s experiences on the job. Much like Meehan and Turner’s intervention, the book gives equal space to the most elite voices as it does to those on the margins, spanning the entire breadth of the tech industry. There are scattered examples of activism, like white-collar organizing campaigns against Google’s Dragonfly and other #TechWontBuildIt manifestations. At one point, the individual known as “The Cook” names the Tech Workers Coalition (TWC). TWC volunteers were “computer techie hacker cool” and showed up to meetings or even union negotiations in solidarity with their subcontracted coworkers. The Cook notes that TWC thinks “everybody working for a tech company should be part of that company, in one sense or another” (Tarnoff and Weigel 2020, 68). There is an asterisk with a shorthand description of TWC, which has become something of a floating signifier of the tech workers’ movement. The international tech workers labor movement encompasses not only white-collar coders, but gig and warehouse workers, who are absent here. With only seven interviews included, the volume cannot address every perspective. Because the interviews with workers are abbreviated and punctuated by punchy subheadings, it can be hard to tell whose voices are really being heard. Is it the workers of Silicon Valley, or is it the editors? As with Meehan and Turner’s effort, the end result is largely a view from above, not within. Which isn’t to say there isn’t a place for this kind of aggregation, or that it can’t connect to organizing efforts, but is this volume more of a political work than Wiener’s or Ullman’s memoirs?

    In other interviews, workers reveal gendered workplace discrimination and other grievances that might prompt collective action. The person identified as “The Technical Writer” describes being terminated from her job after her boss suspects her pregnancy. (He eliminates the position instead of directly firing her, making it harder for her to prove pregnancy discrimination). She decides not to pursue a lawsuit because, as she puts it, “Tech is actually kind of a small industry. You don’t want to be the woman who’s not easy to work with” (Tarnoff and Weigel 2020, 46). After being terminated, she finds work as a remote contractor, which allows her to earn an income while caring for her newborn and other young child. She describes the systemic misogyny in tech that leads to women in non-technical roles being seen as less valuable and maternity leave factoring into women’s lower salaries. But she laments the way that tech journalism tends to portray women as the objects, not the subjects of stories, turning them into victims and focusing narratives on bad actors like James Damore, who penned the infamous Google memo against diversity in tech. Sensationalized stories of harassment and discrimination are meant to tug at the heartstrings, but workers’ agency is often missing in these narratives. In another striking interview, “The Massage Therapist,” who is a subcontracted worker within a large tech campus environment, says that despite beleaguered cafeteria workers needing massages more than coders, she was prohibited from treating anyone who wasn’t a full-time employee. The young women working there seemed sad and too stressed to make time for their massages.

    These personal but minor insights are often missing from popular narratives or journalistic accounts and so their value is readily apparent. The question then becomes, how do both personal memoirs and these shorter, aggregated collections of stories translate into changing collective class consciousness? What happens after the hidden stories of Silicon Valley are revealed? Is an awareness of mutual fuckedness enough to form a coalition?[4]

    *

    A first step might be to recognize the political power of the personal essay or memoir, rather than discounting the genre as a whole. Critiques of the personal essay are certainly not new; Virginia Woolf herself decried the genre’s “unclothed egoism.” Writing for The New Yorker in 2017, Jia Tolentino marked the death of the personal essay. For a time, the personal essay was everywhere: sites like The Awl, Jezebel, The Hairpin, and The Toast centered women’s stories of body horror, sex, work, pain, adversity, and, sometimes, rape. In an instant, the personal essay was apparently over, just as white supremacy and misogyny seemed to be resurgent. With the rise of Trumpism and the related techlash, personal stories were replaced with more concretely political takes. Personal essays are despised largely because they are written by and for women. Tolentino traces some of the anti-personal essay discourse to Emily Gould’s big personal reveal in The New York Times Magazine, foregrounding her perspective as a woman on the internet in the age of Gawker. In a 2020 essay in The Cut revisiting her Gawker shame and fame, Gould writes, “What the job did have, and what made me blind to everything it didn’t, was exposure. Every person who read the site knew my name, and in 2007, that was a lot of people. They emailed me and chatted with me and commented at me. Overnight, I had thousands of new friends and enemies, and at first that felt exhilarating, like being at a party all the time.” Gould describes her humiliation when a video of her fellating a plastic dildo at work goes viral on YouTube, likely uploaded by her boss, Nick Denton. After watching the infamous 2016 Presidential Debate, when Donald Trump creepily hovered behind Hillary Clinton, Gould’s body registers recognition, prompting a visit to her gynecologist, who tells her that her body is responding to past trauma:

    I once believed that the truth would set us free — specifically, that women’s first-person writing would “create more truth” around itself. This is what I believed when I published my first book, a memoir. And I must have still believed it when I began publishing other women’s books, too. I believed that I would become free from shame by normalizing what happened to me, by naming it and encouraging others to name it too. How, then, to explain why, at the exact same moment when first-person art by women is more culturally ascendant and embraced than it has ever been in my lifetime, the most rapacious, damaging forms of structural sexism are also on the rise?

    Gould has understandably lost her faith that women’s stories, no matter how much attention they receive, can overturn structural sexism. But what if the personal essay is, in fact, a site of praxis? Wiener, McNeil, Liu, and Ullman’s contributions are, to various extents, political works because they highlight experiences that are so often missing from mainstream tech narratives. Their power derives from their long-form personal accounts, which touch not only on work but on relationships, family, personal histories. Just as much as the more overtly political edited volumes or oral histories, individual perspectives also align with the Marxist practice of workers’ inquiry. Liu’s memoir, in particular, brings this connection to light. What stories are seen as true workers’ inquiry, part of leftist praxis, and which are deemed too personal, or too femme, to be truly political? When it comes to gathering and publishing workers’ stories, who is doing the collecting and for what purpose? As theorists like Nancy Fraser (2013) caution, too often feminist storytelling under the guise of empowerment, even in cases like the Google Walkout, can be enfolded back into neoliberalism. For instance, the cries of “This is what Googley looks like!” heard during the protest reinforced the company’s hallmark metric of belonging even as it reinterpreted it.

    As Asad Haider and Salar Mohandesi note in their detailed history of workers’ inquiry for Viewpoint Magazine, Marx’s original vision for workers’ inquiry was never quite executed. His was a very empirical project, involving 101 questions about shop conditions, descriptions of fellow workers, and strikes or other organizing activities. Marx’s point was that organizers must look to the working class itself to change their own working conditions. Workers’ inquiry is a process of recognition, whereby reading someone else’s account of their grievances leads to a kind of mutual understanding. Over time and in different geographic contexts, from France and Italy to the United States, workers’ inquiry has entailed different approaches and end goals. Beyond the industrial factory worker, Black feminist socialists like Selma James gathered women’s experiences: “A Woman’s Place discussed the role of housework, the value of reproductive labor, and the organizations autonomously invented by women in the course of their struggle.” The politics of attribution were tricky, and there were often tensions between academic research and political action. James published her account under a pen name. At other times, multi-authored and co-edited works were portrayed as one person’s memoir. But the point was to take the singular experience and to have it extend outward into the collective. As Haider and Mohandesi put it,

    If, however, the objective is to build class consciousness, then the distortions of the narrative form are not problems at all. They might actually be quite necessary. With these narratives, the tension in Marx’s workers’ inquiry – between a research tool on the one hand, and a form of agitation on the other – is largely resolved by subordinating the former to the latter, transforming inquiry into a means to the end of consciousness-building.

    The personal has always been political. Few would argue that Audre Lorde’s deeply personal Cancer Journals is not also a political work. And Peter Kropotkin’s memoir accounting for his revolutionary life begins with his memory of his mother’s death. The consciousness raising and knowledge-sharing of 1970s feminist projects like Our Bodies, Ourselves, the queer liberation movement, disability activism, and the Black Power movement related individual experiences to broader social justice struggles. Oral histories accounting for the individual lives of ethnic minority leftists in the US, like Paul Avrich’s Anarchist Voices, Judy Kaplan and Linn Shapiro’s Red Diapers, and Michael Keith Honey’s Black Workers Remember, perform a similar kind of work. If Voices from the Valley and Seeing Silicon Valley are potentially valuable as political tools, then first-person accounts of life in tech should be seen as another fist in the same fight. There is an undeniable power attached to hearing workers’ stories in their own words, and movements can emerge from the unlikeliest sources.

    EDIT (8/6/2021): a sentence was added to correctly describe Joanne McNeil’s background and work history.
    _____

    Tamara Kneese is an Assistant Professor of Media Studies and Director of Gender and Sexualities Studies at the University of San Francisco. Her first book on digital death care practices, Death Glitch, is forthcoming with Yale University Press. She is also the co-editor of The New Death (forthcoming Spring 2022, School for Advanced Research/University of New Mexico Press).

    Back to the essay

    _____

    Notes

    [1] I would include Kate Losse’s early, biting critique The Boy Kings, published in 2012, in this category. Losse was Facebook employee #51 and exposed the ways that nontechnical women, even those with PhDs, were marginalized by Zuckerberg and others in the company.

    [2] Workers’ inquiry combines research with organizing, constituting a process by which workers themselves produce knowledge about their own circumstances and use that knowledge as part of their labor organizing.

    [3] Noopur Raval (2021) questions the “invisibility” narratives within popular tech criticism, including Voices from the Valley and Seeing Silicon Valley, arguing that ghost laborers are not so ghostly to those living in the Global South.

    [4] With apologies to Fred Moten. See The Undercommons (2013).
    _____

    Works Cited

    • Paul Avrich. Anarchist Voices: An Oral History of Anarchism in the United States. Princeton, NJ: Princeton University Press, 1995.
    • Paulina Borsook. Cyberselfish: A Critical Romp Through the Terribly Libertarian Culture of High Tech. New York: Public Affairs, 2000.
    • Paul Buhle and Robin D. G. Kelley. “The Oral History of the Left in the United States: A Survey and Interpretation.” The Journal of American History 76, no. 2 (1989): 537-50. doi:10.2307/1907991.
    • Susan Fowler. Whistleblower: My Journey to Silicon Valley and Fight for Justice at Uber. New York: Penguin Books, 2020.
    • Nancy Fraser. Fortunes of Feminism: From State-Managed Capitalism to Neoliberal Crisis. New York: Verso, 2013.
    • Emma Goldman. Living My Life. New York: Alfred A. Knopf, 1931.
    • Emily Gould. “Exposed.” The New York Times Magazine, May 25, 2008, https://www.nytimes.com/2008/05/25/magazine/25internet-t.html.
    • Emily Gould. “Replaying My Shame.” The Cut, February 26, 2020. https://www.thecut.com/2020/02/emily-gould-gawker-shame.html
    • Asad Haider and Salar Mohandesi. “Workers’ Inquiry: A Genealogy.” Viewpoint Magazine, September 27, 2013, https://viewpointmag.com/2013/09/27/workers-inquiry-a-genealogy/.
    • Michael Keith Honey. Black Workers Remember: An Oral History of Segregation, Unionism, and the Freedom Struggle. Oakland: University of California Press, 2002.
    • Judy Kaplan and Linn Shapiro. Red Diapers: Growing Up in the Communist Left. Champaign, IL: University of Illinois Press, 1998.
    • Peter Kropotkin. Memoirs of a Revolutionist. Boston: Houghton Mifflin, 1899.
    • Wendy Liu. Abolish Silicon Valley: How to Liberate Technology from Capitalism. London: Repeater Books, 2020.
    • Wendy Liu. “Silicon Inquiry.” Notes From Below, January 29, 2018, https://notesfrombelow.org/article/silicon-inquiry.
    • Audre Lorde. The Cancer Journals. San Francisco: Aunt Lute Books, 1980.
    • Katherine Losse. The Boy Kings: A Journey Into the Heart of the Social Network. New York: Simon & Schuster, 2012.
    • Alice Lynd and Staughton Lynd. Rank and File: Personal Histories by Working-Class Organizers. New York: Monthly Review Press, 1973.
    • Joanne McNeil. Lurking: How a Person Became a User. New York: MCD/Farrar, Straus and Giroux, 2020.
    • Mary Beth Meehan and Fred Turner. Seeing Silicon Valley: Life Inside a Fraying America. Chicago: University of Chicago Press, 2021.
    • Fred Moten and Stefano Harney. The Undercommons: Fugitive Planning & Black Study. New York: Minor Compositions, 2013.
    • Noopur Raval. “Interrupting Invisibility in a Global World.” ACM Interactions. July/August, 2021, https://interactions.acm.org/archive/view/july-august-2021/interrupting-invisibility-in-a-global-world.
    • Ben Tarnoff and Moira Weigel. Voices from the Valley: Tech Workers Talk about What They Do—and How They Do It. New York: FSG Originals x Logic, 2020.
    • Studs Terkel. Working: People Talk About What They Do All Day and How They Feel About What They Do. New York: Pantheon Books, 1974.
    • Jia Tolentino. “The Personal-Essay Boom is Over.” The New Yorker, May 18, 2017, https://www.newyorker.com/culture/jia-tolentino/the-personal-essay-boom-is-over.
    • Ellen Ullman. Close to the Machine: Technophilia and Its Discontents.  New York: Picador/Farrar, Straus and Giroux, 1997.
    • Ellen Ullman. Life in Code: A Personal History of Technology. New York: MCD/Farrar, Straus and Giroux, 2017.
    • Anna Wiener. “Uncanny Valley.” n+1, Spring 2016: Slow Burn, https://nplusonemag.com/issue-25/on-the-fringe/uncanny-valley/.
    • Anna Wiener. Uncanny Valley: A Memoir. New York: MCD/Farrar, Straus and Giroux, 2020.
  • Sharrona Pearl — In the Shadow of the Valley (Review of Anna Wiener, Uncanny Valley)

    Sharrona Pearl — In the Shadow of the Valley (Review of Anna Wiener, Uncanny Valley)

    a review of Anna Wiener, Uncanny Valley: A Memoir (Macmillan, 2020)

    by Sharrona Pearl

    ~

    Uncanny Valley, the latest, very well-publicized memoir of Silicon Valley apostasy, is, for sure, a great read.  Anna Wiener writes beautiful words that become sentences that become beautiful paragraphs and beautiful chapters.  The descriptions are finely wrought, and if not quite cinematic then very, very visceral.  While it is a wry and tense and sometimes stressful story, it’s also exactly what it says it is: a memoir.  It’s the story of her experiences.  It captures a zeitgeist – beautifully, and with nuance and verve and life. It highlights contradictions and complications and confusions: hers, but also of Silicon Valley culture itself.  It muses upon them, and worries them, and worries over them.  But it doesn’t analyze them and it certainly doesn’t solve them, even if you get the sense that Wiener would quite like to do so.  That’s okay.  Solving the problems exposed by Silicon Valley tech culture and tech capitalism is quite a big ask.

    Wiener’s memoir tells the story of her accidental immersion into, and gradual (too gradual?) estrangement from, essentially, Big Tech.  A newly minted graduate from a prestigious small liberal arts college (of course), Wiener was living in Brooklyn (of course) while working as an underpaid assistant in a small literary agency (of course).  “Privileged and downwardly mobile,” as she puts it, Wiener was just about getting by with some extra help from her parents, embracing being perpetually broke as she party-hopped and engaged in some light drug use while rolling her eyes at all the IKEA furniture.  In as clear a portrait of Brooklyn as anything could be, Wiener’s friends spent 2013 making sourdough bread near artisan chocolate shops while talking on their ironic flip phones.  World-weary at 24, Wiener decides to shake things up and applies for a job at a Manhattan-based ebook startup.  It’s still about books, she rationalizes, so the startup part is almost beside the point.  Or maybe, because it’s still about books, the tech itself can be used for good.  Of course, neither of these things turns out to be true for this startup or for tech itself.  Wiener quickly discovers (and so do her bosses) that she’s just not the right fit.  So she applies for another tech job instead.  This time in the Bay Area.  Why not?  She’d gotten a heady dose of the optimism and opportunity of startup culture, and they offered her a great salary.  It was a good decision, a smart and responsible and exciting decision, even as she was sad to leave the books behind.  But honestly, she’d done that the second she joined the first startup.  And in a way, the entire memoir is Wiener figuring that out.

    Maybe Wiener’s privilege (alongside generational resources and whiteness) is living in a world where you don’t have to worry about Silicon Valley even as it permeates everything.  She and her friends were being willfully ignorant in Brooklyn; it turns out, as Wiener deftly shows us, you can be willfully ignorant from the heart of Silicon Valley too.  Wiener lands a job at one startup and then, at some point, takes a pay cut to work at another whose culture is a better fit.  “Culture” does a lot of work here to elide sexism, harassment, surveillance, and violation of privacy.  To put it another way: bad stuff is going on around Wiener, at the very companies she works for, and she doesn’t really notice or pay attention…so we shouldn’t either.  Even though she narrates these numerous and terrible violations clearly and explicitly, we don’t exactly clock them because they aren’t a surprise.  We already knew.  We don’t care.  Or we already did the caring part and we’ve moved on.

    If 2013 feels both too early and too late for sourdough (weren’t people making bread in the 1950s because they had to?  And in 2020 because of COVID?) that’s a bit like the book itself.  Surely the moment for Silicon Valley Seduction and Cessation was the early 2000s?  And surely our disillusionment from the surveillance of Big Tech and the loss of privacy didn’t happen until after 2016? (Well, if you pay attention to the timeline in the book, that’s when it happened for Wiener too).  I was there for the bubble in the early aughts.  How could anyone not know what to expect?  Which isn’t to say that this memoir isn’t a gripping and illustrative mise-en-scène.  It’s just that in the era of Coded Bias and Virginia Eubanks and Safiya Noble and Meredith Broussard and Ruha Benjamin and Shoshana Zuboff… didn’t we already know that Big Tech was Bad?  When Wiener has her big reveal in learning from her partner Noah that “we worked in a surveillance company,” it’s more like: well, duh.  (Does it count as whistleblowing if it isn’t a secret?)

    But maybe that wasn’t actually the big reveal of the book.  Maybe the point was that Wiener did already know; she just didn’t quite realize how seductive power is, how pervasive and all-encompassing a culture can be, and how little easy distinctions between good and bad do for us in the totalizing world of tech.  She wants to break that all down for us.  The memoir is kind of Tech Tales for Lit Critics, which is distinct from Tech for Dummies ™ because maybe the critics are the smart ones in the end.  The story is for “us”: Wiener’s tribe of smart and idealistic and disaffected humanists.  (Truly us, right dear readers?)  She makes it clear that even as she works alongside and with an army of engineers, there is always an us and them.  (Maybe partly because really, she works for the engineers, and no matter what the company says everyone knows what the hierarchy is.)  The “us” are the skeptics and the “them” are the cult believers, except that, as her weird affectation of never naming any tech firms (“an online superstore; a ride-hailing app; a home-sharing platform; the social network everyone loves to hate”) makes clear, we are all in the cult in some way, even if we (“we”) – in Wiener’s Brooklyn tribe forever no matter where we live – half-heartedly protest. (For context: I’m not on Facebook and I don’t own a cell phone but PLEASE follow me on twitter @sharronapearl).

    Wiener uses this “NDA language” throughout the memoir.  At first it’s endearing – imagine a world in which we aren’t constantly name-checking Amazon and AirBnB.  Then it’s addicting – when I was grocery shopping I began to think of my local Sprouts as “a West-Coast transplant fresh produce store.”  Finally, it’s annoying – just say Uber, for heaven’s sake!  But maybe there’s a method to it: these labels make the ubiquity of these platforms all the more clear, and force us to confront just how very integrated into our lives they all are.  We are no different from Wiener; we all benefit from surveillance.

    Sometimes the memoir feels a bit like stunt journalism, the tech take on The Year of Living Biblically or Running the Books.  There’s a sense from the outset that Wiener is thinking “I’ll take the job, and if I hate it I can always write about it.”  And indeed she did, and indeed she does, now working as the tech and start-up correspondent for The New Yorker.  (Read her articles: they’re terrific.)  But that’s not at all a bad thing: she tells her story well, with self-awareness and liveliness and a lot of patience in her sometimes ironic and snarky tone.  It’s exactly what we imagine it to be when we see how the sausage is made: a little gross, a lot upsetting, and still really quite interesting.

    If Wiener feels a bit old before her time (she’s in her mid-twenties during her time in tech, and constantly lamenting how much younger all her bosses are) it’s both a function of Silicon Valley culture and its veneration of young male cowboys, and her own affectations.  Is any Brooklyn millennial ever really young?  Only when it’s too late.  As a non-engineer and a woman, Wiener is quite clear that for Silicon Valley, her time has passed.  Here is when she is at her most relatable in some ways: we have all been outsiders, and certainly many of us would be in that setting.  At the same time, at 44 with three kids, I feel a bit like telling this sweet summer child to take her time.  And that much more will happen to her than already has.  Is that condescending?  The tone brings it out in me.  And maybe I’m also a little jealous: I could do with having made a lot of money in my 20s on the road to disillusionment with power and sexism and privilege and surveillance.  It’s better – maybe – than going down that road without making a lot of money and getting to live in San Francisco.  If, in the end, I’m not quite sure what the point of her big questions is, it’s still a hell of a good story.  I’m waiting for the movie version on “the streaming app that produces original content and doesn’t release its data.”

    _____

    Sharrona Pearl (@SharronaPearl) is a historian and theorist of the body and face.  She has written many articles and two monographs: About Faces: Physiognomy in Nineteenth-Century Britain (Harvard University Press, 2010) and Face/On: Face Transplants and the Ethics of the Other (University of Chicago Press, 2017). She is Associate Professor of Medical Ethics at Drexel University.

    Back to the essay

  • Richard Hill —  In Everything, Freedom for Whom? (Review of Laura DeNardis, The Internet in Everything: Freedom and Security in a World with No Off Switch)

    Richard Hill — In Everything, Freedom for Whom? (Review of Laura DeNardis, The Internet in Everything: Freedom and Security in a World with No Off Switch)

    a review of Laura DeNardis, The Internet in Everything: Freedom and Security in a World with No Off Switch (Yale University Press, 2020)

    by Richard Hill

    ~

    This highly readable book by a respected mainstream scholar (DeNardis is a well-known Internet governance scholar; she is a professor in the School of Communication at American University and the author of The Global War for Internet Governance and other books) documents and confirms what a portion of civil society has been saying for some time: use of the Internet has become pervasive, and it is so deeply embedded in so many business and private processes that it can no longer be treated as a neutral technology whose governance is delegated to private companies, especially not when the companies in question have dominant market power.

    As the author puts the matter (3): “The Internet is no longer merely a communications system connecting people and information. It is a control system connecting vehicles, wearable devices, home appliances, drones, medical equipment, currency, and every conceivable industry sector. Cyberspace now completely and often imperceptibly permeates offline spaces, blurring boundaries between material and virtual worlds. This transformation of the Internet from a communication network between people to a control network embedded directly into the physical world may be even more consequential than the shift from an industrial society to a digital information society.”

    The stakes of the Internet of Things (IoT) (which a respected technologist has referred to as the Internet of Trash) are high; as the author states (4): “The stakes of cybersecurity rise as Internet outages are no longer about losing access to communication and content but about losing day-to-day functioning in the real world, from the ability to drive a car to accessing medical care. Internet-connected objects bring privacy concerns into intimate spheres of human existence far beyond the already invasive data-gathering practices of Facebook, Google, and other content intermediaries”

    The author explains clearly, in non-technical language, key technological aspects (such as security) that are matters of concern. Because, citing Janet Abbate (132): “technical decisions can have far-reaching economic and social consequences, altering the balance of power between competing businesses or nations and constraining the freedom of users.” Standardization can have very significant effects. Yet (147): “In practice, the individuals involved in standards setting have been affiliated with corporations with a stake in the outcome of deliberations. Participation, while open, requires technical expertise and, often, funding to meaningfully engage.”

    The author also explains why it is inevitable that states will take an increasing interest in the governance of the Internet (7): “Technology policy must, in the contemporary context, anticipate and address future questions of accountability, risk, and who is responsible for outages, security updates, and reliability.”

    Although the book does not explicitly mention it (but there is an implicit reference at (216)), this is not surprising in light of the historical interest of states and empires in communications, the way in which policies of the United States regarding the Internet have favored its geo-economic and geo-political goals, in particular the interests of its large private companies that dominate the information and communications technology (ICT) sector worldwide, and the way in which United States has deliberately used a human rights discourse to promote policies that further those geo-economic and geo-political interests.

    As the author puts the matter (182), echoing others: “Powerful forces have an interest in keeping conceptions of freedom rooted in the free flow of content. It preserves revenue structures of private ordering and fuels the surveillance state.” However, “The free flow of information rests on a system of private surveillance capitalism in which possibilities for individual privacy are becoming increasingly tenuous. Governments then co-opt this infrastructure and associated data to enact surveillance and exert power over citizens. Tensions between openness and enclosure are high, with private companies increasingly using proprietary technologies, rather than those based on open standards, for anticompetitive means. Trade-secrecy-protected, and therefore invisible, algorithms make decisions that have direct effects on human freedom. Governments increasingly tamper with global infrastructure – such as local DNS redirection – for censorship.”  In this context, see also this excellent discussion of the dangerous consequences of the current dominance by a handful of companies.

    One wonders whether the situation might have been better if there had been greater government involvement all along. For example, as the author correctly notes (157): “A significant problem of Internet governance is the infinite-regress question of how to certify the authority that in turn certifies an online site.” In the original X.509 concept, there was no infinite-regress: the ultimate certification authority would have been an entity controlled by, or at least licensed by, a national government.

    The book focuses on IoT and the public interest, taking to task Internet governance systems and norms. Those who are not yet familiar with the issues, and their root causes, will be able to understand them and how to deal with them. As the book well explains, policymakers are not yet adequately addressing IoT issues; instead, there is a focus on “content” and social media governance issues rather than the emerging, possibly existential, consequences of the forthcoming IoT disruption. While many experts in Internet matters will find much familiar material, even they will benefit from the author’s novel approach.

    The author has addressed many issues in her numerous articles and books, mostly relating to infrastructure and the layers below content, as does this valuable book. However, in my view, the most important emerging issue of Internet governance is the economic value of data and its distribution (see for example the Annex of this submission and here, here and here.) Hopefully the author will tackle those subjects in the future.

    The author approvingly notes that Morozov has criticized (181) “two approaches: cyber-utopian views that the Internet can vanquish authoritarianism, and Internet-centrism that pushes technological solutions without regard to context.” She correctly notes (183) that “The goal of restoring, or preserving, a free and open Internet (backward-looking idealization) should be replaced with the objective of progressively moving closer to freedom (forward-looking).” While the book does explain (Chapter 6) that “free and open Internet” has been used as an agenda to further certain political and economic interests, I would have welcomed a more robust criticism of how that past idealization got us into the dangerous predicament that the book so well describes. The author asks (115): “A critical question is what provides the legitimacy for this privatization of governance”. I would reply “nothing, look at the mess, which is so well described in the book.”

    For example, the author posits (92): “Many chapters of Internet innovation have proceeded well without heavy regulatory constraints.” This is certainly true if “well” is intended to mean “have grown fast”; however, as the book well documents, it is not true if “well” is intended to mean “safely and deliberately”. As the author states (94): “From the Challenger space shuttle explosion to the Fukushima Daiichi nuclear disaster, the history of technological success is the history of technological failure.” Yes, and those failures, in particular for the cited examples, are due to engineering or operational mistakes. I posit that the same holds for the Internet issues that the book so clearly highlights.

    The author recognizes that (181) “The majority of human Internet users are not in the United States or even in so-called Western countries”, yet the book struck me as being US-centric, to the point of sometimes appearing biased. For example, by never adding “alleged” to references of Russian interference with US elections or cyber-espionage; by adding “alleged” to references of certain US actions; by not mentioning supposed or acknowledged instances of US cyber-activities other than the Snowden revelations; by stating (211) “Energy-grid sensors in the United States should not be easily accessible in Russia” when the converse is also the case. And by positing (88): “One historical feature, and now limitation, of privacy advocacy is that it approaches this area as an individual problem rather than a global economic and political problem.” Non-US advocates have consistently approached this area from the global perspective, see for example here, here and here.

    ***

    Chapter 1 reminds us that, at present, more objects are interconnected than are people, and explains how this results in all companies becoming, in a sense, Internet companies, with the consequence that the (17): “embedding of network sensors and actuators into the physical world has transformed the design and governance of cyber infrastructure into one of the most consequential geopolitical issues of the twenty-first century.” As the author correctly notes (18): “Technical points of control are not neutral – they are sites of struggle over values and power arenas for mediating competing interests.” And (19): “the design of technical standards is political.” And (52): “Architectural constraints create political constraints.”

    Chapter 2 explains how the so-called Internet of Things is more accurately described as a set of cyber-physical systems or “network of everything” that is resulting in (28): “the fundamental integration of material-world systems and digital systems.” And it explains how that integration shapes new policy concerns, in particular with respect to privacy and security (38): “Cybersecurity no longer protects content and data only. It also protects food security and consumer safety.” (Market failures resulting in the current inadequate level of cybersecurity are well explained in the ISOC’s Global Internet Report 2016.)

    Chapter 3 explains how cyber-physical systems will pose an increasing threat to privacy. For example (60): “Privacy complications emerging in embedded toys underscore how all companies are now tech companies that gather and process digital data, not just content intermediaries such as Google but toy companies such as Mattel.” The author joins others in noting (61) that: “In the digital realm generally, it is an understatement to say that privacy is not going well.” As the author correctly notes (61): “Transparency and notice to consumers about data gathering and sharing practices should represent absolute minimal standards of practice. But even this minimal standard is difficult to attain.” I would have added that it is difficult to attain only because of the misguided neo-liberal policies that are still being pursued by the US and its allies, and that perpetuate the current business model of (61): “giving away free services in exchange for data-collection-driven targeted advertising” (for an in-depth discussion of this business model, see here). The author joins others in noting that (62): “This private surveillance is also what has enabled massive government surveillance of citizens”. And that (64): “This revenue model based on online advertising is only sustainable via the constant collection and accrual of personal information.” She notes that (84): “The collection of data via a constant feedback loop of sensors and actuators is part of the service itself.” And that (85): “Notice and choice are already problematic concepts, even when it is feasible to provide notice and gain consent, but they often do not apply at all to the Internet of things.”

    While it is true that traditional notice and consent may be difficult to implement for IoT, I would argue that we need to develop new methods to allow users to control their data meaningfully, and I believe that the author would agree that we don’t want IoT to become another tool for surveillance capitalism. According to the author (84): “Public policy has to realistically acknowledge that much social and economic good emanates from this constant data collection.” In my view, this has to be qualified: the examples given in the book don’t require the kind of pervasive data trading that exists at present. Yes, we need data collection, but not data exploitation as currently practiced. And indeed the author herself makes that point: it is indispensable to move towards the collection of only the data that are (88) “necessary for innovation and operational efficiency”. As she correctly notes (91), data minimization is a core tenet of the European Union’s GDPR.

    The chapter includes a good introduction to the current Internet economic model. While most of us acquiesce, at least to some degree, to that business model, I would dispute the author’s assertion that (62): “it [is] a cultural shift in what counts as the private sphere”, for the reasons explained in detail by Harcourt. Nor would I agree that (64): “It has also changed the norms of what counts as privacy.” Indeed, the EU’s GDPR and related developments elsewhere indicate that the norms imposed by the current business model are not well accepted outside the USA. The author herself refers to developments in the USA (82), the “Fair Information Practice Principles (FIPPs)”; I would have preferred a reference to the COE Convention 108.

    The author asks, I presume rhetorically, whether (65): “voluntary corporate measures suffice for protecting privacy”. The author correctly wonders whether, given the nature of IoT devices and their limited human interfaces (65): “traditional approaches such as notice, disclosure, and consumer choice even apply in cyber-physical systems”. That is, privacy problems are even more challenging to address. Yet the accepted principle is that offline law applies equally online, so I believe that we need to find ways to map the traditional approaches onto the IoT. As the author correctly says (84): “The question of what can and should be done faces inherent challenges” and conflicting values may need to be balanced; however, I don’t think that I can agree that (84): “In the realm of content control, one person’s privacy is another person’s censorship.”

    The author correctly states (88): “Especially in the cyber-physical arena, privacy has broad public purposes, in the same way as freedom of expression is not only about individual rights but also about public power and democratic stability.” See in this respect GDPR Recital 4.

    Chapter 4 explains well how insufficient cybersecurity is creating significant risks for systems that were traditionally not much affected by cyberthreats, that is, how what was previously referred to as the “physical world” is now inextricably tied to the cyberworld. As the book says, citing Bruce Schneier (106): “your security on the Internet depends on the security of millions of Internet-enabled devices, designed and sold by companies you’ve never heard of to consumers who don’t care about your security.” As the author says (109): “IoT devices are vulnerable, and this is a market failure, a political failure, and a technical failure.” (The market failures are well explained here).

    The chapter reminds us that cyberattacks have taken place and might turn into cyberwar; it also reminds us that some cyberattacks have been carried out using malware that had been stockpiled by the US government and that had leaked. The author outlines the debate involving (99): “the question of when governments should notify manufacturers and the public of vulnerabilities they detect, versus stockpiling knowledge of these vulnerabilities and exploits based on these bugs for cyber offense.” In my view, there is little to be debated: as the President of Microsoft said (cited at (123)), governments should agree not to stockpile vulnerabilities and to notify them immediately; further reasons are given at (125); for concrete proposals, see here.

    The author reminds us that (118): “Liability is an area in need of regulatory clarity.” This is reinforced at (225). As the author notes (120): “Those who purchase and install systems have a responsibility to be aware of the product’s privacy and security policies.” This is true, but it can be difficult or impossible in practice for consumers to have sufficient awareness. We expect people to check the pressure of the tires on their cars; we don’t expect them to check the engineering specifications of the brakes: manufacturers are liable for the engineering.

    The author also notes that (118): “the tradition, generally, has been immunity from liability for Internet intermediaries.” This is also discussed at (170). And, citing Jack Balkin (219): “The largest owners of private infrastructure are so powerful that we might even regard them as special-purpose sovereigns. They engage in perpetual struggles for power for control of digital networks with nation states, who, in turn, want to control and co-opt these powerful players.” As the author notes, there are some calls to move away from that tradition (see for example here), in particular because (221): “Much of the power of private intermediaries emanates from massive data collection and monetization practices that underpin business models based on interactive advertising.” I disagree with the author when she posits that (223): “shifting to content-intermediary liability would create a disincentive to innovation and risk.” On the contrary, it might unlock the current non-competitive situation.

    The author asks, I trust rhetorically (121): “To what extent should back doors be built into cyber-physical system and device encryption for law enforcement access in light of the enormous consequences of security problems”. The answer is well known to anyone who understands the technical and policy issues: never (see also here and here). As the book puts the matter (126): “Without various types of encryption, there would be no digital commerce, no online financial systems, and no prospect whatsoever for private communications.”

    Chapter 5 explains why interoperability is at the heart of networks and how it has been evolving as the Internet moves away from being just a communications infrastructure, towards the infrastructure needed to conduct almost all human activities. As the author correctly notes (145): “companies sometimes have an interest in proprietary specifications for anticompetitive effects and to lock in customer bases.” And (158): “social media platforms are, in some ways, closer to the proprietary online systems of the 1990s in which users of one online service could not communicate with users on other systems.” (A proposed solution to that issue can be found here). But it is worse than that (145): “intellectual property rights within connected objects enable manufacturers to control the flow of data and the autonomy and rights of individuals even after an object is purchased outright.” It would have been nice if the author had referenced the extensive criticism of the TRIPS agreements, which are mentioned in the book (146).

    Chapter 6 reviews the “free and open Internet” mantra and reminds us that Internet freedom aspirations articulated by the US (164) “on the surface, comport with U.S. First Amendment traditions, the objective of maintaining the dominance of U.S. multinational tech companies, and a host of foreign-policy interventions contingent on spreading democratic values and attenuating the power of authoritarian regimes. Discourses around Internet freedom have served a variety of interests.” Indeed, as shown by Powers and Jablonski, they have been deliberately used to promote US interests.

    Regarding Net Neutrality, as the author explains (177): “The complexity of the issue is far greater than it is often simplistically portrayed in the media and by policymakers.”

    The author correctly notes that (177) multistakeholder governance is a fetishized ideal. And that (167): “a … globally influential Internet freedom formulation views multistakeholder governance models as a mechanism for democratic ideals in cyberspace.” That view has been disputed, including by the author herself. I regret that, in addition to the works she cites, she did not also cite her 2013 paper on the topic and other literature on multistakeholder governance in general (see the Annex of this submission to an ITU group), in particular the criticism that it is generally not fit for purpose.

    The chapter gives a good example of a novel cyber-physical speech issue (184): “Is a 3D-Printed Gun a Speech Right?”

    Chapter 7 summarizes the situation and makes recommendations. These have largely been covered above. But it is worth repeating some key points (199): “Based on the insufficient state of privacy, security, and interoperability in the IoT, as well as the implications for human safety and societal stability, the prevailing philosophy of a private-sector-led governance structure has to be on the table for debate.” In particular because (199): “local objects are a global Internet governance concern”.

    The chapter also includes a good critique of those who believe that there are some sort of “invariant” architectural principles for the Internet that should guide policies. As the author correctly notes (210): “Setting aside global norm heterogeneity and just focusing on Western democracies, architectural principles are not fixed. Neither should they be fixed. … New architectural principles are needed to coincide with the demands of the contemporary moment.”

    Chapter 8 reminds us that the world has always changed, in particular due to the development of new technologies, and that this is what is happening now (215): “The diffusion of digital technologies into the material world represents a major societal transformation.” And (213): “Another sea change is that Internet governance has become a critical global political concern.” It includes a good discussion of the intermediary liability issues, as summarized above. And reinforces points made above, for example (227): “Voluntary industry self-regulation is inadequate in itself because there is not always an endogenous incentive structure to naturally induce strong security measures.”

    ***

    The author has written extensively on many topics not covered in depth in this book. People who are not familiar with her work might take certain statements in the book out of context and interpret them in ways with which I would not agree. For the sake of clarity, I comment below on some of those statements. This is not meant to be criticism of the book, or the author, but rather my interpretation of certain topics.

    According to the author (40): “Theft of intellectual property – such as trade secrets and industry patents – is a significant economic policy concern.” (The same point is made at (215)). I would argue, on the contrary, that the current intellectual property regime is far too strict and has become dysfunctional, as shown by the under-production of COVID vaccines. While the author uses the term “piracy” to refer to digitally-enabled copyright infringement, it is important to recall that piracy is a grave violent crime, whereas copyright infringement is an entirely different, non-violent crime.

    The author correctly notes (53) that: “The goal of preserving a ‘universal’ Internet with shared, open standards has always been present in Internet policy and design communities.” However, I would argue that that goal was related to the communications infrastructure (layers 1-5 of the OSI model), and not to the topics dealt with in the book. Indeed, as the book well explains (135), there is a clear trend towards proprietary, non-shared solutions for the cyber-physical infrastructure and the applications that it supports.

    The author states (54): “The need for massive pools of globally unique identifiers for embedded systems should provide an incentive for IPv6”. This is correct, but a non-specialist may fail to understand the distinction between addresses (such as an IP address), which identify a place to which information should be sent, and names, which uniquely identify an object or entity regardless of location. In that context, an IP address can be viewed as a temporary identifier of an object. The same caveat applies later (193): “A common name and number space is another defining historical characteristic of the Internet. Every device connected to the Internet, traditionally, has had a globally unique IP address.”
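    For readers who want to see the scale involved, here is a minimal Python sketch using only the standard-library ipaddress module (the figures printed are simply the sizes of the IPv4 and IPv6 address spaces, not anything drawn from the book):

    ```python
    import ipaddress

    # Size of the IPv4 address space: 2**32 addresses, long since exhausted.
    ipv4_total = ipaddress.ip_network("0.0.0.0/0").num_addresses

    # Size of the IPv6 address space: 2**128 addresses, large enough to give
    # every embedded device its own globally unique address.
    ipv6_total = ipaddress.ip_network("::/0").num_addresses

    print(ipv4_total)  # 4294967296
    print(ipv6_total)  # 340282366920938463463374607431768211456

    # Even a globally unique IPv6 address remains an address: it locates an
    # interface on the network. It is not a name identifying the object
    # itself independently of where the object is attached.
    ```

    The sketch illustrates why IPv6 removes the scarcity constraint, while leaving intact the address/name distinction discussed above.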

    The author states (66): “government surveillance primarily occurs via government requests to the private sector to disclose data”. My understanding of the Snowden revelations is different: the US government has its own extensive and pervasive data collection capabilities, quite independently of the private sector’s capabilities.

    According to the author, anonymous speech and behavior on the Internet were facilitated by (77): “Making unique Internet identifiers logical (software defined) rather than physical (linked to specific hardware)”. Again, a non-specialist may be misled. As the author well knows (having written authoritatively on the subject), it was only the shortage of IPv4 addresses that resulted in DHCP and widespread NATting; the original idea was that IP addresses would be statically assigned to specific devices. But they are addresses, not names, so they cannot be hard-coded into the device: otherwise the device could not be moved to another location or network.
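    The point can be illustrated with another small Python sketch using the standard-library ipaddress module (the sample addresses are illustrative only):

    ```python
    import ipaddress

    # A DHCP-assigned address behind a NAT typically comes from a private
    # (RFC 1918) range; countless devices worldwide reuse these ranges, so
    # such an address cannot serve as a stable, unique device identifier.
    lan_addr = ipaddress.ip_address("192.168.1.10")
    print(lan_addr.is_private)  # True

    # A publicly routable address locates one interface on the global
    # Internet at a given moment, but it still changes if the device is
    # moved to another network: it is an address, not a name.
    public_addr = ipaddress.ip_address("8.8.8.8")
    print(public_addr.is_global)  # True
    ```

    In other words, the anonymity the author describes is in part a by-product of address scarcity and the workarounds (DHCP, NAT) it produced, not a deliberate design for anonymous speech.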

    The author posits regarding privacy (91): “Like most areas of Internet governance, it is a multistakeholder problem requiring multistakeholder solutions.” As already noted, the author has analyzed multistakeholder processes and their strengths and shortcomings. The book explains clearly why the private sector has little interest in promoting privacy (as the author says (92): “In many ways, market incentives discourage privacy practices”), and the Internet’s multistakeholder model has visibly failed to address fully the priorities set forth in the 2005 WGIG report: administration of the DNS root zone files and systems; Internet interconnection costs; security; and spam.

    A mention of ENISA (which is cited elsewhere in the book) would have been welcome in the catalog of policy proposals for securing systems (110).

    The author notes (142): “ITU historically provides telecommunication specifications in areas such as Internet telephony.” Non-specialists may not be aware of the fact that the key term here is “such as”: historically, the ITU did far more, and continues to do more, albeit not much in the specific area of Internet telephony.

    According to the author (148): “Similar to W3C specifications, IETF standards are freely published and historically unconstrained by intellectual property rights.” This is not quite correct: the IETF has a RAND (reasonable and non-discriminatory) licensing policy, whereas the W3C requires royalty-free licensing.

    The author states that (153): “The original design of the Internet was itself a radical rethinking of existing architecture.” That is an overstatement: the Internet was an evolution of previous architectures.

    According to the author (156): “Blockchain already underlies a variety of mainstream financial and industrial service implementations.” She does not provide a reference for this statement, which I (and others) find dubious, in particular with respect to the qualifier “mainstream”.

    The author states that IETF engineers (166): “created traditions of bottom-up technical design.” I believe that it would be more accurate to say that the IETF built on and reinforced such traditions, because, since the 19th century, most international standards were designed by bottom-up collaboration of engineers.

    The author posits that (166): “the goal of many standards is to extract royalties via underlying patents”. This may be true for de facto standards, but it is not true for international standards, since IEC, IETF, ISO, and ITU all have RAND policies.

    With respect to the WGIG (178), the non-specialist may not be aware that it was convened by consensus of the UN Member States, and that it addressed many issues other than the management and administration of Internet domain names and addresses, for example security and spam. Most of the issues are still open.

    Regarding the 2012 WCIT (182), what happened was considerably more complex than the short (US-centric) mention in the book.

    According to the author (201): “Data localization requirements, local DNS redirection, and associated calls for Internet sovereignty as an ideological competitor to the multistakeholder model of Internet governance do not match the way cross-border technology works in practice.” This appears to me to contradict the points well made elsewhere in the book to the effect that technology should not blindly drive policies. As already noted, the book (because of its focus) does not discuss the complex economic issues related to data. I don’t think that data localization, which merits a serious economic discussion, should be dismissed summarily as being incompatible with current technology, when in my view it is not. In this context, it is important to stress the counter-productive effects of e-commerce proposals being negotiated, in secret, in trade negotiations (see also here and here). The author does not mention them, no doubt because they are outside the main scope of the book, but perhaps also because they are sufficiently secret that she is not aware of them.

    The author refers to cryptocurrencies (206). It would have been nice if she had also referred to criticism of cryptocurrencies, see for example here.

    ***

    Again, these quibbles are not meant to detract in any way from the value of the book, which explains clearly, insightfully, and forcefully why things are changing and why we cannot continue to pretend that government interventions are not needed. In summary, I would highly recommend this book, in particular to policy-makers.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in Internet governance issues since the inception of the Internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about Internet governance issues for The b2o Review Digital Studies magazine.

    Back to the essay

  • Zachary Loeb — Burn It All (Review of Mullaney, Peters, Hicks and Philip, eds., Your Computer Is on Fire)

    Zachary Loeb — Burn It All (Review of Mullaney, Peters, Hicks and Philip, eds., Your Computer Is on Fire)

    a review of Thomas S. Mullaney, Benjamin Peters, Mar Hicks and Kavita Philip, eds., Your Computer Is on Fire (MIT Press, 2021)

    by Zachary Loeb

    ~

    It often feels as though contemporary discussions about computers have perfected the art of talking around, but not specifically about, computers. Almost every week there is a new story about Facebook’s malfeasance, but usually such stories say little about the actual technologies without which such conduct could not have happened. Stories proliferate about the unquenchable hunger for energy that cryptocurrency mining represents, but the computers eating up that power are usually deemed less interesting than the currency being mined. Debates continue about just how much AI can really accomplish and just how soon it will be able to accomplish even more, but the public conversation winds up conjuring images of gleaming terminators marching across a skull-strewn wasteland instead of rows of servers humming in an undisclosed location. From Zoom to dancing robots, from Amazon to the latest Apple Event, from misinformation campaigns to activist hashtags—we find ourselves constantly talking about computers, and yet seldom talking about computers.

    All of the aforementioned specifics are important to talk about. If anything, we need to be talking more about Facebook’s malfeasance, the energy consumption of cryptocurrencies, the hype versus the realities of AI, Zoom, dancing robots, Amazon, misinformation campaigns, and so forth. But we also need to go deeper. Case in point, though it was a very unpopular position to take for many years, it is now a fairly safe position to say that “Facebook is a problem;” however, it still remains a much less acceptable position to suggest that “computers are a problem.” At a moment in which it has become glaringly obvious that tech companies have politics, there still remains a common sentiment that computers are neutral. And thus such a view can comfortably disparage Bill Gates and Jeff Bezos and Sundar Pichai and Mark Zuckerberg for the ways in which they have warped the potential of computing, while still holding out hope that computing can be a wonderful emancipatory tool if it can just be put in better hands.

    But what if computers are themselves, at least part of, the problem? What if some of our present technological problems have their roots deep in the history of computing, and not just in the dorm room where Mark Zuckerberg first put together FaceSmash?

    These are the sorts of troubling and provocative questions with which the essential new book Your Computer Is on Fire engages. It is a volume that recognizes that when we talk about computers, we need to actually talk about computers. A vital intervention into contemporary discussions about technology, this book wastes no energy on carefully worded declarations of fealty to computers and the Internet; there’s a reason why the book is not titled Your Computer Might Be on Fire but Your Computer Is on Fire.

    The editors of the volume are quite upfront about its confrontational stance; Thomas Mullaney opens the book by declaring that “Humankind can no longer afford to be lulled into complacency by narratives of techno-utopianism or technoneutrality” (4). This is a point that Mullaney drives home as he notes that “the time for equivocation is over” before emphasizing that despite its at moments woebegone tonality, the volume is not “crafted as a call of despair but as a call to arms” (8). While the book sets out to offer a robust critique of computers, Mar Hicks highlights that the editors and contributors of the book shall do this in a historically grounded way, which includes a vital awareness that “there are almost always red flags and warning signs before a disaster, if one cares to look” (14). Unfortunately, many of those who attempted to sound the alarm about the potential hazards of computing were ignored or derided as technophobes. Where Mullaney had described the book as “a call to arms,” Hicks describes what sorts of actions this call may entail: “we have to support workers, vote for regulation, and protest (or support those protesting) widespread harms like racist violence” (23). And though the focus is on collective action, Hicks does not diminish the significance of individual ethical acts, noting powerfully (in words that may be particularly pointed at those who work for the big tech companies): “Don’t spend your life as a conscientious cog in a terribly broken system” (24).

    Your Computer Is on Fire begins like a political manifesto; as the volume proceeds the contributors maintain the sense of righteous fury. In addition to introductions and conclusions, the book is divided into three sections: “Nothing is Virtual” wherein contributors cut through the airy talking points to bring ideas about computing back to the ground; “This is an Emergency” sounds the alarm on many of the currently unfolding crises in and around computing; and “Where Will the Fire Spread” turns a prescient gaze towards trajectories to be mindful of in the swiftly approaching future. Hicks notes, “to shape the future, look to the past” (24), and this is a prompt that the contributors take up with gusto as they carefully demonstrate how the outlines of our high-tech society were drawn long before Google became a verb.

    Drawing attention to the physicality of the Cloud, Nathan Ensmenger begins the “Nothing is Virtual” section by working to resituate “the history of computing within the history of industrialization” (35). Arguing that “The Cloud is a Factory,” Ensmenger digs beneath the seeming immateriality of the Cloud metaphor to extricate the human labor, human agendas, and environmental costs that get elided when “the Cloud” gets bandied about. The role of the human worker hiding behind the high-tech curtain is further investigated by Sarah Roberts, who explores how many of the high-tech solutions that purport to use AI to fix everything are relying on the labor of human beings sitting in front of computers. As Roberts evocatively describes it, the “solutionist disposition toward AI everywhere is aspirational at its core” (66), and this desire for easy technological solutions covers up challenging social realities. While the Internet is often hailed as an American invention, Benjamin Peters discusses the US ARPANET alongside the ultimately unsuccessful network attempts of the Soviet OGAS and Chile’s Cybersyn, in order to show how “every network history begins with a history of the wider world” (81), and to demonstrate that networks have not developed by “circumventing power hierarchies” but by embedding themselves into those hierarchies (88). Breaking through the emancipatory hype surrounding the Internet, Kavita Philip explores the ways in which the Internet materially and ideologically reifies colonial logics of dominance and control, demonstrating how “the infrastructural internet, and our cultural stories about it, are mutually constitutive” (110). Mitali Thakor brings the volume’s first part to a close with a consideration of how the digital age is “dominated by the feeling of paranoia” (120), by discussing the development and deployment of sophisticated surveillance technologies (in this case, for the detection of child pornography).

    “Electronic computing technology has long been an abstraction of political power into machine form” (137): these lines from Mar Hicks eloquently capture the leitmotif that plays throughout the chapters that make up the second part of the volume. Hicks’ comment comes from an exploration of the sexism that has long been “a feature, not a bug” (135) of the computing sector, with particular consideration of the ways in which sexist hiring and firing practices undermined the development of England’s computing sector. Further exploring how the sexism of today’s tech sector has roots in the development of the tech sector, Corinna Schlombs looks to the history of IBM to consider how that company suppressed efforts by workers to organize by framing the company as a family—albeit one wherein father still knew best. The biases built into voice recognition technologies (such as Siri) are delved into by Halcyon Lawrence, who draws attention to the way that these technologies are biased against those with accents, a reflection of the lack of diversity amongst those who design these technologies. In discussing robots, Safiya Umoja Noble explains how “Robots are the dreams of their designers, catering to the imaginaries we hold about who should do what in our societies” (202), and thus these robots reinscribe particular viewpoints and biases even as their creators claim they are creating robots for good. Shifting away from the flashiest gadgets of high-tech society, Andrea Stanton considers the cultural logics and biases embedded in word processing software that treats the demands of languages that are not written left to right as somehow aberrant. Considering how much of computer usage involves playing games, Noah Wardrip-Fruin argues that the limited set of video game logics keeps games from being about very much—a shooter is a shooter regardless of whether you are gunning down demons in hell or fanatics in a flooded ruin dense with metaphors.

    Oftentimes hiring more diverse candidates is hailed as the solution to the tech sector’s sexism and racism, but as Janet Abbate notes in the first chapter of the “Where Will the Fire Spread?” section, this approach generally attempts to force different groups to fit into Silicon Valley’s warped view of what attributes make for a good programmer. Abbate contends that equal representation will not be enough “until computer work is equally meaningful for groups who do not necessarily share the values and priorities that currently dominate Silicon Valley” (266). While computers do things to society, they also perform specific technical functions, and Ben Allen comments on source code to show the power that programmers have to insert nearly undetectable hacks into the systems they create. Returning to the question of code as empowerment, Sreela Sarkar discusses a skills training class held in Seelampur (near New Delhi), to show that “instead of equalizing disparities, IT-enabled globalization has created and further heightened divisions of class, caste, gender, religion, etc.” (308). Turning towards infrastructure, Paul Edwards considers how the speed with which platforms have developed to become infrastructure has been much swifter than the speed with which older infrastructural systems were developed, which he explores by highlighting three examples in various African contexts (FidoNet, M-Pesa, and Free Basics). And Thomas Mullaney closes out the third section with a consideration of the way that the QWERTY keyboard gave rise to pushback and creative solutions from those who sought to type in non-Latin scripts.

    Just as two of the editors began the book with a call to arms, so too the other two editors close the book with a similar rallying cry. In assessing the chapters that had come before, Kavita Philip emphasizes that the volume has chosen “complex, contradictory, contingent explanations over just-so stories” (364). The contributors, and editors, have worked with great care to make it clear that the current state of computers was not inevitable—that things currently are the way they are does not mean they had to be that way, or that they cannot be changed. Eschewing simplistic solutions, Philip notes that language, history, and politics truly matter to our conversations about computing, and that as we seek the way ahead we must be cognizant of all of them. In the book’s final piece, Benjamin Peters sets the computer fire against the backdrop of anthropogenic climate change and the COVID-19 pandemic, noting the odd juxtaposition between the progress narratives that surround technology and the ways in which “the world of human suffering has never so clearly appeared on the brink of ruin” (378). Pushing back against a simple desire to turn things off, Peters notes that “we cannot return the unasked for gifts of new media and computing” (380). Though the book has clearly been about computers, truly wrestling with these matters must force us to reflect on what it is that we really talk about when we talk about computers, and it turns out that “the question of life becomes how do not I but we live now?” (380)

    It is a challenging question, and it provides a fitting end to a book that challenges many of the dominant public narratives surrounding computers. And though the book has emphasized repeatedly how important it is to really talk about computers, this final question powers down the computer to force us to look at our own reflection in the mirrored surface of the computer screen.

    Yes, the book is about computers, but more than that it is about what it has meant to live with these devices—and what it might mean to live differently with them in the future.

    *

    With the creation of Your Computer Is on Fire the editors (Hicks, Mullaney, Peters, and Philip) have achieved an impressive feat. The volume is timely, provocative, wonderfully researched, filled with devastating insights, and composed in such a way as to make the contents accessible to a broad audience. It might seem a bit hyperbolic to suggest that anyone who has used a computer in the last week should read this book, but anyone who has used a computer in the last week should read this book. Scholars will benefit from the richly researched analysis, students will enjoy the forthright tone of the chapters, and anyone who uses computers will come away from the book with a clearer sense of the way in which these discussions matter for them and the world in which they live.

    For what this book accomplishes so spectacularly is to make it clear that when we think about computers and society it isn’t sufficient to just think about Facebook or facial recognition software or computer skills courses—we need to actually think about computers. We need to think about the history of computers, we need to think about the material aspects of computers, we need to think about the (oft-unseen) human labor that surrounds computers, we need to think about the language we use to discuss computers, and we need to think about the political values embedded in these machines and the political moments out of which these machines emerged. And yet, even as we shift our gaze to look at computers more critically, the contributors to Your Computer Is on Fire continually remind the reader that when we are thinking about computers we need to be thinking about deeper questions than just those about machines, we need to be considering what kind of technological world we want to live in. And moreover we need to be thinking about who is included and who is excluded when the word “we” is tossed about casually.

    Your Computer Is on Fire is simultaneously a book that will make you think, and a good book to think with. In other words, it is precisely the type of volume that is so desperately needed right now.

    The book derives much of its power from the willingness on the part of the contributors to write in a declarative style. In this book criticisms are not carefully couched behind three layers of praise for Silicon Valley and odes of affection for smartphones; rather, the contributors stand firm in declaring that there are real problems (with historical roots) and that we are not going to be able to address them by pledging fealty to the companies that have so consistently shown a disregard for the broader world. This tone produces too many wonderful turns of phrase and incendiary remarks to list all of them here, but the broad discussion around computers would be greatly enhanced by more comments like Janet Abbate’s “We have Black Girls Code, but we don’t have ‘White Boys Collaborate’ or ‘White Boys Learn Respect.’ Why not, if we want to nurture the full set of skills needed in computing?” (263) While critics of technology often find themselves having to argue from a defensive position, Your Computer Is on Fire is a book that almost gleefully goes on the offense.

    It almost seems like a disservice to the breadth of contributions to the volume to try to sum up its core message in a few lines, or to attempt to neatly capture the key takeaways in a few sentences. Nevertheless, insofar as the book has a clear undergirding position, beyond the titular idea, it is the one eloquently captured by Mar Hicks thusly:

    High technology is often a screen for propping up idealistic progress narratives while simultaneously torpedoing meaningful social reform with subtle and systemic sexism, classism, and racism…The computer revolution was not a revolution in any true sense: it left social and political hierarchies untouched, at times even strengthening them and heightening inequalities. (152)

    And this is the matter with which each contributor wrestles, as they break apart the “idealistic progress narratives” to reveal the ways that computers have time and again strengthened the already existing power structures…even if many people get to enjoy new shiny gadgets along the way.

    Your Computer Is on Fire is a jarring assessment of the current state of our computer-dependent societies, and of how they came to be the way they are; however, in considering this new book it is worth bearing in mind that it is not the first volume to try to capture the state of computers in a moment in time. That we find ourselves in the present position is, unfortunately, a testament to decades of unheeded warnings.

    One of the objectives that is taken up throughout Your Computer Is on Fire is to counter the techno-utopian ideology that never so much dies as shifts into the hands of some new would-be techno-savior wearing a crown of 1s and 0s. However, even as the mantle of techno-savior shifts from Mark Zuckerberg to Elon Musk, it seems that we may be in a moment when fewer people are willing to uncritically accept the idea that technological progress is synonymous with social progress. Though, if we are being frank, adoring faith in technology remains the dominant sentiment (at least in the US). Furthermore, this is not the first moment in which a growing distrust and dissatisfaction with technological forces has risen, nor is this the first time that scholars have sought to speak out. Therefore, even as Your Computer Is on Fire provides fantastic accounts of the history of computing, it is worthwhile to consider where this new vital volume fits within the history of critiques of computing. Or, to frame this slightly differently, in what ways is the 21st century critique of computing different from the 20th century critique of computing?

    In 1979 the MIT Press published the edited volume The Computer Age: A Twenty Year View. Edited by Michael Dertouzos and Joel Moses, that book brought together a variety of influential figures from the early history of computing, including J.C.R. Licklider, Herbert Simon, Marvin Minsky, and many others. The book was an overwhelmingly optimistic affair, and though the contributors anticipated that the mass uptake of computers would lead to some disruptions, they imagined that all of these changes would ultimately be for the best. Granted, the book was not without a critical voice. The computer scientist turned critic Joseph Weizenbaum was afforded a chapter in a quarantined “Critiques” section from which to cast doubt on the utopian hopes that had filled the rest of the volume. And though Weizenbaum’s criticisms were presented, the book’s introduction politely scoffed at his woebegone outlook, and Weizenbaum’s chapter was followed by not one but two barbed responses, which ensured that his critical voice was not given the last word. Any attempt to assess The Computer Age at this point will likely say as much about the person doing the assessing as about the volume itself, and yet it would take a real commitment to only seeing the positive sides of computers to deny that the volume’s disparaged critic was one of its most prescient contributors.

    If The Computer Age can be seen as a reflection of the state of discourse surrounding computers in 1979, then Your Computer Is on Fire is a blazing demonstration of how greatly those discussions had changed by 2021. This is not to suggest that the techno-utopian mindset that so infused The Computer Age no longer exists. Alas, far from it.

    As the contributors to Your Computer Is on Fire make clear repeatedly, much of the present discussion around computing is dominated by hype and hopes. And a consideration of those conversations in the second half of the twentieth century reveals that hype and hope were dominant forces then as well. Granted, for much of that period (arguably until the mid-1980s, and not really taking off until the 1990s), computers remained technologies with which most people had relatively little direct interaction. The mammoth machines of the 1960s and 1970s were not all top-secret (though some certainly were), but when social critics warned about computers in the 50s, 60s, and 70s they were not describing machines that had become ubiquitous—even if they warned that those machines would eventually become so. Thus, when Lewis Mumford warned in 1956 that:

    In creating the thinking machine, man has made the last step in submission to mechanization; and his final abdication before this product of his own ingenuity has given him a new object of worship: a cybernetic god. (Mumford, 173)

    It is somewhat understandable that his warning would be met with rolled eyes and impatient scoffs. For “the thinking machine” at that point remained isolated enough from most people’s daily lives that the idea that this was “a new object of worship” seemed almost absurd. Though he continued issuing dire predictions about computers, by 1970, when Mumford wrote of the development of “computer dominated society,” this warning could still be dismissed as absurd hyperbole. And when Mumford’s friend, the aforementioned Joseph Weizenbaum, laid out a blistering critique of computers and the “artificial intelligentsia” in 1976, those warnings were still somewhat muddled, as the computer remained largely out of sight and out of mind for large parts of society. Of course, these critics recognized that this “cybernetic god” had not yet become the new dominant faith, but they issued such warnings out of a sense that this was the direction in which things were developing.

    Already by the 1980s it was apparent to many scholars and critics that, despite the hype and revolutionary lingo, computers were primarily retrenching existing power relations while elevating the authority of a variety of new companies. And this gave rise to heated debates about how (and if) these technologies could be reclaimed and repurposed—Donna Haraway’s classic Cyborg Manifesto emerged out of those debates. By the time of 1990’s “Neo-Luddite Manifesto,” wherein Chellis Glendinning pointed to “computer technologies” as one of the types of technologies the Neo-Luddites were calling to be dismantled, the computer was becoming less and less an abstraction and more and more a feature of many people’s daily work lives. Though there is not space here to fully develop this argument, it may well be that the 1990s represent the decade in which many people found themselves suddenly in a “computer dominated society.”  Indeed, though Y2K is unfortunately often remembered as something of a hoax today, delving back into what was written about that crisis as it was unfolding makes it clear that in many sectors Y2K was the moment when people were forced to fully reckon with how quickly and how deeply they had become highly reliant on complex computerized systems. And, of course, much of what we know about the history of computing in those decades of the twentieth century we owe to the phenomenal research that has been done by many of the scholars who have contributed chapters to Your Computer Is on Fire.

    While Your Computer Is on Fire provides essential analyses of events from the twentieth century, as a critique it is very much a reflection of the twenty-first century. It is a volume that represents a moment in which critics are no longer warning “hey, watch out, or these computers might be on fire in the future” but in which critics can now confidently state “your computer is on fire.” In 1956 it could seem hyperbolic to suggest that computers would become “a new object of worship”; by 2021 such faith is on full display. In 1970 it was possible to warn of the threat of “computer dominated society”; by 2021 that “computer dominated society” has truly arrived. In the 1980s it could be argued that computers were reinforcing dominant power relations; in 2021 this is no longer a particularly controversial position. And perhaps most importantly, in 1990 it could still be suggested that computer technologies should be dismantled, but by 2021 the idea of dismantling these technologies that have become so interwoven in our daily lives seems dangerous, absurd, and unwanted. Your Computer Is on Fire is in many ways an acknowledgement that we are now living in the type of society about which many of the twentieth century’s technological critics warned. In the book’s conclusion, Benjamin Peters pushes back against “Luddite self-righteousness” to note that “I can opt out of social networks; many others cannot” (377), and the emergence of this moment, wherein the ability to “opt out” has itself become a privilege, is precisely the sort of danger about which so many of the last century’s critics were so concerned.

    To look back at critiques of computers made throughout the twentieth century is in many ways a fairly depressing activity. For it reveals that many of those who were scorned as “doom mongers” had a fairly good sense of what computers would mean for the world. Certainly, some will continue to mock such figures for their humanism or borderline romanticism, but they were writing and living in a moment when the idea of living without a smartphone had not yet become unthinkable. As the contributors to this essential volume make clear, Your Computer Is on Fire, and yet too many of us still seem to believe that we are wearing asbestos gloves, and that if we suppress the flames of Facebook we will be able to safely warm our toes on our burning laptop.

    What Your Computer Is on Fire achieves so masterfully is to remind its readers that the wired up society in which they live was not inevitable, and what comes next is not inevitable either. And to remind them that if we are going to talk about what computers have wrought, we need to actually talk about computers. And yet the book is also a discomforting testament to a state of affairs wherein most of us simply do not have the option of swearing off computers. They fill our homes, they fill our societies, they fill our language, and they fill our imaginations. Thus, in dealing with this fire a first important step is to admit that there is a fire, and to stop absentmindedly pouring gasoline on everything. As Mar Hicks notes:

    Techno-optimist narratives surrounding high-technology and the public good—ones that assume technology is somehow inherently progressive—rely on historical fictions and blind spots that tend to overlook how large technological systems perpetuate structures of dominance and power already in place. (137)

    And as Kavita Philip describes:

    it is some combination of our addiction to the excitement of invention, with our enjoyment of individualized sophistications of a technological society, that has brought us to the brink of ruin even while illuminating our lives and enhancing the possibilities of collective agency. (365)

    Historically rich, provocatively written, engaging and engaged, Your Computer Is on Fire is a powerful reminder that when it is properly controlled fire can be useful, but when fire is allowed to rage out of control it turns everything it touches to ash. This book is not only a must read, but a must wrestle with, a must think with, and a must remember. After all, the “your” in the book’s title refers to you.

    Yes, you.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focusses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2o Review Digital Studies section.


    Works Cited

    • Lewis Mumford. The Transformations of Man. New York: Harper and Brothers, 1956.


  • Richard Hill — The Curse of Concentration (Review of Cory Doctorow, How to Destroy Surveillance Capitalism)


    a review of Cory Doctorow, How to Destroy Surveillance Capitalism (OneZero, 2021)

    by Richard Hill

    ~

    This short online (free access) book provides a highly readable, inspiring, and powerful complement to Shoshana Zuboff’s The Age of Surveillance Capitalism (which the author qualifies and to some extent criticizes) and Timothy Wu’s The Curse of Bigness. It could be sub-titled (paraphrasing Maistre) “every nation gets the economic system it deserves,” in this case a symbiosis of corporate surveillance and state surveillance, in an economy dominated by, and potentially controlled by, a handful of companies. As documented elsewhere, that symbiosis is not an accident or coincidence. As the author puts the matter: “We need to take down Big Tech, and to do that, we need to start by correctly identifying the problem.”

    What follows is my analysis of the ideas of the book: it does not follow the order in which the ideas are presented in the book. In a nutshell, the author describes the source of the problem: an advertising-based revenue model that requires ever-increasing amounts of data, and thus ever-increasing concentration, coupled with weak anti-trust enforcement, and, worse, government actions that deliberately or inadvertently favor the power of dominant companies. The author describes (as have others) the negative effects this has had for privacy (which, as the author says, “is necessary for human progress”) and democracy; and proposes some solutions: strong antitrust, but also a relatively new idea – imposed interoperability. I will summarize these themes in the order given above.

    However, I will first summarize four important observations that underpin the issues outlined above. The first is that the Internet (and information and communications technologies (ICT) in general) is everything. As the author puts it: “The upshot of this is that our best hope of solving the big coordination problems — climate change, inequality, etc. — is with free, fair, and open tech.”

    The second is that data and information are increasingly important, and don’t fit well into existing private property regimes. This is so in particular because of the way data is currently applied: “Big Tech has a funny relationship with information. When you’re generating information — anything from the location data streaming off your mobile device to the private messages you send to friends on a social network — it claims the rights to make unlimited use of that data. But when you have the audacity to turn the tables — to use a tool that blocks ads or slurps your waiting updates out of a social network and puts them in another app that lets you set your own priorities and suggestions or crawls their system to allow you to start a rival business — they claim that you’re stealing from them.”

    The third is that the time has come to reject the notion that ICTs, the Internet, and the companies that dominate those industries (“Big Tech”) are somehow different from everything else and should be treated differently: “I think tech is just another industry, albeit one that grew up in the absence of real monopoly constraints. It may have been first, but it isn’t the worst nor will it be the last.”

    The fourth is that network effects favor concentration: “A decentralization movement has tried to erode the dominance of Facebook and other Big Tech companies by fielding ‘indieweb’ alternatives – Mastodon as a Twitter alternative, Diaspora as a Facebook alternative, etc. – but these efforts have failed to attain any kind of liftoff. Fundamentally, each of these services is hamstrung by the same problem: every potential user for a Facebook or Twitter alternative has to convince all their friends to follow them to a decentralized web alternative in order to continue to realize the benefit of social media. For many of us, the only reason to have a Facebook account is that our friends have Facebook accounts, and the reason they have Facebook accounts is that we have Facebook accounts.”

    Turning to the main ideas of the book, the first is that the current business model is based on advertising: “ad-driven Big Tech’s customers are advertisers, and what companies like Google and Facebook sell is their ability to convince you to buy stuff. Big Tech’s product is persuasion. The services — social media, search engines, maps, messaging, and more — are delivery systems for persuasion. Rather than finding ways to bypass our rational faculties, surveillance capitalists like Mark Zuckerberg mostly do one or more of three things: segment the market, attempt to deceive it, and exploit dominant positions.”

    Regarding segmentation, the author states: “Facebook is tops for segmenting.” However, despite the fine targeting, its ads don’t always work: “The solution to Facebook’s ads only working one in a thousand times is for the company to try to increase how much time you spend on Facebook by a factor of a thousand. Rather than thinking of Facebook as a company that has figured out how to show you exactly the right ad in exactly the right way to get you to do what its advertisers want, think of it as a company that has figured out how to make you slog through an endless torrent of arguments even though they make you miserable, spending so much time on the site that it eventually shows you at least one ad that you respond to.”

    Thus it practices a form of deception: “So Facebook has to gin up traffic by sidetracking its own forums: every time Facebook’s algorithm injects controversial materials – inflammatory political articles, conspiracy theories, outrage stories – into a group, it can hijack that group’s nominal purpose with its desultory discussions and supercharge those discussions by turning them into bitter, unproductive arguments that drag on and on. Facebook is optimized for engagement, not happiness, and it turns out that automated systems are pretty good at figuring out things that people will get angry about.”

    The author describes how the current level of concentration is due not only to network effects and market forces, but also to “tactics that would have been prohibited under classical, pre-Ronald-Reagan antitrust enforcement standards.”

    This is compounded by the current copyright regime: “If our concern is that markets cease to function when consumers can no longer make choices, then copyright locks should concern us at least as much as influence campaigns. An influence campaign might nudge you to buy a certain brand of phone; but the copyright locks on that phone absolutely determine where you get it serviced, which apps can run on it, and when you have to throw it away rather than fixing it. Copyright locks are a double whammy: they create bad security decisions that can’t be freely investigated or discussed.”

    And it is due to inadequate government intervention: “Only the most extreme market ideologues think that markets can self-regulate without state oversight. Markets need watchdogs – regulators, lawmakers, and other elements of democratic control – to keep them honest. When these watchdogs sleep on the job, then markets cease to aggregate consumer choices because those choices are constrained by illegitimate and deceptive activities that companies are able to get away with because no one is holding them to account. Many of the harms of surveillance capitalism are the result of weak or nonexistent regulation. Those regulatory vacuums spring from the power of monopolists to resist stronger regulation and to tailor what regulation exists to permit their existing businesses.”

    For example as the author documents, the penalties for leaking data are negligible, and “even the most ambitious privacy rules, such as the EU General Data Protection Regulation, fall far short of capturing the negative externalities of the platforms’ negligent over-collection and over-retention, and what penalties they do provide are not aggressively pursued by regulators.”

    Yet we know that data will leak and can be used for identity theft with major consequences: “For example, attackers can use leaked username and password combinations to hijack whole fleets of commercial vehicles that have been fitted with anti-theft GPS trackers and immobilizers or to hijack baby monitors in order to terrorize toddlers with the audio tracks from pornography. Attackers use leaked data to trick phone companies into giving them your phone number, then they intercept SMS-based two-factor authentication codes in order to take over your email, bank account, and/or cryptocurrency wallets.”

    But we should know what to do: “Antitrust is a market society’s steering wheel, the control of first resort to keep would-be masters of the universe in their lanes. But Bork and his cohort ripped out our steering wheel 40 years ago. The car is still barreling along, and so we’re yanking as hard as we can on all the other controls in the car as well as desperately flapping the doors and rolling the windows up and down in the hopes that one of these other controls can be repurposed to let us choose where we’re heading before we careen off a cliff. It’s like a 1960s science-fiction plot come to life: people stuck in a ‘generation ship,’ plying its way across the stars, a ship once piloted by their ancestors; and now, after a great cataclysm, the ship’s crew have forgotten that they’re in a ship at all and no longer remember where the control room is. Adrift, the ship is racing toward its extinction, and unless we can seize the controls and execute emergency course correction, we’re all headed for a fiery death in the heart of a sun.”

    We know why nobody is in the control room: “The reason the world’s governments have been slow to create meaningful penalties for privacy breaches is that Big Tech’s concentration produces huge profits that can be used to lobby against those penalties – and Big Tech’s concentration means that the companies involved are able to arrive at a unified negotiating position that supercharges the lobbying.”

    But it’s worse than lack of control: not only have governments failed to enforce antitrust laws, they have actively favored mass collection of data, for their own purposes: “Any hard limits on surveillance capitalism would hamstring the state’s own surveillance capability. … At least some of the states’ unwillingness to take meaningful action to curb surveillance should be attributed to this symbiotic relationship. There is no mass state surveillance without mass commercial surveillance. … Monopolism is key to the project of mass state surveillance. … A concentrated tech sector that works with authorities is a much more powerful ally in the project of mass state surveillance than a fragmented one composed of smaller actors.” The author documents how this is the case for Amazon’s Ring.

    As the author says: “This mass surveillance project has been largely useless for fighting terrorism: the NSA can only point to a single minor success story in which it used its data collection program to foil an attempt by a U.S. resident to wire a few thousand dollars to an overseas terror group. It’s ineffective for much the same reason that commercial surveillance projects are largely ineffective at targeting advertising: The people who want to commit acts of terror, like people who want to buy a refrigerator, are extremely rare. If you’re trying to detect a phenomenon whose base rate is one in a million with an instrument whose accuracy is only 99%, then every true positive will come at the cost of 9,999 false positives.”

    And the story gets worse and worse: “In the absence of a competitive market, lawmakers have resorted to assigning expensive, state-like duties to Big Tech firms, such as automatically filtering user contributions for copyright infringement or terrorist and extremist content or detecting and preventing harassment in real time or controlling access to sexual material. These measures put a floor under how small we can make Big Tech because only the very largest companies can afford the humans and automated filters needed to perform these duties. But that’s not the only way in which making platforms responsible for policing their users undermines competition. A platform that is expected to police its users’ conduct must prevent many vital adversarial interoperability techniques lest these subvert its policing measures.”

    So we get into a vicious circle: “To the extent that we are willing to let Big Tech police itself – rather than making Big Tech small enough that users can leave bad platforms for better ones and small enough that a regulation that simply puts a platform out of business will not destroy billions of users’ access to their communities and data – we build the case that Big Tech should be able to block its competitors and make it easier for Big Tech to demand legal enforcement tools to ban and punish attempts at adversarial interoperability.”

    And into a long-term conundrum: “Much of what we’re doing to tame Big Tech instead of breaking up the big companies also forecloses on the possibility of breaking them up later. Yet governments confronting all of these problems all inevitably converge on the same solution: deputize the Big Tech giants to police their users and render them liable for their users’ bad actions. The drive to force Big Tech to use automated filters to block everything from copyright infringement to sex-trafficking to violent extremism means that tech companies will have to allocate hundreds of millions to run these compliance systems.” Such rules “are not just death warrants for small, upstart competitors that might challenge Big Tech’s dominance but who lack the deep pockets of established incumbents to pay for all these automated systems. Worse still, these rules put a floor under how small we can hope to make Big Tech.”

    The author documents how the curse of concentration is not restricted to ICTs and the Internet. For example: “the degradation of news products long precedes the advent of ad-supported online news. Long before newspapers were online, lax antitrust enforcement had opened the door for unprecedented waves of consolidation and roll-ups in newsrooms.” However, as others have documented in detail, the current Internet advertising model has weakened conventional media, with negative effects for democracy.

    Given the author’s focus on weak antitrust enforcement as the root of the problems, it’s not surprising that he sees antitrust as a solution: “Today, we’re at a crossroads where we’re trying to figure out if we want to fix the Big Tech companies that dominate our internet or if we want to fix the internet itself by unshackling it from Big Tech’s stranglehold. We can’t do both, so we have to choose. If we’re going to break Big Tech’s death grip on our digital lives, we’re going to have to fight monopolies. I believe we are on the verge of a new “ecology” moment dedicated to combating monopolies. After all, tech isn’t the only concentrated industry nor is it even the most concentrated of industries. You can find partisans for trustbusting in every sector of the economy. … First we take Facebook, then we take AT&T/WarnerMedia.”

    It may be hard to break up big tech, but it’s worth starting to work on it: “Getting people to care about monopolies will take technological interventions that help them to see what a world free from Big Tech might look like.”

    In particular, the author stresses a relatively new idea: adversarial compatibility, that is, forced interoperability: “adversarial compatibility reverses the competitive advantage: If you were allowed to compete with Facebook by providing a tool that imported all your users’ waiting Facebook messages into an environment that competed on lines that Facebook couldn’t cross, like eliminating surveillance and ads, then Facebook would be at a huge disadvantage. It would have assembled all possible ex-Facebook users into a single, easy-to-find service; it would have educated them on how a Facebook-like service worked and what its potential benefits were; and it would have provided an easy means for disgruntled Facebook users to tell their friends where they might expect better treatment. Adversarial interoperability was once the norm and a key contributor to the dynamic, vibrant tech scene, but now it is stuck behind a thicket of laws and regulations that add legal risks to the tried-and-true tactics of adversarial interoperability. New rules and new interpretations of existing rules mean that a would-be adversarial interoperator needs to steer clear of claims under copyright, terms of service, trade secrecy, tortious interference, and patent.”

    In conclusion: “Ultimately, we can try to fix Big Tech by making it responsible for bad acts by its users, or we can try to fix the internet by cutting Big Tech down to size. But we can’t do both. To replace today’s giant products with pluralistic protocols, we need to clear the legal thicket that prevents adversarial interoperability so that tomorrow’s nimble, personal, small-scale products can federate themselves with giants like Facebook, allowing the users who’ve left to continue to communicate with users who haven’t left yet, reaching tendrils over Facebook’s garden wall that Facebook’s trapped users can use to scale the walls and escape to the global, open web.”

    In this context, it is important to stress the counter-productive effects of e-commerce proposals being negotiated, in secret, in trade negotiations (see also here and here). The author does not mention them, perhaps because they are sufficiently secret that he is not aware of them.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.

    Back to the essay

  • Richard Hill — “Free” Isn’t Free (Review of Michael Kende, The Flip Side of Free)

    Richard Hill — “Free” Isn’t Free (Review of Michael Kende, The Flip Side of Free)

    a review of Michael Kende, The Flip Side of Free: Understanding the Economics of the Internet (MIT Press, 2021)

    by Richard Hill

    ~

    This book is a must-read for anyone who wishes to engage in meaningful discussions of Internet governance, which will increasingly involve economic issues (17-20). It explains clearly why we don’t have to pay in money for services that are obviously expensive to provide. Indeed, as we all know, we get lots of so-called free services on the Internet: search facilities, social networks, e-mail, etc. But, as the old saying goes “there ain’t no such thing as a free lunch.” It costs money to provide all those Internet services (10), and somebody has to pay for them somehow. In fact, users pay for them, by allowing (often unwittingly: 4, 75, 92, 104, 105) the providers to collect personal data which is then aggregated and used to sell other services (in particular advertising, 69) at a large profit. The book correctly notes that there are both advantages (79) and disadvantages (Chapters 5-8) to the current regime of surveillance capitalism. Had I written a book on the topic, I would have been more critical and would have preferred a subtitle such as “The Triumph of Market Failures in Neo-Liberal Regimes.”

    Michael Kende is a Senior Fellow and Visiting Lecturer at the Graduate Institute of International and Development Studies, Geneva, a Senior Adviser at Analysys Mason, a Digital Development Specialist at the World Bank Group, and former Chief Economist of the Internet Society. He has worked as an academic economist at INSEAD and as a US regulator at the Federal Communications Commission. In this clearly written and well researched book, he explains, in laymen’s terms, the seeming paradox of “free” services that nevertheless yield big profits.

    The secret is to exploit the monetary value of something that had some, but not much, value until a bit over twenty years ago: data (63). The value of data is now so large that the companies that exploit it are the most valuable companies in the world, worth more than old giants such as producers of automobiles or petroleum. In fact data is so central to today’s economy that, as the author puts it (143): “It is possible that a new metric is needed to measure market power, especially when services are offered for free. Where normally a profitable increase in price was a strong metric, the new metric may be the ability to profitably gather data – and monetize it through advertising – without losing market share.” To my knowledge, this is an original idea, and it should be taken seriously by anyone interested in the future evolution of, not just the Internet, but society in general (for the importance of data, see for example the annex of this paper, and also here).

    The core value of this book lies in Chapters 5 through 10, which provide economic explanations – in easy-to-understand lay language – of the current state of affairs. They cover the essential elements: the importance of data, and why a few companies have dominant positions. Readers looking for somewhat more technical economic explanations may consider reading this handbook and readers looking for the history of the geo-economic policies that resulted in the current state of affairs can read the books reviewed here and here.

    Chapter 5 of the book explains why most of us trade off the privacy of our data in exchange for “free” services: the benefits may outweigh the risks (88), we may underestimate the risks (89), and we may not actually know the risks (91, 92, 105). As the author correctly notes (99-105), there likely are market failures that should be corrected by government action, such as data privacy laws. The author mentions the European Union GDPR (100); I think that it is also worth mentioning the less known, but more widely adopted, Council of Europe Convention (108). And I would have preferred an even more robust criticism of jurisdictions that allow data brokers to operate secretively (104).

    Chapter 6 explains how market failures have resulted in inadequate security in today’s Internet. In particular users cannot know if a product has an adequate level of security (information asymmetry) and one user’s lack of security may not affect him or her, but may affect others (negative externalities). As the author says, there is a need to develop security standards (e.g. devices should not ship with default administrator passwords) and to impose liability for companies that market insecure products (120, 186).
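    The security-standard point (devices should not ship with default administrator passwords) can be made concrete with a small sketch. This is purely illustrative and not drawn from the book; the function name and the password list below are invented:

    ```python
    # Toy illustration of the kind of security standard the chapter
    # advocates: a device refuses to activate until the factory-default
    # credential has been replaced. All names here are hypothetical.

    COMMON_DEFAULTS = {"admin", "password", "1234", "root", ""}

    def may_activate(admin_password: str) -> bool:
        """Permit activation only with a non-default, non-trivial password."""
        return admin_password not in COMMON_DEFAULTS and len(admin_password) >= 8

    print(may_activate("admin"))        # → False
    print(may_activate("s7!kQ-plume"))  # → True
    ```

    Mandating such a check at first boot addresses the information asymmetry the chapter identifies: the buyer need not be able to audit the device to benefit from the rule.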

    Chapter 7 explains well the economic concepts of economies of scale and network effect (see also 23), how they apply to the Internet, and why (122-129) they facilitated the emergence of the current dominant platforms (such as Amazon, Facebook, Google, and their Chinese equivalents). This results in a winner-takes-all situation: the best company becomes the only significant player (133-137). At present, competition policy (140-142) has not dealt with this issue satisfactorily and innovative approaches that recognize the central role and value of data may be needed. I would have appreciated an economic discussion of how much of the gig economy (or at least some of it) is not based on actual innovation (122), but on violating labor laws or housing and consumer protection laws. I would also have expected a more extensive discussion of two-sided markets (135): while the topic is technical, I believe that the author has the skills to explain it clearly for laypeople. It is a pity that the author didn’t explore, at least briefly, the economic issues relating to the lack of standardization, and interoperability, of key widely used services, such as teleconferencing: nobody would accept having to learn to use a plethora of systems in order to make telephone calls; why do we accept that for video calls?

    The chapter correctly notes that data is the key (143-145) and notes that data sharing (145-147, 187, 197) may help to reintroduce competition. While it is true that data is in principle non-rivalrous (194), in practice at present it is hoarded and treated as private property by those who collect it. It would have been nice if the author had explored methods for ensuring the equitable distribution of the value added of data, but that would no doubt have required an extensive discussion of equity. It is a pity that the author didn’t discuss the economic implications, and possible justification, of providing certain base services (e.g. e-mail, search) as public services: after all, if physical mail is a public service, why shouldn’t e-mail also be a public service?

    Chapter 8 documents the digital divide: access to the Internet is much less affordable, and widespread, in developing countries than it is in developed countries. As the author points out, this is not a desirable situation, and he outlines solutions (including infrastructure sharing and universal service funds (157)), as have others (for example here, here, here, and here). It would have been nice if the author had explored how peering (48) may disadvantage developing countries (in particular because much of their content is hosted abroad (60, 162)); and evaluated the economics of relying on large (and hence efficient and low-cost) data centers in hubs as opposed to local hosting (which has lower transmission costs but higher operating costs); but perhaps those topics would have strayed from the main theme of the book. The author correctly identifies the lack of payment systems as a significant hindrance to greater adoption of e-commerce in developing countries (164); and, of course, the relative disadvantage with respect to data of companies in developing countries (170, 195).

    Chapter 9 explains why security and trust on the Internet must be improved, and correctly notes that increasing privacy will not necessarily increase trust (183). The Chapter reiterates some of the points outlined above, and rightly concludes: “There is good reason to raise the issue [of lack of trust] when seeing the market failures taking place today with cybersecurity, sometimes based on the most easily avoidable mistakes, and the lack of efforts to fix them. If we cannot protect ourselves today, what about tomorrow?” (189)

    Chapter 10 correctly argues that change is needed, and outlines the key points: “data is the basis for market power; lack of data is the hidden danger of the digital divide; and data will train the algorithms of the future AI” (192). Even when things go virtual, there is a role for governments: “who but governments could address market power and privacy violations and respond to state-sponsored attacks against their citizens or institutions?” (193) Data governance will be a key topic for the future: “how to leverage the unique features of data and avoid the costs: how to generate positive good while protecting privacy and security for personal data; how to maintain appropriate property rights to reward innovation and investment while checking market power; how to enable machine learning while allowing new companies strong on innovation and short on data to flourish; how to ensure that the digital divide is not replaced by a data divide.” (195)

    Chapters 1 through 4 purport to explain how certain technical features of the Internet condition its economics. The chapters will undoubtedly be useful for people who don’t have much knowledge of telecommunication and computer networks, but they are unfortunately grounded in an Internet-centric view that does not, in my view, accord sufficient weight to the long history of telecommunications, and, consequently, considers as inevitable things that were actually design choices. It is important to recall that the Internet was originally designed as a national (US) non-public military and research network (27-28). As such, it originally provided only for 7-bit ASCII character sets (thus excluding characters with accents), it did not provide for usage-based billing, and it assumed that end-to-end encryption could be used to provide adequate security (108). It was not designed to allow insecure end-user devices (such as personal computers) to interconnect on a global scale.

    The Internet was originally funded by governments, so when it was privatized, some method of funding other than conventional usage charges had to be invented (such as receiver pays (53), and advertising). It is correct (39, 44) that differences in pricing are due to differences in technology, but only because the Internet technologies were not designed to facilitate consumption/volume-based pricing. I would have expected an economics-based discussion of how this makes it difficult to optimize networks, which always have choke points (54-55). For example, I am connected by DSL, and I pay for a set bandwidth, which is restricted by my ISP. While the fiber can carry higher bandwidth (I just have to pay more for it), at any given time (as the author correctly notes) my actual bandwidth depends on what my neighbours who share the same multiplexor are doing. If one of my neighbours is streaming full-HD movies all day long, my performance will degrade, yet they may or may not be paying the same price as me (55). This is not economically efficient. Thus, contrary to what the author posits (46), best-effort packet switching (the Internet model) is not always more efficient than circuit-switching: if guaranteed quality of service is needed, circuit-switching can be more efficient than paying for more bandwidth, even if, in case of overload, service is denied rather than being “merely” degraded (those of us who have had to abandon an Internet teleconference because of poor quality will appreciate that degradation can equal service denial; and musicians who have tried to perform virtually during the pandemic would have appreciated a guaranteed quality of service that would have ensured synchronization between performers and between video and sound).
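    The contention effect described above (neighbours sharing a multiplexor degrading each other’s throughput) can be illustrated with a toy proportional-sharing calculation. The capacity and demand figures are invented, and real ISPs use far more sophisticated queueing disciplines:

    ```python
    # Toy contention model for a shared access link: when total demand
    # exceeds capacity, each subscriber's rate is scaled down in
    # proportion to what they ask for. Numbers are illustrative only.

    def per_user_rate(link_mbps: float, demands_mbps: list) -> list:
        """Split a shared link's capacity proportionally when oversubscribed."""
        total = sum(demands_mbps)
        if total <= link_mbps:
            return list(demands_mbps)  # no contention: everyone is satisfied
        scale = link_mbps / total
        return [d * scale for d in demands_mbps]

    # A 100 Mb/s multiplexor; one neighbour streams HD all day (80 Mb/s),
    # three others browse modestly (10 Mb/s each).
    rates = per_user_rate(100, [80, 10, 10, 10])
    print([round(r, 1) for r in rates])  # → [72.7, 9.1, 9.1, 9.1]
    ```

    The light users lose nearly 10% of their throughput to the heavy streamer while (under flat-rate pricing) paying the same price, which is the economic inefficiency the paragraph points to.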

    As the author correctly notes (59), some form of charging is necessary when resources are scarce; and (42, 46, 61) it is important to allocate scarcity efficiently. It’s a pity that the author didn’t explore the economics of usage-based billing, and dedicated circuits, as methods for the efficient allocation of scarcity (again, in the end there is always a scarce resource somewhere in the system). And it’s a pity that he didn’t dig into the details of the economic factors that result in video traffic being about 70% of all traffic (159): is that due to commercial video-on-demand services (such as Netflix), or to user file sharing (such as YouTube) or to free pornography (such as PornHub)? In addition, I would have appreciated a discussion of the implications of the receiver-pays model, considering that receivers pay not only for the content they requested (e.g. Wikipedia pages), but also for content that they don’t want (e.g. spam) or didn’t explicitly request (e.g. advertising).

    The mention in passing of the effects of the Internet on democracy (6) fails to recognize the very deleterious indirect effects resulting from the decline of traditional media. Contrary to what the book implies (7, 132), breaking companies up would not necessarily be deleterious, and making platforms responsible for content would not necessarily stifle innovation, even if such measures could have downsides.

    It is true (8) that anything can be connected to the Internet (albeit with a bit more configuration than the book implies), but it is also true that this facilitates phishing, malware attacks, spoofing, abuse of social networks, and so forth.

    Contrary to what the author implies (22), ICT standards have always been free to use (with some exceptions relating to intellectual property rights; further, the exceptions allowed by IETF are the same as those allowed by ITU and most other standards-making bodies (34)). Core Internet standards have always been free to access online, whereas that was not the case in the past for telecommunications standards; however, that has changed, and ITU telecommunications standards are also freely available online. While it is correct (24) that access to traditional telecommunication networks was tightly controlled, and that early data networks were proprietary, traditional telecommunications networks and later data networks were based on publicly-available standards. While it is correct (31) that anybody can contribute to Internet standards-making, in practice the discussions are dominated by people who are employed by companies that have a vested interest in the standards (see for example pp. 149-152 of the book reviewed here, and Chapters 5 and 6 of the book reviewed here); further, W3C (32) and IEEE (33) are membership organizations, as are the more traditional standardization bodies. While users of standards (in particular manufacturers) have a role in making Internet standards, that is the case for most standards-making; end-users do not have a role in making Internet standards (32). Regarding standards (33), the author fails to mention the key role of ITU-R with respect to the availability of WiFi spectrum and of ITU-T with respect to xDSL (51) and compression.

    The OSI Model (26) was a joint effort of CCITT/ITU, IEC, and ISO. Contrary to what the author implies (29), e-mail existed in some form long before the Internet, albeit as proprietary systems, and there were other efforts to standardize e-mail; it is a pity that the author didn’t provide an economic analysis of why SMTP prevailed over more secure e-mail protocols, and how its lack of billing features facilitates spam (I have been told that the “simple” in SMTP refers to absence of the security and billing features that encumbered other e-mail protocols).

    While much of the Internet is decentralized (30), so is much of the current telephone system. On the other hand, the Internet’s naming and addressing is far more centralized than that of telephony.

    However, these criticisms of specific bits of Chapters 1 through 4 do not in any way detract from the value of the rest of the book which, as already mentioned, should be required reading for anyone who wishes to engage in discussions of Internet-related matters.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.

    Back to the essay

  • Arne De Boever  — The End of Art (Once Again)

    Arne De Boever — The End of Art (Once Again)

    by Arne De Boever

    ~

    Where they burn books, they will also ultimately burn people.
    —Heinrich Heine

    You Morons

    In early March 2021, a group of “tech and art enthusiasts” who make up the company Injective Protocol[1] burnt Banksy’s work Morons (White) (2006), which they had previously acquired from Taglialatella Galleries for $95,000.[2] At first sight, the burning could be read as performance art in the spirit of Banksy’s Morons (White), which shows an art auction where a canvas featuring the text “I CAN’T BELIEVE YOU MORONS ACTUALLY BUY THIS SHIT” is up for sale (and going for $750,450). As such, the performance would take further Banksy’s own criticism of the art market, a market whose dialectic has easily reappropriated Banksy’s criticism as part of its norm and turned it into economic value. The burning of the Banksy would then seek to more radically negate the value of the work of art that Banksy’s Morons (White) challenges but cannot quite escape as long as it remains a valuable work of art.

    However, such negation was not the goal of the burning. As the tech and art enthusiast who set the Banksy aflame explained, the burning was in fact accomplished as part of a financial investment, and to inspire other artists. In other words, the burning in fact confirmed the art market’s norm rather than challenging it, and it encouraged other artists to make work that does the same. You see, before Banksy’s Morons (White) was burnt, Injective Protocol had recorded the work as what is called a non-fungible token or NFT in the blockchain. This means that for the work’s digital image, a unique, original code was created; that code—which is what you buy if you buy an NFT—is the new, original, NFT artwork, henceforth owned by Injective Protocol even if digital copies of Banksy’s Morons (White) of course still circulate as mere symbols of that code.[3] Such ownership, and the financial investment as which it was intended, required the burning of the material Banksy because Injective Protocol sought to relocate the primary value of the work into the NFT artwork—something that could only be accomplished if the original Banksy was destroyed. The goal of the burning was thus to relocate the value of the original in the derivative, which had a bigger financial potential than the original Banksy.

    The Banksy burning was perhaps an unsurprising development for those who have an interest in art and cryptocurrencies and have been following the rise of cryptoart. Cryptoart is digital art that is recorded in the blockchain as an NFT. That makes cryptoart “like” bitcoin, which is similarly recorded in the blockchain: each bitcoin is tied to a unique, original code that is recorded in a digital ledger where all the transactions of bitcoin are tracked. As an NFT, a digital artwork is similarly tied to a unique, original code that marks its provenance. The main difference between bitcoin and an NFT is that the former, as currency, is fungible, whereas the latter, as art, is not.[4] Now, NFTs were initially created “next to” already existing non-digital art, as a way to establish provenance for digital images and artworks. But as such images and artworks began to accrue value, and began to comparatively accrue more value than already existing non-digital art, the balance in the art market shifted, and NFTs came to be considered more valuable investments than already existing works of non-digital art.
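    The provenance mechanism described above (a unique code tied to a digital image, recorded in a ledger that tracks ownership) can be caricatured in a few lines of Python. This is a toy model only: real NFTs are minted on a blockchain via smart-contract standards such as Ethereum’s ERC-721, and every name below is invented for illustration.

    ```python
    import hashlib

    # A toy "ledger": token id -> current owner. Real NFT ledgers are
    # distributed blockchains; this dict merely illustrates the principle.
    ledger = {}

    def mint_token(image_bytes: bytes, owner: str) -> str:
        """Derive a unique, original code from the image and record its owner."""
        token_id = hashlib.sha256(image_bytes).hexdigest()
        ledger[token_id] = owner
        return token_id

    token = mint_token(b"<bytes of the digital image>", "Injective Protocol")
    # Copies of the image circulate freely; the ledger records a single
    # owner for the non-fungible token, the copies remaining, in the
    # essay's terms, mere symbols of that code.
    print(ledger[token])  # → Injective Protocol
    ```

    The sketch makes the essay’s point legible: nothing about the image itself changes when it is “minted”; what is created and owned is only the code and its ledger entry.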

    The burning of Banksy’s Morons (White) was the obvious next step in that development: let us replace the already existing work of non-digital art by an NFT, destroy the already existing work of non-digital art, and relocate the value of the work into the NFT as part of a financial investment. It realizes the dialectic of an art market that will not hesitate to destroy an already existing non-digital work of art (and replace it with an NFT) if it will drive up financial value. The auction houses that have sold NFTs are complicit in this process.

    Crypto Value = Exhibition Value + Cult Value

    The digital may at some point have held the promise of a moving away from exceptionalism–the belief that the artist and the work of art are exceptional, which is tied to theories of the artist as genius and the unresolved role of the fake and the forgery in art history–as the structuring logic of our understanding of the artist and the work of art. The staged burning of the Banksy does not so much realize that promise as relocate the continued dominance of exceptionalism—and its ties to capitalism, even if the work of art is of course an exceptional commodity that does not truly fit the capitalist framework—in the digital realm. The promise of what artist and philosopher Hito Steyerl theorized as “the poor image”[5] is countered in the NFT as a decidedly “rich image”, or rather, as the rich NFT artwork (because we need to distinguish between the NFT artwork/ the code and the digital image, a mere symbol that is tied to the code). Art, which in the part of its history that started with conceptual art in the early 1970s had started realizing itself—parallel to the rise of finance and neoliberalism–as a financial instrument, with material artworks functioning as means to hedge against market crashes (as James Franco’s character in Isaac Julien’s Playtime [2014] discusses[6]), has finally left the burden of its materiality behind to become a straight-up financial instrument, a derivative that has some similarities to a cryptocurrency like bitcoin. Art has finally realized itself as what it is: non-fungible value, one of finance’s fictions.[7]

    Although the video of the Banksy burning might shock, and make one imagine (because of its solicitation to other tech enthusiasts and artists) an imminent future in which all artworks will be burnt so as to relocate their primary value in an NFT tied to the artwork’s digital image, such a future actually does not introduce all that much difference with respect to today. Indeed, we are merely talking about a relocation of value, about a relocation of the art market. The market’s structure, value’s structure, remain the same. In fact, the NFT craze demonstrates how the artwork’s structuring logic, what I have called aesthetic exceptionalism,[8] realizes itself in the realm of the digital where, for a brief moment, one may have thought it could have died. Indeed, media art and digital art more specifically seemed to hold the promise of an art that would be more widely circulated, where the categories of authorship, value, and ownership were less intimately connected, and could perhaps even—see Steyerl; but the argument goes back to Walter Benjamin’s still influential essay on the copy[9]—enable a communist politics. Such a communist politics would celebrate the copy against the potentially fascist values of authenticity, creativity, originality, and eternal value that Benjamin brings up at the beginning of his essay. But no: with NFT, those potentially fascist values are in fact realizing themselves once again in the digital realm, and in a development that Benjamin could not have foreseen, “the aura” becomes associated with the NFT artwork—not even the digital image of an artwork but a code as which the image lies recorded in the blockchain. Because the NFT artwork is a non-fungible token, one could argue that it is even more of an original than the digital currencies with which it is associated. After all, bitcoin is still a medium of exchange, whereas an NFT is not. In the same way that art is not money, NFT is not bitcoin, even if the NFT needs to be understood (as I suggested previously) as one of finance’s fictions.

    What’s remarkable here is not so much that a Banksy is burnt, or that other artworks may in the future be burnt. What’s remarkable is the power of aesthetic exceptionalism: an exceptionalism so strong that it can even sacrifice the material artwork to assert itself.

    Of course, some might point out—taking Banksy’s Morons (White) as a point of departure—that Banksy himself invited this destruction. Indeed, at a Sotheby’s auction not so long ago, Banksy had himself already realized the partial destruction of one of his works in an attempt to criticize the art market[10]—a criticism that is evident also in the work of art that Injective Protocol burnt. But the art market takes such avant-garde acts of vandalism in stride, and Banksy’s stunt came to function as evidence for what has been called “the Banksy effect”[11]: your attempt to criticize the art market becomes the next big thing on the art market, and your act of art vandalism in fact pushes up the dollar value of the work of art. If that happens, the writer Ben Lerner argues in an essay about art vandalism titled “Damage Control”,[12] your vandalism isn’t really vandalism: art vandalism that pushes up dollar value isn’t vandalism. Banksy’s stunt was an attempt to make art outside of the art market, but the attempt failed. The sale of the work went through, and a few months later, one can find the partially destroyed artwork on the walls of a museum, reportedly worth three times what it sold for. For Lerner, examples like this open up the question of a work of art outside of capitalism, a work of art from which “the market’s soul has fled”,[13] as he puts it. But as the Banksy example shows, that soul is perhaps less quick to get out than we might think. Over and over again, we see it reassert itself through those very attempts that seek to push it out. One might refer to that as a dialectic—the dialectic of avant-garde attempts to be done with exceptionalist art. Ultimately they realize only one thing: the further institutionalization of exceptionalist art.

    That dialectic has today reached a most peculiar point: the end of art that some, a long time ago, already announced. But none of those arguments reached quite as far as the video of the Authentic Banksy Art Burning Ceremony that was released in March: in it, we are quite literally witnessing the end of the work of art as we know it. It shows us the “slow burn”, as the officiating member of Injective Protocol puts it, through which Banksy’s material work of art—and by extension the material work of art at large—disappears (and has been disappearing). At the same time, this destruction is presented as an act of creation—not so much of a digital image of the Banksy work but of the NFT artwork or the code that authenticates that digital image, authors it, brands it with the code of its owners. So with the destruction of Banksy’s work of art, another work of art is created—the NFT artwork, a work that you cannot feature on your wall (even if its symbolic appendage, the digital image of the Banksy, can be featured on your phone, tablet, or computer and even if some owners of the NFT artwork might decide to materially realize the NFT artwork as a work that can be shown on their walls). But what is the NFT artwork? It strikes one as the artwork narrowed down to its exceptionalist, economic core, the authorship and originality that determine its place on the art market. It is the artwork limited to its economic value, the scarcity and non-fungibility that remain at the core of what we think of as art. This is not so much purposiveness without purpose, as Immanuel Kant famously had it, but non-fungible value as a rewriting of that phrase. Might that have been the occluded truth of Kant’s phrase all along?

    In Kant After Duchamp,[14] which remains one of the most remarkable books of 20th-century art criticism, Thierry de Duve shifted the aesthetic question from “is it beautiful?” (Kant’s question) to “is it art?” (Duchamp’s question, which triggers de Duve’s rereading of Kant’s Critique of Judgment). It seems that today, one might have to shift the question once again, to situate Kant after Mike Winkelmann, the graphic designer/ NFT artist known as Beeple whose NFT collage “Everydays: The First 5000 Days” was sold at a Christie’s auction for $69,346,250. The question with this work is not so much whether it is beautiful, or even whether it is art; what matters here is solely its non-fungible value (how valuable is it, or how valuable might it become?), which would trigger yet another rereading of Kant’s third critique. Shortly after the historic sale of Beeple’s work was concluded, it was widely reported that the cryptocurrency trader who bought the work may have profited financially from the sale, in that the trader had previously been buying many of the individual NFTs that made up Beeple’s collage—individual NFTs that, after the historic sale of the collage, went up significantly in value, thus balancing out the expense of buying the collage and even yielding the trader a profit. What’s interesting here is not the art—Beeple’s work is not good art[15]—but solely the non-fungible value.

    It seems clear that what has thus opened up is another regime of art. In his essay on the copy, Benjamin wrote of the shift from cult value, associated with the fascism of the original, to exhibition value, associated with the communism of the copy. Today, we are witnessing the anachronistic, zombie-like return of cult value within exhibition value, a regime that can be understood as the crypto value of the work of art. That seems evident in the physical token that buyers of Beeple’s NFTs get sent: in its gross materialism—it comes with a cloth to clean the token but that can also be used “to clean yourself up after blasting a hot load in yer pants from how dope this is!!!!!!111”; a certificate of authenticity stating “THIS MOTHERFUCKING REAL ASS SHIT (this is real life mf)”; and a hair sample, “I promise it’s not pubes”–, it functions as a faux cultic object that is meant to mask the emptiness of the NFT. Assuaging the anxieties, perhaps, of the investors placing their moneys into nothing, it also provides interesting insights into the materialisms (masculinist/ sexist, and racist—might we call them alt-right materialisms?) that reassert themselves in the realm of the digital, as part of an attempt to realize exceptionalism in a commons that could have freed itself from it.[16] As the text printed on the physical token has it: “strap on an adult diaper because yer about to be in friggn’ boner world usa motherfucker”.

    NFT-Elitism

    It’s worth asking about the politics of this. I have been clear about the politics of aesthetic exceptionalism: it is associated with the politics of sovereignty, which is a rule of the one, a mon-archy, that potentially tends toward the abusive, the tyrannical, the totalitarian. That is the case, for example, with exceptionalism in Carl Schmitt, even if it does not have to be the case (see, for example, discussions of democratic exceptionalism).[17] With the NFT artwork, the politics of aesthetic exceptionalism is realizing itself in the digital realm, which until now seemed to present a potential threat to it. It has nothing to do with anti-elitism, or populism; it is not about leaving behind art-world snobbery, as some have suggested. It is in fact the very logic of snobbery and elitism that is realizing itself in the NFT artwork, in the code that marks originality, authenticity, authorship, and ownership. Cleverly, snobbery and elitism work their way back in via a path that seems to lead elsewhere. It is the Banksy effect, in politics. The burning of the Banksy is an iconoclastic gesture that preserves the political theology of art that it seems to attack.[18] This is very clear in even the most basic discourse on NFTs, which will praise the NFT’s “democratic” potential—look at how it goes against the elitism of the art world!—while asserting that the entire point of the NFT is that it enables the authentication that once again excludes fakes and forgeries from the art world. Many, if not all, of the problems with art-world elitism continue here.
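    The “code that marks originality, authenticity, authorship and ownership” can be sketched, very loosely, as a minimal ledger. The Python below is a toy illustration only: the class and method names are invented for this sketch, and real NFT contracts (for instance, ERC-721 contracts on Ethereum) work quite differently in their details.

```python
class ToyNFTRegistry:
    """A toy ledger: each token ID is unique (non-fungible) and maps to one owner."""

    def __init__(self):
        self._owners = {}   # token_id -> current owner
        self._minters = {}  # token_id -> original author ("authenticity")

    def mint(self, token_id, author):
        # Minting enforces "originality": a token ID can be created only once.
        if token_id in self._owners:
            raise ValueError("token already exists; originality is enforced")
        self._owners[token_id] = author
        self._minters[token_id] = author

    def transfer(self, token_id, seller, buyer):
        # Ownership changes hands, but the record of the minter persists.
        if self._owners.get(token_id) != seller:
            raise ValueError("only the current owner can sell")
        self._owners[token_id] = buyer

    def owner_of(self, token_id):
        return self._owners[token_id]

    def minter_of(self, token_id):
        return self._minters[token_id]
```

    The point of the sketch is the asymmetry it encodes: token IDs are unique, minting is unrepeatable, and a would-be duplicate is rejected outright. That is the exclusion of fakes and forgeries, written as a data structure.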

    With the description of NFT artworks as derivatives, and their understanding as thoroughly part of the contemporary financial economy, the temptation is of course to understand them as “neoliberal”—and certainly the Banksy burning by a group of “tech and art enthusiasts” (a neoliberal combo if there ever was one) seems to support such a reading. But the peculiar talk about authenticity and originality in the video of the Banksy burning, the surprising mention of “primary value” and its association with the original work of art (which now becomes the NFT artwork, as the video explains), in fact strikes one as strangely antiquated. Indeed, almost everything in the video strikes one as from a different, bygone time: the work, on its easel; the masked speaker, a robber known to me from the tales of my father’s childhood; the flame, slowly working its way around the canvas, which appears to be set up in front of a snowy landscape that one may have seen in a Brueghel. Everything is there to remind us that, through the neoliberal smokescreen, we are in fact seeing an older power at work—that of the “sovereign,” authentic original, the exceptional reality of “primary value” realizing itself through this burning ritual that marks not so much its destruction as its phoenix-like reappearance in the digital realm. In that sense, the burning has something chilling to it, as if it were an ancient ritual marking the migration of sovereign power from the material work of art to the NFT artwork. A transference of the sovereign spirit, if you will, and of the economic soul of the work of art. For anyone who has closely observed neoliberalism, this continued presence of sovereignty in the neoliberal era will not come as a surprise—historians, political theorists, anthropologists, philosophers, and literary critics have shown that it would be a mistake to oppose neoliberalism and sovereignty, whether historically or in the analysis of our contemporary moment. The aesthetic regime of crypto value would rather be a contemporary manifestation of neoliberal sovereignty, or of authoritarian neoliberalism (the presence of Trump in Beeple’s work is worth noting).

    Art historians and artists, however, may be taken aback by how starkly the political truth of art is laid bare here. Reduced to non-fungible value, brought back to its exceptionalist economic core, the political core of the artwork as sovereign stands out in its tension with art’s frequent association with democratic values like openness, equality, and pluralism. As the NFT indicates, democratic values have little to do with it: what matters, at the expense of the material work of art, is the originality and authenticity that enable the artwork to operate as non-fungible value. Part of finance’s fictions, the artwork thus also reveals itself as politically troubling, because it is profoundly rooted in a logic of the one that we remain skeptical of in politics yet continue to celebrate aesthetically. How to block this dialectic, and be done with it? How to think art outside of economic value, and the politics of exceptionalism? How to end not so much art but exceptionalism as art’s structuring logic? How to free art from fascism? The NFT craze, while it doesn’t answer those questions, has the dubious benefit of identifying all of those problems.

    _____

    Arne De Boever teaches in the School of Critical Studies at the California Institute of the Arts and is the author of Finance Fictions: Realism and Psychosis in a Time of Economic Crisis (Fordham University Press, 2017), Against Aesthetic Exceptionalism (University of Minnesota Press, 2019), and other works. His most recent book is François Jullien’s Unexceptional Thought (Rowman & Littlefield, 2020).

    Back to the essay

    _____

    Acknowledgments

    Thanks to Alex Robbins, Jared Varava, Makena Janssen, Kulov, and David Golumbia.

    _____

    Notes

    [1] See: https://injectiveprotocol.com/.

    [2] See: https://news.artnet.com/art-world/financial-traders-burned-banksy-nft-1948855. A video of the burning can be accessed here: https://www.youtube.com/watch?v=C4wm-p_VFh0.

    [3] See: https://hyperallergic.com/624053/nft-art-goes-viral-and-heads-to-auction-but-what-is-it/.

    [4] A simple explanation of cryptoart’s relation to cryptocurrency can be found here: https://www.youtube.com/watch?v=QlgE_mmbRDk.

    [5] Steyerl, Hito. “In Defense of the Poor Image”. e-flux 10 (2009). Available at: https://www.e-flux.com/journal/10/61362/in-defense-of-the-poor-image/.

    [6] See: https://www.isaacjulien.com/projects/playtime/.

    [7] I am echoing here the title of my book Finance Fictions, where I began to theorize some of what is realized by the NFT artwork: Boever, Arne De. Finance Fictions: Realism and Psychosis in a Time of Economic Crisis. New York: Fordham University Press, 2017.

    [8] See: Boever, Arne De. Against Aesthetic Exceptionalism. Minneapolis: University of Minnesota Press, 2019.

    [9] See: Benjamin, Walter. “The Work of Art in the Era of Mechanical Reproduction” In: Benjamin, Walter. Illuminations: Essays and Reflections. Ed. Hannah Arendt. Trans. Harry Zohn. New York: Schocken Books, 1969. 217-251.

    [10] See: https://www.youtube.com/watch?v=vxkwRNIZgdY&feature=emb_title.

    [11] Brenner, Lexa. “The Banksy Effect: Revolutionizing Humanitarian Protest Art”. Harvard International Review XL: 2 (2019): 35-37.

    [12] Lerner, Ben. “Damage Control: The Modern Art World’s Tyranny of Price”. Harper’s Magazine 12/2013: 42-49.

    [13] Lerner, “Damage Control”, 49.

    [14] Duve, Thierry de. Kant After Duchamp. Cambridge: MIT, 1998.

    [15] While such judgments are of course always subjective, this article considers a number of good reasons for judging the work as bad art: https://news.artnet.com/opinion/beeple-everydays-review-1951656#.YFKo4eIE7p4.twitter.

    [16] The emphasis on materialism here is not meant to obscure the materialism of the digital NFT, namely its ecological footprint which is, like that of bitcoin, devastating.

    [17] See Boever, Against Aesthetic Exceptionalism.

    [18] On this, see my: “Iconic Intelligence (Or: In Praise of the Sublamental)”. boundary 2 (forthcoming).

  • Richard Hill — Multistakeholder Internet Governance Still Doesn’t Live Up to Its PR (Review of Palladino and Santaniello, Legitimacy, Power, and Inequalities in the Multistakeholder Internet Governance)

    Richard Hill — Multistakeholder Internet Governance Still Doesn’t Live Up to Its PR (Review of Palladino and Santaniello, Legitimacy, Power, and Inequalities in the Multistakeholder Internet Governance)

    a review of Nicola Palladino and Mauro Santaniello, Legitimacy, Power, and Inequalities in the Multistakeholder Internet Governance: Analyzing IANA Transition (Palgrave MacMillan, 2020)

    by Richard Hill

    ~

    While multistakeholder processes have long existed (see the Annex of this submission to an ITU group), they have recently been promoted as a better alternative to traditional governance mechanisms, in particular at the international level; and Internet governance has been put forward as an example of how multistakeholder processes work well, and better than traditional governmental processes. Thus it is very appropriate that a detailed analysis be made of a recent, highly visible, allegedly multistakeholder process: the process by which the US government relinquished its formal control over the administration of Internet names and addresses. That process was labelled the “IANA transition.”

    The authors are researchers at, respectively, the School of Law and Governance, Dublin City University; and the Internet & Communication Policy Center, Department of Political and Social Studies, University of Salerno, Italy. They have taken part in several national and international research projects on Internet governance, Internet policy, and digital constitutionalism processes. They have methodically examined various aspects of the IANA (Internet Assigned Numbers Authority) transition, and collected and analysed an impressive body of data regarding who actually participated in, and influenced, the transition process. Their research confirms what others have stated, namely that the process was dominated by insiders with vested interests, that the outcome did not resolve long-standing political issues, and that the process cannot by any means be seen as an example of an ideal multistakeholder process—and this despite claims to the contrary by the architects of the IANA transition.

    As the authors put the matter: “For those who believe that the IANA is a business concerning exclusively or primarily ICANN [Internet Corporation for Assigned Names and Numbers], the IETF [Internet Engineering Task Force], the NRO [Numbering Resource Organization], and their respective communities, the IANA transition process could be considered inclusive and fair enough, and its outcome effectively transferring the stewardship over IANA functions to the global stakeholder’s community of reference. For those who believe that the IANA stakeholders extend far beyond the organizations mentioned above, the assessment can only have a negative result” (146). Because “in the end, rather than transferring the stewardship of IANA functions to a new multistakeholder body that controls the IANA operator (ICANN), the transition process allowed the ICANN multistakeholder community to perform the oversight role that once belonged to the NTIA [the US government]” (146). Indeed “in the end, the novel governance arrangements strengthened the position of the registries and the technical community” (148). And the US government could still exercise ultimate control, because “ICANN, the PTI [Post-Transition IANA], and most of the root server organizations remain on US territory, and therefore under US jurisdiction” (149).

    That is, the transition failed to address the key political issue: “the IANA functions are at the heart of the DNS [Domain Name System] and the Internet as we know it. Thus, their governance and performance affect a vast range of actors [other than the technical and business communities involved in the operation of the DNS] that should be considered legitimate stakeholders” (147). Instead, it was one more example of “the rhetorical use of the multistakeholder discourse. In particular, … through a neoliberal discourse, the key organizations already involved in the DNS regime were able to use the ambiguity of the concept of a ‘global multistakeholder community’ as a strategic power resource.” The process thus failed fully to ensure that discussions “take place through an open process with the participation of all stakeholders extending beyond the ICANN community.” While the call for participation in the process was formally open, “its addressees were already identified as specific organizations. It is worth noting that these organizations did not involve external actors in the set-up phase. Rather, they only allowed other interested parties to take part in the discussion according to their rules and with minor participatory rights [speaking, but non-voting, observers]” (148).

    Thus, the authors’ “analysis suggests that the transition did not result in, nor did it lead to, a higher form of multistakeholderism filling the gap between reality and the ideal-type of what multistakeholderism ought to be, according to normative standards of legitimacy. Nor was it able to fix the well-known limitations in inclusiveness, fairness of the decision-making process, and accountability of the entire DNS regime. … Instead, the transition seems to have solidified previous dominant positions and ratified the ownership of an essential public function by a private corporation, led by interwoven economic and technical interests” (149). In particular, “the transition process showed the irrelevance of civil society, little and badly represented in the stakeholder structure before and after the transition” (150). And “multistakeholderism [in this case] seems to have resulted in misleading rhetoric legitimizing power asymmetries embedded within the institutional design of DNS management, rather than in a new governance model capable of ensuring the meaningful participation of all the interested parties.”

    In summary, the IANA transition is one more example of the failure of multistakeholder processes to achieve their desired goal. As the authors correctly note: “Initiatives supposed to be multistakeholder have often been criticized for not complying with their premises, resulting in ‘de-politicization mechanisms that limit political expression and struggle’” (153). Indeed, “While multistakeholderism is used as a rhetoric to solidify and legitimize power positions within some policy-making arena, without any mechanisms giving up power to weaker stakeholders and without making concrete efforts to include different discourses, it will continue to produce ambiguous compromises without decisions, or make decisions affected by a poor degree of pluralism” (153). As others have stated, “‘multistakeholderism reinforces existing power dynamics that have been ‘baked in’ to the model from the beginning. It privileges north-western governments, particularly the US, as well as the US private sector.’ Similarly, … multistakeholderism [can be defined] as a discursive tool employed to create consensus around the hegemony of a power élite” (12). As the authors starkly put the matter, “multistakeholder discourse could result in misleading rhetoric that solidifies power asymmetries and masks domination, manipulation, and hegemonic practices” (26). In particular because “election and engagement procedures often tend to favor an already like-minded set of collective and individual actors even if they belong to different stakeholder categories” (30).

    The above conclusions are supported by detailed, well-referenced descriptions and analyses. Chapters One and Two explain the basic context of the IANA transition, Internet governance, and their relation to multistakeholder processes. Chapter One “points out how multistakeholderism is a fuzzy concept that has led to ambiguous practices and disappointing results. Further, it highlights the discursive and legitimizing nature of multistakeholderism, which can serve both as a performing narrative capable of democratizing the Internet governance domain, as well as a misleading rhetoric solidifying the dominant position of the most powerful actors in different Internet policy-making arenas” (1). It traces the history of multistakeholder governance in the Internet context, which started in 2003 (however, a broader historical context would have been useful, see the Annex of this submission to an ITU group). It discusses the conflict between developed and developing countries regarding the management and administration of domain names and addresses that dominated the discussions at the World Summit on the Information Society (WSIS) (Mueller’s Networks and States gives a more detailed account, explaining how development issues – which were supposed to be the focus of the WSIS – got pushed aside, thus resulting in the focus on Internet governance). As the authors correctly state, “the outcomes of the WSIS left the tensions surrounding Internet governance unresolved, giving rise to contestation in subsequent years and to the cyclical recurrence of political conflicts challenging the consensus around the multistakeholder model” (5). The IANA transition was seen as a way of resolving these tensions, but it relied “on the conflation of the multistakeholder approach with the privatization of Internet governance” (8).

    As the authors posit (citing the well-known scholar Hoffmann), “multistakeholderism is a narrative based on three main promises: the promise of achieving global representation on an issue putting together all the affected parties; the promise of overcoming the traditional democratic deficit at the transnational level, ‘establishing communities of interest as a digitally enabled equivalent to territorial constituencies’; and the promise of higher and enforced outcomes since incorporating global views on the matter through a consensual approach should ensure more complete solutions and their smooth implementation” (10).

    Chapter Three provides a thorough introduction to the management of Internet domain names and addresses and of the issues related to it and to the IANA function, in particular the role of the US government and of US academic and business organizations; the seminal work of the International Ad Hoc Committee (IAHC); the creation and evolution of ICANN; and various criticisms of ICANN, in particular regarding its accountability. (The chapter inexplicably fails to mention the key role of Paul Mockapetris in the creation of the DNS.)

    Chapter Four describes the institutional setup of the IANA transition, the constraints unilaterally imposed by the US government (see also 104), and the various parties that dominated discussions of the issues involved. As the authors note, the call for the creation of the key group went out “without having before voted on the proposed scheme [of the group], neither within the ICANN community nor outside through a further round of public comments” (67). The structure of that group heavily influenced the discussions and the outcome.

    Chapter Five evaluates the IANA transition in terms of one of three types of legitimacy: input legitimacy, that is, whether all affected parties could meaningfully participate in the process (the other two types of legitimacy are discussed in subsequent chapters, see below). By analysing in detail the profiles and affiliations of the participants with decision-making power, the authors find that “a vast majority (56) of the people who have taken part in the drafting of the IANA transition proposal are bearers of technical and operative interests” (87); “Regarding nationality, Western countries appear to be over-represented within the drafting and decisional organism involved in the IANA transition process. In particular, US citizens constitute the most remarkable group, occupying 20 seats over 90 available” (89); and “IANA transition voting members experienced multiple and trans-sectoral affiliations, blurring the boundaries among stakeholder categories” (151). In summary, “the results of this stakeholder analysis seem to indicate that the adopted categorization and appointment procedures have reproduced within the IANA transition process well-known power relationships and imbalances already existing in the DNS management, overrepresenting Western, technical, and business interests while marginalizing developing countries and civil society participation” (90).

    Chapter Six evaluates the transition with respect to process legitimacy: whether all participants could meaningfully affect the outcome. As the authors correctly note, “Stakeholders not belonging to the organizations at the core of the operational communities were called to join the process according to rules and procedures that they had not contributed to creating, and with minor participatory rights” (107). The decision-making process was complex, and undermined the inputs from weaker parties – thus funded, dedicated participants were more influential. Further, key participants were concerned about how the US government would view the outcome, and whether it would approve it (116). And discussions appear to have been restricted to a neoliberal and technical framework (120, 121). As the authors state: “Ultimately, this narrow technical frame prevented the acknowledgment of the public good nature of the IANA functions, and, even more, of their essence as public policy issues” (121). Further, “most members and participants at the CWG-Stewardship had been socialized to the ICANN system, belonging to one of its structures or attending its meetings” and “the long-standing neoliberal plan of the US government and the NTIA to ‘privatize’ the DNS placed the IANA transition within a precise system of definitions, concepts, references, and assumptions that constrained the development of alternative policy discourses and limited the political action of sovereignist and constitutional coalitions” (122).

    Thus, it is not surprising that the authors find that “a single discourse shaped the deliberation. These results contradict the assumptions at the basis of the multistakeholder model of governance, which is supposed to reach a higher and more complete understanding of a particular matter through deliberation among different categories of actors, with different backgrounds, views, and perspectives. Instead, the set of IANA transition voting members in many regards resembled what has been defined as a ‘club governance’ model, which refers to an ‘elite community where the members are motivated by peer recognition and a common goal in line with values they consider honourable’” (151).

    Chapter Seven evaluates the transition with respect to output legitimacy: whether the result achieved its goals of transferring oversight of the IANA function to a global multistakeholder community. As the authors state, “the institutional effectiveness of the IANA transition cannot be evaluated as satisfying from a normative point of view in terms of inclusiveness, balanced representation, and accountability. As a consequence, the ICANN board remains the expression of interwoven business and technical interests and is unlikely to be truly constrained by an independent entity” (135). Further, as shown in detail, “the political problems connected to the IANA functions have been left unresolved, … it did not take a long time before they re-emerged” (153).

    Indeed, “IANA was, first of all, a political matter. Indeed, the transition was settled as a consequence of a political fact – the widespread loss of trust in the USA as the caretaker of the Internet after the Snowden disclosures. Further, the IANA transition process aimed to achieve eminently political goals, such as establishing a novel governance setting and strengthening the DNS’s accountability and legitimacy” (152). However, as the authors explain in detail, the IANA transition was turned into a technical discussion, and “The problem here is that governance settings, such as those described as club governance, base their legitimacy from professional expertise and reputation. They are well-suited to performing some form of ‘technocratic’ governance, addressing an issue with a problem-solving approach based on an already given understanding of the nature of the problem and of the goals to be reached. Sharing a set of overlapping and compatible views is the cue that puts together these networks of experts. Nevertheless, they are ill-suited for tackling political problems, which, by definition, deal with pluralism” (152).

    Chapter Seven could have benefitted from a discussion of ICANN’s new Independent Review Process, and the length of time it has taken to put into place the process to name the panellists.

    Chapter Eight, already summarized above, presents overall conclusions.

    In summary, this is a timely and important book that provides objective data and analyses of a particular process that has been put forward as a model for multistakeholder governance, which itself has been put forth as a better alternative to conventional governance. While there is no doubt that ICANN, and the IANA function, are performing their intended functions, the book shows that the IANA transition was not a model multistakeholder process: on the contrary, it exhibited many of the well-known flaws of multistakeholder processes. Thus it should not be used as a model for future governance.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.

    Back to the essay

  • Zachary Loeb — Does Facebook Have Politics? (Review of Langdon Winner, The Whale and the Reactor, second edition)

    Zachary Loeb — Does Facebook Have Politics? (Review of Langdon Winner, The Whale and the Reactor, second edition)

    a review of Langdon Winner, The Whale and the Reactor: A Search for Limits in an Age of High Technology, second edition (University of Chicago Press, 2020)

    by Zachary Loeb

    ~

    The announcement that Mark Zuckerberg and Priscilla Chan would be donating $300 million to help address some of the challenges COVID-19 poses for the 2020 elections was met with a great deal of derision. The scorn was not directed at the effort to recruit poll workers, or purchase PPE for them, but at the source of the funds. Having profited massively from allowing COVID-19 misinformation to run rampant over Facebook, and having shirked responsibility as the platform exacerbated political tensions, the funding announcement came across not only as too little too late, but as a desperate publicity stunt. The incident was but another installment in Facebook’s tumult as the company (alongside its CEO/founder) continually finds itself cast as a villain. Facebook can take some solace in knowing that other tech companies—Google, Amazon, Uber—are also receiving increasingly negative attention, and yet it seems that for every one critical story about Amazon there are five harsh pieces about Facebook.

    Where Facebook, and Zuckerberg, had once enjoyed laudatory coverage, with the platform being hailed as an ally of democracy, by 2020 it has become increasingly common to see Facebook (and Zuckerberg) treated as democracy’s gravediggers. Indeed, much of the animus found in the increasingly barbed responses to Facebook seems to stem from a sense of betrayal. Many people, including more than a few journalists and scholars, had initially been taken in by Facebook’s promises of a more open and connected world, even if they are loath to admit that they had ever fallen for that ruse now. Certainly, or so the shift in sentiment conveys, Facebook and Zuckerberg deserve to be angrily upbraided and treated with withering skepticism now… but who could have seen this coming?

    “Technologies are not merely aids to human activity, but also powerful forces acting to reshape that activity and its meaning” (6). When those words were first published, in 1986, Mark Zuckerberg was around two years old, and yet those words provide a more concise explanation of Facebook than any Facebook press release or defensive public speech given by Zuckerberg. Granted, those words were not written specifically about Facebook (how could they have been?), but in order to express a key insight about the ways in which technologies impact the societies in which they are deployed. The point being not only to consider how technologies can have political implications, but to emphasize that technologies are themselves political. Or to put it slightly differently, Langdon Winner was warning about Facebook before there was a Facebook to warn about.

    More than thirty years after its initial publication, The University of Chicago Press has released a new edition of Langdon Winner’s The Whale and the Reactor. Considering the frequency with which this book, particularly its second chapter “Do Artifacts Have Politics?,” is still cited today, it is hard to suggest that Winner’s book has been forgotten by scholars. And beyond the academy, those who have spent even a small amount of time reading some of the prominent recent STS or media studies works will have likely come across his name. Therefore, the publication of this second edition—equipped with a new preface, afterword, an additional chapter, and a spiffy red cover—represents an important opportunity to revisit Winner’s work. While its citational staying power suggests that The Whale and the Reactor has become something of an essential touchstone for works on the politics of technological systems, the larger concerns coursing through the book have not lost any of their weight in the years since the book was published.

    For at its core The Whale and the Reactor is not about the types of technologies we are making, but about the type of society we are making.

    Divided into three sections, The Whale and the Reactor wastes no time in laying out its central intervention. Noting that technology had rarely been treated as a serious topic for philosophical inquiry, Winner sets about arguing that an examined life must examine the technological systems that sustain that life. That technology has so often been relegated to the background has given rise to a sort of “technological somnambulism” whereby many “willingly sleepwalk” as the world is technologically reconfigured around them (10). Moving forward in this dreamy state, the sleepers may have some vague awareness of the extent to which these technological systems are becoming interwoven into their daily lives, but by the time they awaken (supposing they ever do awaken) these systems have accumulated sufficient momentum as to make it seemingly impossible to turn them off at all. Though The Whale and the Reactor is not a treatise on somnambulism, this characterization is significant insofar as a sleepwalker is one who staggers through the world in a state of unawareness, and thus cannot be held truly responsible. Contrary to such fecklessness, the argument presented by Winner is that responsibility for the world being remade by technology is shared by all those who live in that world. Sleepwalking is not an acceptable excuse.

    In what is almost certainly the best-known section of the book, Winner considers whether or not artifacts have politics—answering this question strongly in the affirmative. Couching his commentary in a recognition that “Scarcely a new invention comes along that someone doesn’t proclaim it as the salvation of a free society” (20), Winner highlights that social and economic forces leave clear markers on technologies, but he notes that the process works in the opposite direction as well. Two primary ways in which “artifacts can contain political priorities” (22) are explored: firstly, situations wherein a certain artifact is designed in such a way as to settle a particular larger issue; and secondly, technologies that are designed to function within, and reinforce, a certain variety of political organization. As an example of the first variety, Winner describes mechanization at a nineteenth-century reaper manufacturing plant, wherein mechanization was pursued not to produce higher quality or less expensive products, but for the purposes of breaking the power of the factory’s union. An example of the second sort of politics can be seen in the case of atomic weaponry (and nuclear power), wherein the very existence of these technologies necessitates complex organizations of control and secrecy. Though Winner frames the first sort of example as presenting the clearer proof, technologies of the latter sort make a significant impact insofar as they tend to make “moral reasons other than those of practical necessity appear increasingly obsolete” (36) for the political governance of technological systems.

    Inquiring as to the politics of a particular technology provides a means by which to ask questions about the broader society, specifically: what kind of social order gets reified by this technology? One of freedom and equality? One of control and disenfranchisement? Or one that distracts from the maintenance of the status quo by providing the majority with a share in technological abundance? It is easy to avoid answering such questions when you are sleepwalking, and as a result, “without anyone having explicitly chosen it, dependency upon highly centralized organizations has gradually become a dominant social form” (47). That this has not been “explicitly chosen” is partially a result of the dominance of a technologically optimistic viewpoint that has held to “a conviction that all technology—whatever its size, shape, or complexion—is inherently liberating” (50). Though this bright-eyed outlook is periodically challenged by an awareness of the ways that some technologies can create or exacerbate hazards, these dangers wind up being treated largely as hurdles that will be overcome by further technological progress. When all technologies are seen as “inherently liberating” a situation arises wherein “liberation” comes to be seen only in terms of what can be technologically delivered. Thus, the challenge is to ask “What forms of technology are compatible with the kind of society we want to build?” (52) rather than simply assume that we will be content in whatever world we sleepily wander into. Rather than trust that technology will be “inherently liberating,” Winner emphasizes that it is necessary to ask what kinds of technology will be “compatible with freedom, social justice, and other key political ends” (55), and to pursue those technologies.

    Importantly, a variety of people and groups have been aware of the need to push for artifacts that more closely align with their political ideals, though these responses have taken on a range of forms. Instead of seeing technology as deeply intertwined with political matters, some groups saw technology as a way of getting around political issues: why waste time organizing for political change when microcomputers and geodesic domes can allow you to build that alternative world here and now? In contrast to this consumeristic, individualistically oriented attitude (exemplified by works such as the Whole Earth Catalog), there were also efforts to ask broader political questions about the nature of technological systems, such as the “appropriate technology” movement (which grew up around E.F. Schumacher’s Small is Beautiful). Yet such attempts already seem to belong to the past, rearguard actions that meekly tried to resist the increasing dominance of complex technical systems. As the long seventies shifted into the 1980s and increasing technological centralization became evident, such movements came to appear as romantic gestures towards the dream of decentralization. And though the longing for escape from centralized control persists, the direction “technological ‘progress’ has followed” is one in which “people find themselves dependent upon a great many large, complex systems whose centers are, for all practical purposes, beyond their power to influence” (94).

    Perhaps no technology simultaneously demonstrates the tension between the dream of decentralization and the growth of control quite like the computer. Written in the midst of what was being hailed as “the computer revolution” or the “information revolution” (98), The Whale and the Reactor bore witness to the exuberance with which the computer was greeted even as this revolution remained “conspicuously silent about its own ends” (102). Though it was not entirely clear what problem the computer was the solution to, there was still a clear sentiment that the computer had to be the solution to most problems. “Mythinformation” is the term Winner deploys to capture this “almost religious conviction that a widespread adoption of computers and communications systems along with easy access to electronic information will automatically produce a better world for human living” (105). Yet “mythinformation” performs technological politics in inverse order: instead of deciding on political goals and then seeking out the right technological forms for achieving those goals, it takes a technology (the computer) and then seeks to rearrange political problems in such a way as to make them appear as though they can be addressed by that technology. Thus, “computer romantics” hold to the view that “increasing access to information enhances democracy and equalizes social power” (108), less as a reflection of the way that political power works and more as a response to the fact that “increasing access to information” is one of the things that computers do well. Despite the equalizing hopes, earnest though they may have been, that were popular amongst the “computer romantics,” the trends that were visible early in “the computer revolution” gave ample reason to believe that the main result would be “an increase in power by those who already had a great deal of power” (107). Indeed, contrary to the liberatory hopes that were pinned on “the computer revolution,” the end result might be one wherein “confronted with omnipresent, all-seeing data banks, the populace may find passivity and compliance the safest route, avoiding activities that once represented political liberty” (115).

    Considering the overwhelming social forces working in favor of unimpeded technological progress, there are nevertheless a few factors that have been legitimated as reasons for arguing for limits. While there is a long trajectory of theorists and thinkers who have mulled over the matter of ecological despoilment, and while environmental degradation is a serious concern, “the state of nature” represents a fraught way to consider technological matters. For some, the environment has become little more than standing reserve to be exploited, while others have formed an almost mystical attachment to an imagination of pristine nature; in this context “ideas about things natural must be examined and criticized” as well (137). Related to environmental matters are concerns that take as their catchword “risk,” and which attempt to reframe the discussion away from hopes and towards potential dangers. Yet, in addition to cultural norms that praise certain kinds of “risk-taking,” a focus on risk assessment tends to frame situations in terms of tradeoffs wherein one must balance dangers against potential benefits—with the result being that the recontextualized benefit is generally perceived as being worth it. If the environment and risk are unsatisfactory grounds on which to argue for limits, so too is the very notion of “human values,” which “acts like a lawn mower that cuts flat whole fields of meaning and leaves them characterless” (158).

    In what had originally been The Whale and the Reactor’s last chapter, Winner brought himself fully into the discussion—recalling how it was that he came to be fascinated with these issues, and commenting on the unsettling juxtaposition he felt while seeing a whale swimming not far from the nuclear reactor at Diablo Canyon. It is a chapter that critiques the attitude towards technology that Winner saw in many of his fellow citizens: one of people having “gotten used to having the benefits of technological conveniences without expecting to pay the costs” (171). This sentiment is still fully on display more than thirty years later, as Winner shifts his commentary (in a new chapter for this second edition) to the age of Facebook and the Trump Presidency. Treating the techno-utopian promises that had surrounded the early Internet as another instance of technology being seen as “inherently liberating,” Winner does not seem particularly surprised by the way that the Internet and social media are revealing that they “could become a seedbed for concentrated, ultimately authoritarian power” (189). In response to the “abuses of online power,” and beneath all of the glitz and liberating terminology that is affixed to the Internet, “it is still the concerns of consumerism and techno-narcissism that are emphasized above all” (195). Though the Internet had been hailed as a breakthrough, it has wound up leading primarily to breakdown.

    Near the book’s outset, Winner observes how “In debates about technology, society, and the environment, an extremely narrow range of concepts typically defines the realm of acceptable discussion” (xii), and it is those concepts that he wrestles with over the course of The Whale and the Reactor. And the point that Winner returns to throughout the volume is that technological choices—whether they are the result of active choice or a result of our “technological somnambulism”—are not just about technology. Rather, “What appear to be merely instrumental choices are better seen as choices about the form of social and political life a society builds, choices about the kinds of people we want to become” (52).

    Or, to put it a slightly different way, if we are going to talk about the type of technology we want, we first need to talk about the type of society we want, whether the year is 1986 or 2020.

    *

    Langdon Winner began his foreword to the 2010 edition of Lewis Mumford’s Technics and Civilization with the comment that “Anyone who studies the human dimensions of technological change must eventually come to terms with Lewis Mumford.” And it may be fair to note, in a similar vein, that anyone who studies the political dimensions of technological change must eventually come to terms with Langdon Winner. The staying power of The Whale and the Reactor is something which Winner acknowledges with a note of slightly self-deprecating humor, in the foreword to the book’s second edition, where he comments “At times, it seems my once bizarre heresy has finally become a weary truism” (vii).

    Indeed, to claim in 2020 that artifacts have politics is not to make a particularly radical statement. That statement has been affirmed enough times as to hardly make it a question that needs to be relitigated. Yet the second edition of The Whale and the Reactor is not a victory lap wherein Winner crows that he was right, nor is it the ashen lamentation of a Cassandra glumly observing that what they feared has transpired. Insofar as The Whale and the Reactor deserves this second edition, and to be clear it absolutely deserves this second edition, it is because the central concerns animating the book remain just as vital today.

    While the second edition contains a smattering of new material, the vast majority of the book remains as it originally was. As a result, the book undergoes that strange kind of alchemy whereby a secondary source slowly transforms into a primary source—insofar as The Whale and the Reactor can now be treated as a document showing how, at least some, scholars were making sense of “the computer revolution” while in the midst of it. The book’s first third, which contains the “Do Artifacts Have Politics?” chapter, has certainly aged the best, and the expansiveness with which Winner addresses the question of politics and technology makes it clear why those early chapters remain so widely read, while ensuring that these chapters have a certain timeless quality to them. However, as it shifts into its exploration of “Technology: Reform and Revolution,” the book does reveal its age. Read today, the commentary on “appropriate technology” comes across more as a historical curio than as an exploration of the shortcomings of a recently failed experiment. It feels somewhat odd to read Winner’s comments on “the state of nature,” bereft as they are of any real mention of climate change. And though Winner could have written in 1986 that technology was frequently overlooked as a topic deserving of philosophical scrutiny, today there are many works responding to that earlier lack (and many of those works even cite Winner). While Winner certainly cannot be faulted for not seeing the future, what makes some of these chapters feel particularly dated is that in many other places Winner excelled so remarkably at seeing the future.

    The chapter on “Mythinformation” stands as an excellent critical snapshot of the mid-80s enthusiasm that surrounded “the computer revolution,” with Winner skillfully noting how the utopian hopes surrounding computers were just the latest in the well-worn pattern wherein every new technology is seen as “inherently liberating.” In writing on computers, Winner does important work in separating the basics of what these machines literally can do, from the sorts of far-flung hopes that their advocates attached to them. After questioning whether the issues facing society are genuinely ones that boil down to access to information, Winner noted that it was more than likely that the real impact of computers would be to help those in control stay in control. As he puts it, “if there is to be a computer revolution, the best guess is that it will have a distinctively conservative character” (107). In 1986, it may have been necessary to speak of this in terms of a “best guess,” and such comments may have met with angry responses from a host of directions, but in 2020 it seems fairly clear that Winner’s sense of what the impact of computers would be was not wrong.

    Considering the directions that widespread computerization would push societies, Winner hypothesized that it could lead to a breakdown in certain kinds of in-person contact and make it so that people would “become even more susceptible to the influence of employers, news media, advertisers, and national political leaders” (116). And moving to the present, in the second edition’s new chapter, Winner observes that despite the shiny toys of the Internet the result has been one wherein people “yield unthinkingly to various kinds of encoded manipulation (especially political manipulation), varieties of misinformation, computational propaganda, and political malware” (187). It is not that The Whale and the Reactor comes out to openly declare “don’t tell me that you weren’t warned,” but there is something about the second edition being published now that feels like a pointed reminder. As former techno-optimists rebrand as techno-skeptics, the second edition is a reminder that some people knew to be wary from the beginning. Some may anxiously bristle as the CEOs of tech giants testify before Congress, some may feel a deep sense of disappointment every time they see yet another story about Facebook’s malfeasance, but The Whale and the Reactor is a reminder that these problems could have been anticipated. If we are unwilling to truly confront the politics of technologies when those technologies are new, we may find ourselves struggling to deal with the political impacts of those technologies once they have wreaked havoc.

    Beyond the book’s classic posing of the important “do artifacts have politics?” question, the present collision between technology and politics helps draw attention to a deeper matter running through The Whale and the Reactor. Namely, that the book keeps coming back to the idea of democracy. Indeed, The Whale and the Reactor shows a refreshingly stubborn commitment to this idea. Technology clearly matters here, and technologies are taken very seriously throughout, but Winner keeps returning to democracy. In commenting on the ways in which artifacts have politics, the examples that Winner explores are largely ones wherein technological systems are put in place that entrench the political authority of a powerful minority, or which require the development of regimes that exceed democratic control. For Winner, democracy (and being a participant in a democracy) is an active process, one that cannot be replaced by “passive monitoring of electronic news and information” which “allows citizens to feel involved while dampening the desire to take an active part” (111). Insofar as “the vitality of democratic politics depends upon people’s willingness to act together in pursuit of their common ends” (111), a host of technological systems have been put in place that seem to have simultaneously sapped “people’s willingness” while also breaking down a sense of “common ends.” And though the Internet may trigger some nostalgic memory of active democracy, it is only a “pseudopublic realm” wherein the absence of the real conditions of democracy “helps generate wave after wave of toxic discourse along with distressing patterns of oligarchical rule, incipient authoritarianism, and governance by phonies and confidence men” (192).

    Those who remain committed to arguing for the liberatory potential of computers and the Internet, a group which includes individuals from a range of perspectives, might justifiably push back against Winner by critiquing the vision of democracy he celebrates. After all, there is something rather romantic about Winner’s evocations of New England town hall meetings and his comments on the virtues of face-to-face encounters. Do all participants in such encounters truly get to participate equally? Are such situations even set up so that all people can participate equally? What sorts of people and what modes of participation are privileged by such a model of democracy? Is a New England town hall meeting really a model for twenty-first century democracy? Here it is easy to picture Winner responding that what such questions reveal is the need to create technologies that will address those problems—and where a split may then open up is around the question of whether or not computers and the Internet represent such tools. That “technologies are not merely aids to human activity, but also powerful forces acting to reshape that activity and its meaning” (6) opens up a space in which different technologies can be built, even as other technologies can be dismantled, but such a recognition forces us to look critically at our technologies and truly confront the type of world that we are making and reinforcing for each other. And, in terms of computers and the Internet, the question that The Whale and the Reactor forces to the fore is one of: which are we putting first, computers or democracy?

    Winner warned his readers of the dangers of “technological somnambulism,” but it unfortunately seems that his call was not sufficient to wake up the sleepers in his midst in the 1980s. Alas, that The Whale and the Reactor remains so strikingly relevant is partially a testament to the sleepwalkers’ continual slouch into the future. And though there may be some hopeful signs of late that more and more people are groggily stirring and rubbing the slumber from their eyes—the resistance to facial recognition is certainly a hopeful sign—a danger persists that many will conclude that, since they have reached this spot, they must figure out some way to justify being here. After all, few want to admit that they have been sleepwalking. What makes The Whale and the Reactor worth revisiting today is not only that Winner asks the question “do artifacts have politics?” but the way in which, in responding to this question, he is willing to note that there are some artifacts that have bad politics. That there are some artifacts that do not align with our political goals and values. And what’s more, that when we are confronted with such artifacts, we do not need to pretend that they are our friends just because they have rearranged our society in such a way that we have no choice but to use them.

    In the foreword to the first edition of The Whale and the Reactor, Winner noted “In an age in which the inexhaustible power of scientific technology makes all things possible, it remains to be seen where we will draw the line, where we will be able to say, here are the possibilities that wisdom suggests we avoid” (xiii). For better, or quite likely for worse, that still remains to be seen today.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focusses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2 Review Digital Studies section.
