b2o: boundary 2 online

Reviews and analysis of scholarly books about digital technology and culture, as well as of articles, legal proceedings, videos, social media, digital humanities projects, and other emerging digital forms, offered from a humanist perspective, in which our primary intellectual commitment is to the deeply embedded texts, figures, themes, and politics that constitute human culture, regardless of the medium in which they occur.

  • Arne De Boever  — The End of Art (Once Again)


    by Arne De Boever

    ~

    Where they burn books, they will also ultimately burn people.
    —Heinrich Heine

    You Morons

    In early March 2021, a group of “tech and art enthusiasts” who make up the company Injective Protocol[1] burnt Banksy’s work Morons (White) (2006), which they had previously acquired from Tagliatella Galleries for $95,000.[2] At first sight, the burning could be read as performance art in the spirit of Banksy’s Morons (White), which shows an art auction where a canvas featuring the text “I CAN’T BELIEVE YOU MORONS ACTUALLY BUY THIS SHIT” is up for sale (and going for $750,450). As such, the performance would push further Banksy’s own criticism of the art market, a market whose dialectic has easily reappropriated Banksy’s criticism as part of its norm and turned it into economic value. The burning of the Banksy would then seek to more radically negate the value of the work of art that Banksy’s Morons (White) challenges but cannot quite escape as long as it remains a valuable work of art.

    However, such negation was not the goal of the burning. As the tech and art enthusiast who set the Banksy aflame explained, the burning was in fact accomplished as part of a financial investment, and to inspire other artists. In other words, the burning in fact confirmed the art market’s norm rather than challenging it, and it encouraged other artists to make work that does the same. You see, before Banksy’s Morons (White) was burnt, Injective Protocol had recorded the work as what is called a non-fungible token, or NFT, in the blockchain. This means that for the work’s digital image, a unique, original code was created; that code—which is what you buy when you buy an NFT—is the new, original NFT artwork, henceforth owned by Injective Protocol even if digital copies of Banksy’s Morons (White) of course still circulate as mere symbols of that code.[3] Such ownership, and the financial investment as which it was intended, required the burning of the material Banksy because Injective Protocol sought to relocate the primary value of the work into the NFT artwork—something that could only be accomplished if the original Banksy was destroyed. The goal of the burning was thus to relocate the value of the original in the derivative, which had a bigger financial potential than the original Banksy.

    The Banksy burning was perhaps an unsurprising development for those who have an interest in art and cryptocurrencies and have been following the rise of cryptoart. Cryptoart is digital art that is recorded in the blockchain as an NFT. That makes cryptoart “like” bitcoin, which is similarly recorded in the blockchain: each bitcoin is tied to a unique, original code that is recorded in a digital ledger where all the transactions of bitcoin are tracked. As an NFT, a digital artwork is similarly tied to a unique, original code that marks its provenance. The main difference between bitcoin and an NFT is that the former, as currency, is fungible, whereas the latter, as art, is not.[4] Now, NFTs were initially created “next to” already existing non-digital art, as a way to establish provenance for digital images and artworks. But as such images and artworks began to accrue value, and began to comparatively accrue more value than already existing non-digital art, the balance in the art market shifted, and NFTs came to be considered more valuable investments than already existing works of non-digital art.
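    For readers unfamiliar with the technical distinction, the fungible/non-fungible difference described above can be put schematically. The following sketch is a deliberately naive illustration—all names, codes, and values are hypothetical, and no real blockchain works this simply—of the essay’s point that fungible value is an interchangeable quantity, while a non-fungible token is a unique code whose provenance a ledger tracks:

    ```python
    # A toy sketch of the fungible/non-fungible distinction. Purely
    # illustrative and hypothetical: actual NFT ledgers (e.g. those
    # following Ethereum's ERC-721 standard) differ greatly in detail.

    # Fungible value (bitcoin-like): units are interchangeable amounts.
    # Paying 5 units out of 100 leaves 95; *which* units is meaningless.
    def pay(balance: float, amount: float) -> float:
        return balance - amount

    # Non-fungible value (NFT-like): each token is a unique code whose
    # provenance the ledger records; tokens are identified, not counted.
    ledger = {
        "0x01": "Injective Protocol",   # e.g. the tokenized Morons (White)
        "0x02": "another collector",
    }

    def transfer(token_id: str, new_owner: str) -> None:
        # Ownership changes by reassigning the unique code, not by
        # moving an interchangeable quantity.
        ledger[token_id] = new_owner

    assert pay(100.0, 5.0) == 95.0          # any 5 units would do
    transfer("0x01", "a new buyer")
    assert ledger["0x01"] == "a new buyer"  # this token, and only this one
    ```

    The sketch also makes visible why, as the essay argues, the “artwork” here is reduced to its code: nothing in the ledger is the image itself, only a unique identifier and an owner.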

    The burning of Banksy’s Morons (White) was the obvious next step in that development: let us replace the already existing work of non-digital art with an NFT, destroy the already existing work of non-digital art, and relocate the value of the work into the NFT as part of a financial investment. It realizes the dialectic of an art market that will not hesitate to destroy an already existing non-digital work of art (and replace it with an NFT) if it will drive up financial value. The auction houses that have sold NFTs are complicit in this process.

    Crypto Value = Exhibition Value + Cult Value

    The digital may at some point have held the promise of a move away from exceptionalism—the belief that the artist and the work of art are exceptional, which is tied to theories of the artist as genius and to the unresolved role of the fake and the forgery in art history—as the structuring logic of our understanding of the artist and the work of art. The staged burning of the Banksy does not so much realize that promise as relocate the continued dominance of exceptionalism—and its ties to capitalism, even if the work of art is of course an exceptional commodity that does not truly fit the capitalist framework—in the digital realm. The promise of what artist and philosopher Hito Steyerl theorized as “the poor image”[5] is countered in the NFT as a decidedly “rich image”, or rather, as the rich NFT artwork (because we need to distinguish between the NFT artwork/the code and the digital image, a mere symbol that is tied to the code). Art, which in the part of its history that started with conceptual art in the early 1970s had begun to realize itself—parallel to the rise of finance and neoliberalism—as a financial instrument, with material artworks functioning as means to hedge against market crashes (as James Franco’s character in Isaac Julien’s Playtime [2014] discusses[6]), has finally left the burden of its materiality behind to become a straight-up financial instrument, a derivative that has some similarities to a cryptocurrency like bitcoin. Art has finally realized itself as what it is: non-fungible value, one of finance’s fictions.[7]

    Although the video of the Banksy burning might shock, and make one imagine (because of its solicitation to other tech enthusiasts and artists) an imminent future in which all artworks will be burnt so as to relocate their primary value in an NFT tied to the artwork’s digital image, such a future would actually not differ all that much from today. Indeed, we are merely talking about a relocation of value, about a relocation of the art market. The market’s structure, value’s structure, remain the same. In fact, the NFT craze demonstrates how the artwork’s structuring logic, what I have called aesthetic exceptionalism,[8] realizes itself in the realm of the digital where, for a brief moment, one may have thought it could have died. Indeed, media art and digital art more specifically seemed to hold the promise of an art that would be more widely circulated, where the categories of authorship, value, and ownership were less intimately connected, and could perhaps even—see Steyerl; but the argument goes back to Walter Benjamin’s still influential essay on the copy[9]—enable a communist politics. Such a communist politics would celebrate the copy against the potentially fascist values of authenticity, creativity, originality, and eternal value that Benjamin brings up at the beginning of his essay. But no: with the NFT, those potentially fascist values are in fact realizing themselves once again in the digital realm, and in a development that Benjamin could not have foreseen, “the aura” becomes associated with the NFT artwork—not even the digital image of an artwork but the code under which the image is recorded in the blockchain. Because the NFT artwork is a non-fungible token, one could argue that it is even more of an original than the digital currencies with which it is associated. After all, bitcoin is still a medium of exchange, whereas an NFT is not. In the same way that art is not money, an NFT is not bitcoin, even if the NFT needs to be understood (as I suggested previously) as one of finance’s fictions.

    What’s remarkable here is not so much that a Banksy is burnt, or that other artworks may in the future be burnt. What’s remarkable is the power of aesthetic exceptionalism: an exceptionalism so strong that it can even sacrifice the material artwork to assert itself.

    Of course, some might point out—taking Banksy’s Morons (White) as a point of departure—that Banksy himself invited this destruction. Indeed, at a Sotheby’s auction not so long ago, Banksy had himself already realized the partial destruction of one of his works in an attempt to criticize the art market[10]—a criticism that is evident also in the work of art that Injective Protocol burnt. But the art market takes such avant-garde acts of vandalism in stride, and Banksy’s stunt came to function as evidence for what has been called “the Banksy effect”[11]: your attempt to criticize the art market becomes the next big thing on the art market, and your act of art vandalism in fact pushes up the dollar value of the work of art. If that happens, the writer Ben Lerner argues in an essay about art vandalism titled “Damage Control”,[12] your vandalism isn’t really vandalism: art vandalism that pushes up dollar value isn’t vandalism. Banksy’s stunt was an attempt to make art outside of the art market, but the attempt failed. The sale of the work went through, and a few months later, one could find the partially destroyed artwork on the walls of a museum, reportedly worth three times what it sold for. For Lerner, examples like this open up the question of a work of art outside of capitalism, a work of art from which “the market’s soul has fled”,[13] as he puts it. But as the Banksy example shows, that soul is perhaps less quick to get out than we might think. Over and over again, we see it reassert itself through those very attempts that seek to push it out. One might refer to that as a dialectic—the dialectic of avant-garde attempts to be done with exceptionalist art. Ultimately they realize only one thing: the further institutionalization of exceptionalist art.

    That dialectic has today reached a most peculiar point: the end of art that some, a long time ago, already announced. But none of those arguments reached quite as far as the video of the Authentic Banksy Art Burning Ceremony that was released in March: in it, we are quite literally witnessing the end of the work of art as we know it. It shows us the “slow burn”, as the officiating member of Injective Protocol puts it, through which Banksy’s material work of art—and by extension the material work of art at large—disappears (and has been disappearing). At the same time, this destruction is presented as an act of creation—not so much of a digital image of the Banksy work but of the NFT artwork or the code that authenticates that digital image, authors it, brands it with the code of its owners. So with the destruction of Banksy’s work of art, another work of art is created—the NFT artwork, a work that you cannot feature on your wall (even if its symbolic appendage, the digital image of the Banksy, can be featured on your phone, tablet, or computer and even if some owners of the NFT artwork might decide to materially realize the NFT artwork as a work that can be shown on their walls). But what is the NFT artwork? It strikes one as the artwork narrowed down to its exceptionalist, economic core, the authorship and originality that determine its place on the art market. It is the artwork limited to its economic value, the scarcity and non-fungibility that remain at the core of what we think of as art. This is not so much purposiveness without purpose, as Immanuel Kant famously had it, but non-fungible value as a rewriting of that phrase. Might that have been the occluded truth of Kant’s phrase all along?

    In Kant After Duchamp,[14] which remains one of the most remarkable books of 20th-century art criticism, Thierry de Duve shifted the aesthetic question from “is it beautiful?” (Kant’s question) to “is it art?” (Duchamp’s question, which triggers de Duve’s rereading of Kant’s Critique of Judgment). It seems that today, one might have to shift the question once again, to situate Kant after Mike Winkelmann, the graphic designer/NFT artist known as Beeple whose NFT collage “Everydays: The First 5000 Days” was sold at a Christie’s auction for $69,346,250. The question with this work is not so much whether it is beautiful, or even whether it is art; what matters here is solely its non-fungible value (how valuable is it, or how valuable might it become?), which would trigger yet another rereading of Kant’s third critique. Shortly after the historic sale of Beeple’s work was concluded, it was widely reported that the cryptocurrency trader who bought the work may have profited financially from the sale, in that the trader had previously been buying many of the individual NFTs that made up Beeple’s collage—individual NFTs that, after the historic sale of the collage, went up significantly in value, thus balancing out the expense of buying the collage and even yielding the trader a profit. What’s interesting here is not the art—Beeple’s work is not good art[15]—but solely the non-fungible value.

    It seems clear that what has thus opened up is another regime of art. In his essay on the copy, Benjamin wrote of the shift from cult value, associated with the fascism of the original, to exhibition value, associated with the communism of the copy. Today, we are witnessing the anachronistic, zombie-like return of cult value within exhibition value, a regime that can be understood as the crypto value of the work of art. That seems evident in the physical token that buyers of Beeple’s NFTs get sent. In its gross materialism—it comes with a cloth to clean the token, but one that can also be used “to clean yourself up after blasting a hot load in yer pants from how dope this is!!!!!!111”; a certificate of authenticity stating “THIS MOTHERFUCKING REAL ASS SHIT (this is real life mf)”; and a hair sample (“I promise it’s not pubes”)—it functions as a faux cultic object that is meant to mask the emptiness of the NFT. Assuaging the anxieties, perhaps, of the investors placing their money into nothing, it also provides interesting insights into the materialisms (masculinist/sexist, and racist—might we call them alt-right materialisms?) that reassert themselves in the realm of the digital, as part of an attempt to realize exceptionalism in a commons that could have freed itself from it.[16] As the text printed on the physical token has it: “strap on an adult diaper because yer about to be in friggn’ boner world usa motherfucker”.

    NFT-Elitism

    It’s worth asking about the politics of this. I have been clear about the politics of aesthetic exceptionalism: it is associated with the politics of sovereignty, which is a rule of the one, a mon-archy, that potentially tends toward the abusive, the tyrannical, the totalitarian. That is the case for example with exceptionalism in Carl Schmitt, even if it does not have to be the case (see for example discussions of democratic exceptionalism).[17] With the NFT artwork, the politics of aesthetic exceptionalism is realizing itself in the digital realm, which until now seemed to present a potential threat to it. It has nothing to do with anti-elitism, or populism; it is not about leaving behind art-world snobbery, as some have suggested. It is in fact the very logic of snobbery and elitism that is realizing itself in the NFT artwork, in the code that marks originality, authenticity, authorship, and ownership. Cleverly, snobbery and elitism work their way back in via a path that seems to lead elsewhere. It is the Banksy effect, in politics. The burning of the Banksy is an iconoclastic gesture that preserves the political theology of art that it seems to attack.[18] This is very clear in even the most basic discourse on NFTs, which praises the NFT’s “democratic” potential—look at how it goes against the elitism of the art world!—while asserting that the entire point of the NFT is that it enables the authentication that once again excludes fakes and forgeries from the art world. Many, if not all, of the problems with art-world elitism continue here.

    With the description of NFT artworks as derivatives, and their understanding as thoroughly part of the contemporary financial economy, the temptation is of course to understand them as “neoliberal”—and certainly the Banksy burning by a group of “tech and art enthusiasts” (a neoliberal combo if there ever was one) seems to support such a reading. But the peculiar talk about authenticity and originality in the video of the Banksy burning, the surprising mention of “primary value” and its association with the original work of art (which now becomes the NFT artwork, as the video explains), in fact strikes one as strangely antiquated. Indeed, almost everything in the video strikes one as from a different, bygone time: the work, on its easel; the masked speaker, a robber known to me from the tales of my father’s childhood; the flame, slowly working its way around the canvas, which appears to be set up in front of a snowy landscape that one may have seen in a Brueghel. Everything is there to remind us that, through the neoliberal smokescreen, we are in fact seeing an older power at work—that of the “sovereign”, authentic original, the exceptional reality of “primary value” realizing itself through this burning ritual that marks not so much its destruction as its phoenix-like reappearance in the digital realm. In that sense, the burning has something chilling to it, as if it is an ancient ritual marking the migration of sovereign power from the material work of art to the NFT artwork. A transference of the sovereign spirit, if you will, and the economic soul of the work of art. For anyone who has closely observed neoliberalism, this continued presence of sovereignty in the neoliberal era will not come as a surprise—historians, political theorists, anthropologists, philosophers, and literary critics have shown that it would be a mistake to oppose neoliberalism and sovereignty historically, and in the analysis of our contemporary moment.
The aesthetic regime of crypto value would rather be a contemporary manifestation of neoliberal sovereignty or of authoritarian neoliberalism (the presence of Trump in Beeple’s work is worth noting).

    Art historians and artists, however, may be taken aback by how starkly the political truth of art is laid bare here. Reduced to non-fungible value, brought back to its exceptionalist economic core, the political core of the artwork as sovereign stands out in its tension with art’s frequent association with democratic values like openness, equality, and pluralism. As the NFT indicates, democratic values have little to do with it: what matters, at the expense of the material work of art, is the originality and authenticity that enable the artwork to operate as non-fungible value. Part of finance’s fictions, the artwork thus also reveals itself as politically troubling because it is profoundly rooted in a logic of the one that, while we are skeptical of it in politics, we continue to celebrate aesthetically. How to block this dialectic, and be done with it? How to think art outside of economic value, and the politics of exceptionalism? How to end not so much art but exceptionalism as art’s structuring logic? How to free art from fascism? The NFT craze, while it doesn’t answer those questions, has the dubious benefit of identifying all of those problems.

    _____

    Arne De Boever teaches in the School of Critical Studies at the California Institute of the Arts and is the author of Finance Fictions: Realism and Psychosis in a Time of Economic Crisis (Fordham University Press, 2017), Against Aesthetic Exceptionalism (University of Minnesota Press, 2019), and other works. His most recent book is François Jullien’s Unexceptional Thought (Rowman & Littlefield, 2020).

    Back to the essay

    _____

    Acknowledgments

    Thanks to Alex Robbins, Jared Varava, Makena Janssen, Kulov, and David Golumbia.

    _____

    Notes

    [1] See: https://injectiveprotocol.com/.

    [2] See: https://news.artnet.com/art-world/financial-traders-burned-banksy-nft-1948855. A video of the burning can be accessed here: https://www.youtube.com/watch?v=C4wm-p_VFh0.

    [3] See: https://hyperallergic.com/624053/nft-art-goes-viral-and-heads-to-auction-but-what-is-it/.

    [4] A simple explanation of cryptoart’s relation to cryptocurrency can be found here: https://www.youtube.com/watch?v=QlgE_mmbRDk.

    [5] Steyerl, Hito. “In Defense of the Poor Image”. e-flux 10 (2009). Available at: https://www.e-flux.com/journal/10/61362/in-defense-of-the-poor-image/.

    [6] See: https://www.isaacjulien.com/projects/playtime/.

    [7] I am echoing here the title of my book Finance Fictions, where I began to theorize some of what is realized by the NFT artwork: Boever, Arne De. Finance Fictions: Realism and Psychosis in a Time of Economic Crisis. New York: Fordham University Press, 2017.

    [8] See: Boever, Arne De. Against Aesthetic Exceptionalism. Minneapolis: University of Minnesota Press, 2019.

    [9] See: Benjamin, Walter. “The Work of Art in the Age of Mechanical Reproduction.” In: Benjamin, Walter. Illuminations: Essays and Reflections. Ed. Hannah Arendt. Trans. Harry Zohn. New York: Schocken Books, 1969. 217-251.

    [10] See: https://www.youtube.com/watch?v=vxkwRNIZgdY&feature=emb_title.

    [11] Brenner, Lexa. “The Banksy Effect: Revolutionizing Humanitarian Protest Art”. Harvard International Review XL: 2 (2019): 35-37.

    [12] Lerner, Ben. “Damage Control: The Modern Art World’s Tyranny of Price”. Harper’s Magazine (December 2013): 42-49.

    [13] Lerner, “Damage Control”, 49.

    [14] Duve, Thierry de. Kant After Duchamp. Cambridge, MA: MIT Press, 1996.

    [15] While such judgments are of course always subjective, this article considers a number of good reasons for judging the work as bad art: https://news.artnet.com/opinion/beeple-everydays-review-1951656#.YFKo4eIE7p4.twitter.

    [16] The emphasis on materialism here is not meant to obscure the materialism of the digital NFT, namely its ecological footprint which is, like that of bitcoin, devastating.

    [17] See Boever, Against Aesthetic Exceptionalism.

    [18] On this, see my: “Iconic Intelligence (Or: In Praise of the Sublamental)”. boundary 2 (forthcoming).

  • Richard Hill — Multistakeholder Internet Governance Still Doesn’t Live Up to Its PR (Review of Palladino and Santaniello, Legitimacy, Power, and Inequalities in the Multistakeholder Internet Governance)


    a review of Nicola Palladino and Mauro Santaniello, Legitimacy, Power, and Inequalities in the Multistakeholder Internet Governance: Analyzing IANA Transition (Palgrave MacMillan, 2020)

    by Richard Hill

    ~

    While multistakeholder processes have long existed (see the Annex of this submission to an ITU group), they have recently been promoted as a better alternative to traditional governance mechanisms, in particular at the international level; and Internet governance has been put forward as an example of how multistakeholder processes work well, and better than traditional governmental processes. Thus it is very appropriate that a detailed analysis be made of a recent, highly visible, allegedly multistakeholder process: the process by which the US government relinquished its formal control over the administration of Internet names and addresses. That process was labelled the “IANA transition.”

    The authors are researchers at, respectively, the School of Law and Governance, Dublin City University; and the Internet & Communication Policy Center, Department of Political and Social Studies, University of Salerno, Italy. They have taken part in several national and international research projects on Internet governance, Internet policy, and digital constitutionalism processes. They have methodically examined various aspects of the IANA (Internet Assigned Numbers Authority) transition, and collected and analysed an impressive body of data regarding who actually participated in, and influenced, the transition process. Their research confirms what others have stated, namely that the process was dominated by insiders with vested interests, that the outcome did not resolve long-standing political issues, and that the process cannot by any means be seen as an example of an ideal multistakeholder process—and this despite claims to the contrary by the architects of the IANA transition.

    As the authors put the matter: “For those who believe that the IANA is a business concerning exclusively or primarily ICANN [Internet Corporation for Assigned Names and Numbers], the IETF [Internet Engineering Task Force], the NRO [Numbering Resource Organization], and their respective communities, the IANA transition process could be considered inclusive and fair enough, and its outcome effectively transferring the stewardship over IANA functions to the global stakeholder’s community of reference. For those who believe that the IANA stakeholders extend far beyond the organizations mentioned above, the assessment can only have a negative result” (146). Because “in the end, rather than transferring the stewardship of IANA functions to a new multistakeholder body that controls the IANA operator (ICANN), the transition process allowed the ICANN multistakeholder community to perform the oversight role that once belonged to the NTIA [the US government]” (146). Indeed “in the end, the novel governance arrangements strengthened the position of the registries and the technical community” (148). And the US government could still exercise ultimate control, because “ICANN, the PTI [Post-Transition IANA], and most of the root server organizations remain on US territory, and therefore under US jurisdiction” (149).

    That is, the transition failed to address the key political issue: “the IANA functions are at the heart of the DNS [Domain Name System] and the Internet as we know it. Thus, their governance and performance affect a vast range of actors [other than the technical and business communities involved in the operation of the DNS] that should be considered legitimate stakeholders” (147). Instead, it was one more example of “the rhetorical use of the multistakeholder discourse. In particular, … through a neoliberal discourse, the key organizations already involved in the DNS regime were able to use the ambiguity of the concept of a ‘global multistakeholder community’ as a strategic power resource.” The process thus failed fully to ensure that discussions “take place through an open process with the participation of all stakeholders extending beyond the ICANN community.” While the call for participation in the process was formally open, “its addressees were already identified as specific organizations. It is worth noting that these organizations did not involve external actors in the set-up phase. Rather, they only allowed other interested parties to take part in the discussion according to their rules and with minor participatory rights [speaking, but non-voting, observers]” (148).

    Thus, the authors’ “analysis suggests that the transition did not result in, nor did it lead to, a higher form of multistakeholderism filling the gap between reality and the ideal-type of what multistakeholderism ought to be, according to normative standards of legitimacy. Nor was it able to fix the well-known limitations in inclusiveness, fairness of the decision-making process, and accountability of the entire DNS regime. … Instead, the transition seems to have solidified previous dominant positions and ratified the ownership of an essential public function by a private corporation, led by interwoven economic and technical interests” (149). In particular, “the transition process showed the irrelevance of civil society, little and badly represented in the stakeholder structure before and after the transition” (150). And “multistakeholderism [in this case] seems to have resulted in misleading rhetoric legitimizing power asymmetries embedded within the institutional design of DNS management, rather than in a new governance model capable of ensuring the meaningful participation of all the interested parties.”

    In summary, the IANA transition is one more example of the failure of multistakeholder processes to achieve their desired goal. As the authors correctly note: “Initiatives supposed to be multistakeholder have often been criticized for not complying with their premises, resulting in ‘de-politicization mechanisms that limit political expression and struggle’” (153). Indeed, “While multistakeholderism is used as a rhetoric to solidify and legitimize power positions within some policy-making arena, without any mechanisms giving up power to weaker stakeholders and without making concrete efforts to include different discourses, it will continue to produce ambiguous compromises without decisions, or make decisions affected by a poor degree of pluralism” (153). As others have stated, “‘multistakeholderism reinforces existing power dynamics that have been ‘baked in’ to the model from the beginning. It privileges north-western governments, particularly the US, as well as the US private sector.’ Similarly, … multistakeholderism [can be defined] as a discursive tool employed to create consensus around the hegemony of a power élite” (12). As the authors starkly put the matter, “multistakeholder discourse could result in misleading rhetoric that solidifies power asymmetries and masks domination, manipulation, and hegemonic practices” (26). In particular because “election and engagement procedures often tend to favor an already like-minded set of collective and individual actors even if they belong to different stakeholder categories” (30).

    The above conclusions are supported by detailed, well-referenced descriptions and analyses. Chapters One and Two explain the basic context of the IANA transition, Internet governance, and their relation to multistakeholder processes. Chapter One “points out how multistakeholderism is a fuzzy concept that has led to ambiguous practices and disappointing results. Further, it highlights the discursive and legitimizing nature of multistakeholderism, which can serve both as a performing narrative capable of democratizing the Internet governance domain, as well as a misleading rhetoric solidifying the dominant position of the most powerful actors in different Internet policy-making arenas” (1). It traces the history of multistakeholder governance in the Internet context, which started in 2003 (however, a broader historical context would have been useful; see the Annex of this submission to an ITU group). It discusses the conflict between developed and developing countries regarding the management and administration of domain names and addresses that dominated the discussions at the World Summit on the Information Society (WSIS) (Mueller’s Networks and States gives a more detailed account, explaining how development issues—which were supposed to be the focus of the WSIS—got pushed aside, thus resulting in the focus on Internet governance). As the authors correctly state, “the outcomes of the WSIS left the tensions surrounding Internet governance unresolved, giving rise to contestation in subsequent years and to the cyclical recurrence of political conflicts challenging the consensus around the multistakeholder model” (5). The IANA transition was seen as a way of resolving these tensions, but it relied “on the conflation of the multistakeholder approach with the privatization of Internet governance” (8).

    As the authors posit (citing the well-known scholar Hoffmann), “multistakeholderism is a narrative based on three main promises: the promise of achieving global representation on an issue putting together all the affected parties; the promise of overcoming the traditional democratic deficit at the transnational level, ‘establishing communities of interest as a digitally enabled equivalent to territorial constituencies’; and the promise of higher and enforced outcomes since incorporating global views on the matter through a consensual approach should ensure more complete solutions and their smooth implementation” (10).

    Chapter Three provides a thorough introduction to the management of Internet domain names and addresses and to the issues related to them and to the IANA function, in particular the role of the US government and of US academic and business organizations; the seminal work of the International Ad Hoc Committee (IAHC); the creation and evolution of ICANN; and various criticisms of ICANN, in particular regarding its accountability. (The chapter inexplicably fails to mention the key role of Mockapetris in the creation of the DNS.)

    Chapter Four describes the institutional setup of the IANA transition, the constraints unilaterally imposed by the US government (see also 104), and the various parties that dominated discussions of the issues involved. As the authors note, the call for the creation of the key group went out “without having before voted on the proposed scheme [of the group], neither within the ICANN community nor outside through a further round of public comments” (67). The structure of that group heavily influenced the discussions and the outcome.

    Chapter Five evaluates the IANA transition in terms of the first of three types of legitimacy: input legitimacy, that is, whether all affected parties could meaningfully participate in the process (the other two types of legitimacy are discussed in subsequent chapters, see below). By analysing in detail the profiles and affiliations of the participants with decision-making power, the authors find that “a vast majority (56) of the people who have taken part in the drafting of the IANA transition proposal are bearers of technical and operative interests” (87); “Regarding nationality, Western countries appear to be over-represented within the drafting and decisional organism involved in the IANA transition process. In particular, US citizens constitute the most remarkable group, occupying 20 seats over 90 available” (89); and “IANA transition voting members experienced multiple and trans-sectoral affiliations, blurring the boundaries among stakeholder categories” (151). In summary, “the results of this stakeholder analysis seem to indicate that the adopted categorization and appointment procedures have reproduced within the IANA transition process well-known power relationships and imbalances already existing in the DNS management, overrepresenting Western, technical, and business interests while marginalizing developing countries and civil society participation” (90).

    Chapter Six evaluates the transition with respect to process legitimacy: whether all participants could meaningfully affect the outcome. As the authors correctly note, “Stakeholders not belonging to the organizations at the core of the operational communities were called to join the process according to rules and procedures that they had not contributed to creating, and with minor participatory rights” (107). The decision-making process was complex and undermined the inputs from weaker parties – thus funded, dedicated participants were more influential. Further, key participants were concerned about how the US government would view the outcome, and whether it would approve it (116). And discussions appear to have been restricted to a neo-liberal and technical framework (120, 121). As the authors state: “Ultimately, this narrow technical frame prevented the acknowledgment of the public good nature of the IANA functions, and, even more, of their essence as public policy issues” (121). Further, “most members and participants at the CWG-Stewardship had been socialized to the ICANN system, belonging to one of its structures or attending its meetings” and “the long-standing neoliberal plan of the US government and the NTIA to ‘privatize’ the DNS placed the IANA transition within a precise system of definitions, concepts, references, and assumptions that constrained the development of alternative policy discourses and limited the political action of sovereignist and constitutional coalitions” (122).

    Thus, it is not surprising that the authors find that “a single discourse shaped the deliberation. These results contradict the assumptions at the basis of the multistakeholder model of governance, which is supposed to reach a higher and more complete understanding of a particular matter through deliberation among different categories of actors, with different backgrounds, views, and perspectives. Instead, the set of IANA transition voting members in many regards resembled what has been defined as a ‘club governance’ model, which refers to an ‘elite community where the members are motivated by peer recognition and a common goal in line with values, they consider honourable’” (151).

    Chapter Seven evaluates the transition with respect to output legitimacy: whether the result achieved its goals of transferring oversight of the IANA function to a global multistakeholder community. As the authors state, “the institutional effectiveness of the IANA transition cannot be evaluated as satisfying from a normative point of view in terms of inclusiveness, balanced representation, and accountability. As a consequence, the ICANN board remains the expression of interwoven business and technical interests and is unlikely to be truly constrained by an independent entity” (135). Further, as shown in detail, “the political problems connected to the IANA functions have been left unresolved, … it did not take a long time before they re-emerged” (153).

    Indeed, “IANA was, first of all, a political matter. Indeed, the transition was settled as a consequence of a political fact – the widespread loss of trust in the USA as the caretaker of the Internet after the Snowden disclosures. Further, the IANA transition process aimed to achieve eminently political goals, such as establishing a novel governance setting and strengthening the DNS’s accountability and legitimacy” (152). However, as the authors explain in detail, the IANA transition was turned into a technical discussion, and “The problem here is that governance settings, such as those described as club governance, base their legitimacy from professional expertise and reputation. They are well-suited to performing some form of ‘technocratic’ governance, addressing an issue with a problem-solving approach based on an already given understanding of the nature of the problem and of the goals to be reached. Sharing a set of overlapping and compatible views is the cue that puts together these networks of experts. Nevertheless, they are ill-suited for tackling political problems, which, by definition, deal with pluralism” (152).

    Chapter Seven could have benefitted from a discussion of ICANN’s new Independent Review Process, and the length of time it has taken to put into place the process to name the panellists.

    Chapter Eight, already summarized above, presents overall conclusions.

    In summary, this is a timely and important book that provides objective data and analyses of a particular process that has been put forward as a model for multistakeholder governance, which itself has been put forth as a better alternative to conventional governance. While there is no doubt that ICANN, and the IANA function, are performing their intended functions, the book shows that the IANA transition was not a model multistakeholder process: on the contrary, it exhibited many of the well-known flaws of multistakeholder processes. Thus it should not be used as a model for future governance.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.


  • Zachary Loeb — Does Facebook Have Politics? (Review of Langdon Winner, The Whale and the Reactor, second edition)

    Zachary Loeb — Does Facebook Have Politics? (Review of Langdon Winner, The Whale and the Reactor, second edition)

    a review of Langdon Winner, The Whale and the Reactor: A Search for Limits in an Age of High Technology, second edition (University of Chicago Press, 2020)

    by Zachary Loeb

    ~

    The announcement that Mark Zuckerberg and Priscilla Chan would be donating $300 million to help address some of the challenges COVID-19 poses for the 2020 elections was met with a great deal of derision. The scorn was not directed at the effort to recruit poll workers or purchase PPE for them, but at the source from which these funds came. Because Facebook had profited massively from allowing COVID-19 misinformation to run rampant and had shirked responsibility as the platform exacerbated political tensions, the funding announcement came across not only as too little too late, but as a desperate publicity stunt. The incident was but another installment in Facebook’s tumult as the company (alongside its CEO/founder) continually finds itself cast as a villain. Facebook can take some solace in knowing that other tech companies—Google, Amazon, Uber—are also receiving increasingly negative attention, and yet it seems that for every one critical story about Amazon there are five harsh pieces about Facebook.

    Where Facebook, and Zuckerberg, had once enjoyed laudatory coverage, with the platform being hailed as an ally of democracy, by 2020 it has become increasingly common to see Facebook (and Zuckerberg) treated as democracy’s gravediggers. Indeed, much of the animus found in the increasingly barbed responses to Facebook seems to spring from a sense of betrayal. Many people, including more than a few journalists and scholars, had initially been taken in by Facebook’s promises of a more open and connected world, even if they are loath to admit now that they had ever fallen for that ruse. Certainly, or so the shift in sentiment conveys, Facebook and Zuckerberg deserve to be angrily upbraided and treated with withering skepticism now… but who could have seen this coming?

    “Technologies are not merely aids to human activity, but also powerful forces acting to reshape that activity and its meaning” (6). When those words were first published, in 1986, Mark Zuckerberg was around two years old, and yet those words provide a more concise explanation of Facebook than any Facebook press release or defensive public speech given by Zuckerberg. Granted, those words were not written specifically about Facebook (how could they have been?), but in order to express a key insight about the ways in which technologies impact the societies in which they are deployed. The point being not only to consider how technologies can have political implications, but to emphasize that technologies are themselves political. Or to put it slightly differently, Langdon Winner was warning about Facebook before there was a Facebook to warn about.

    More than thirty years after its initial publication, The University of Chicago Press has released a new edition of Langdon Winner’s The Whale and the Reactor. Considering the frequency with which this book, particularly its second chapter “Do Artifacts Have Politics?,” is still cited today, it is hard to suggest that Winner’s book has been forgotten by scholars. And beyond the academy, those who have spent even a small amount of time reading some of the prominent recent STS or media studies works will likely have come across his name. Therefore, the publication of this second edition—equipped with a new preface, afterword, an additional chapter, and a spiffy red cover—represents an important opportunity to revisit Winner’s work. While its citational staying power suggests that The Whale and the Reactor has become something of an essential touchstone for works on the politics of technological systems, the larger concerns coursing through the book have not lost any of their weight in the years since the book was published.

    For at its core The Whale and the Reactor is not about the types of technologies we are making, but about the type of society we are making.

    Divided into three sections, The Whale and the Reactor wastes no time in laying out its central intervention. Noting that technology had rarely been treated as a serious topic for philosophical inquiry, Winner sets about arguing that an examined life must examine the technological systems that sustain that life. That technology has so often been relegated to the background has given rise to a sort of “technological somnambulism” whereby many “willingly sleepwalk” as the world is technologically reconfigured around them (10). Moving forward in this dreamy state, the sleepers may have some vague awareness of the extent to which these technological systems are becoming interwoven into their daily lives, but by the time they awaken (supposing they ever do awaken) these systems have accumulated sufficient momentum as to make it seemingly impossible to turn them off at all. Though The Whale and the Reactor is not a treatise on somnambulism, this characterization is significant insofar as a sleepwalker is one who staggers through the world in a state of unawareness, and thus cannot be held truly responsible. Contrary to such fecklessness, the argument presented by Winner is that responsibility for the world being remade by technology is shared by all those who live in that world. Sleepwalking is not an acceptable excuse.

    In what is almost certainly the best-known section of the book, Winner considers whether or not artifacts have politics—answering this question strongly in the affirmative. Couching his commentary in a recognition that “Scarcely a new invention comes along that someone doesn’t proclaim it as the salvation of a free society” (20), Winner highlights that social and economic forces leave clear markers on technologies, but he notes that the process works in the opposite direction as well. Two primary ways in which “artifacts can contain political priorities” (22) are explored: firstly, situations wherein a certain artifact is designed in such a way as to settle a particular larger issue; and secondly, technologies that are designed to function within, and reinforce, a certain variety of political organization. As an example of the first variety, Winner points to mechanization at a nineteenth-century reaper manufacturing plant, where mechanization was pursued not to produce higher quality or less expensive products, but to break the power of the factory’s union. An example of the second sort of politics can be seen in the case of atomic weaponry (and nuclear power), wherein the very existence of these technologies necessitates complex organizations of control and secrecy. Though, of the two arguments, Winner frames the first example as presenting clearer proof, technologies of the latter case make a significant impact insofar as they tend to make “moral reasons other than those of practical necessity appear increasingly obsolete” (36) for the political governance of technological systems.

    Inquiring as to the politics of a particular technology provides a means by which to ask questions about the broader society, specifically: what kind of social order gets reified by this technology? One of freedom and equality? One of control and disenfranchisement? Or one that distracts from the maintenance of the status quo by providing the majority with a share in technological abundance? It is easy to avoid answering such questions when you are sleepwalking, and as a result, “without anyone having explicitly chosen it, dependency upon highly centralized organizations has gradually become a dominant social form” (47). That this has not been “explicitly chosen” is partially a result of the dominance of a technologically optimistic viewpoint that has held to “a conviction that all technology—whatever its size, shape, or complexion—is inherently liberating” (50). Though this bright-eyed outlook is periodically challenged by an awareness of the ways that some technologies can create or exacerbate hazards, these dangers wind up being treated largely as hurdles that will be overcome by further technological progress. When all technologies are seen as “inherently liberating” a situation arises wherein “liberation” comes to be seen only in terms of what can be technologically delivered. Thus, the challenge is to ask “What forms of technology are compatible with the kind of society we want to build?” (52) rather than simply assume that we will be content in whatever world we sleepily wander into. Rather than trust that technology will be “inherently liberating,” Winner emphasizes that it is necessary to ask what kinds of technology will be “compatible with freedom, social justice, and other key political ends” (55), and to pursue those technologies.

    Importantly, a variety of people and groups have been aware of the need to push for artifacts that more closely align with their political ideals, though these responses have taken on a range of forms. Instead of seeing technology as deeply intertwined with political matters, some groups saw technology as a way of getting around political issues: why waste time organizing for political change when microcomputers and geodesic domes can allow you to build that alternative world here and now? In contrast to this consumeristic, individualistically oriented attitude (exemplified by works such as the Whole Earth Catalog), there were also efforts to ask broader political questions about the nature of technological systems, such as the “appropriate technology” movement (which grew up around E.F. Schumacher’s Small is Beautiful). Yet such attempts already appear consigned to the past, rearguard actions that tried meekly to resist the increasing dominance of complex technical systems. As the long seventies shifted into the 1980s and increasing technological centralization became evident, such movements appear as romantic gestures towards the dream of decentralization. And though the longing for escape from centralized control persists, the direction “technological ‘progress’ has followed” is one in which “people find themselves dependent upon a great many large, complex systems whose centers are, for all practical purposes, beyond their power to influence” (94).

    Perhaps no technology simultaneously demonstrates the tension between the dream of decentralization and the growth of control quite like the computer. Written in the midst of what was being hailed as “the computer revolution” or the “information revolution” (98), The Whale and the Reactor bore witness to the exuberance with which the computer was greeted even as this revolution remained “conspicuously silent about its own ends” (102). Though it was not entirely clear what problem the computer was the solution to, there was still a clear sentiment that the computer had to be the solution to most problems. “Mythinformation” is the term Winner deploys to capture this “almost religious conviction that a widespread adoption of computers and communications systems along with easy access to electronic information will automatically produce a better world for human living” (105). Yet “mythinformation” performs technological politics in inverse order: instead of deciding on political goals and then seeking out the right technological forms for achieving those goals, it takes a technology (the computer) and then seeks to rearrange political problems in such a way as to make them appear as though they can be addressed by that technology. Thus, “computer romantics” hold to the view that “increasing access to information enhances democracy and equalizes social power” (108), less as a reflection of the way that political power works and more as a response to the fact that “increasing access to information” is one of the things that computers do well. Despite the equalizing hopes, earnest though they may have been, that were popular amongst the “computer romantics,” the trends that were visible early in “the computer revolution” gave ample reason to believe that the main result would be “an increase in power by those who already had a great deal of power” (107). Indeed, contrary to the liberatory hopes that were pinned on “the computer revolution,” the end result might be one wherein “confronted with omnipresent, all-seeing data banks, the populace may find passivity and compliance the safest route, avoiding activities that once represented political liberty” (115).

    Considering the overwhelming social forces working in favor of unimpeded technological progress, there are nevertheless a few factors that have been legitimated as reasons for arguing for limits. While there is a long trajectory of theorists and thinkers who have mulled over the matter of ecological despoilment, and while environmental degradation is a serious concern, “the state of nature” represents a fraught way to consider technological matters. For some, the environment has become little more than standing reserve to be exploited, while others have formed an almost mystical attachment to an imagination of pristine nature; in this context “ideas about things natural must be examined and criticized” as well (137). Related to environmental matters are concerns that take as their catchword “risk,” and which attempt to reframe the discussion away from hopes and towards potential dangers. Yet, in addition to cultural norms that praise certain kinds of “risk-taking,” a focus on risk assessment tends to frame situations in terms of tradeoffs wherein one must balance dangers against potential benefits—with the result being that the recontextualized benefit is generally perceived as being worth it. If the environment and risk are unsatisfactory ways to push for limits, so too is the very notion of “human values,” which “acts like a lawn mower that cuts flat whole fields of meaning and leaves them characterless” (158).

    In what had originally been The Whale and the Reactor’s last chapter, Winner brought himself fully into the discussion—recalling how he came to be fascinated with these issues, and commenting on the unsettling juxtaposition he felt while seeing a whale swimming not far from the nuclear reactor at Diablo Canyon. The chapter critiques the attitude towards technology that Winner saw in many of his fellow citizens: one of people having “gotten used to having the benefits of technological conveniences without expecting to pay the costs” (171). This sentiment is still fully on display more than thirty years later, as Winner shifts his commentary (in a new chapter for this second edition) to the age of Facebook and the Trump Presidency. Treating the techno-utopian promises that had surrounded the early Internet as another instance of technology being seen as “inherently liberating,” Winner does not seem particularly surprised by the way that the Internet and social media are revealing that they “could become a seedbed for concentrated, ultimately authoritarian power” (189). In response to the “abuses of online power,” and beneath all of the glitz and liberating terminology that is affixed to the Internet, “it is still the concerns of consumerism and techno-narcissism that are emphasized above all” (195). Though the Internet had been hailed as a breakthrough, it has wound up leading primarily to breakdown.

    Near the book’s outset, Winner observes how “In debates about technology, society, and the environment, an extremely narrow range of concepts typically defines the realm of acceptable discussion” (xii), and it is those concepts that he wrestles with over the course of The Whale and the Reactor. And the point that Winner returns to throughout the volume is that technological choices—whether they are the result of active choice or a result of our “technological somnambulism”—are not just about technology. Rather, “What appear to be merely instrumental choices are better seen as choices about the form of social and political life a society builds, choices about the kinds of people we want to become” (52).

    Or, to put it a slightly different way, if we are going to talk about the type of technology we want, we first need to talk about the type of society we want, whether the year is 1986 or 2020.

    *

    Langdon Winner began his foreword to the 2010 edition of Lewis Mumford’s Technics and Civilization with the comment that “Anyone who studies the human dimensions of technological change must eventually come to terms with Lewis Mumford.” And it may be fair to note, in a similar vein, that anyone who studies the political dimensions of technological change must eventually come to terms with Langdon Winner. The staying power of The Whale and the Reactor is something which Winner acknowledges with a note of slightly self-deprecating humor, in the foreword to the book’s second edition, where he comments “At times, it seems my once bizarre heresy has finally become a weary truism” (vii).

    Indeed, to claim in 2020 that artifacts have politics is not to make a particularly radical statement. That statement has been affirmed enough times as to hardly make it a question that needs to be relitigated. Yet the second edition of The Whale and the Reactor is not a victory lap wherein Winner crows that he was right, nor is it the ashen lamentation of a Cassandra glumly observing that what they feared has transpired. Insofar as The Whale and the Reactor deserves this second edition, and to be clear it absolutely deserves this second edition, it is because the central concerns animating the book remain just as vital today.

    While the second edition contains a smattering of new material, the vast majority of the book remains as it originally was. As a result, the book undergoes that strange kind of alchemy whereby a secondary source slowly transforms into a primary source—insofar as The Whale and the Reactor can now be treated as a document showing how at least some scholars were making sense of “the computer revolution” while in the midst of it. The book’s first third, which contains the “Do Artifacts Have Politics?” chapter, has certainly aged the best, and the expansiveness with which Winner addresses the question of politics and technology makes it clear why those early chapters remain so widely read, while ensuring that these chapters have a certain timeless quality to them. However, as it shifts into its exploration of “Technology: Reform and Revolution,” the book does reveal its age. Read today, the commentary on “appropriate technology” comes across more as a historical curio than as an exploration of the shortcomings of an experiment that recently failed. It feels somewhat odd to read Winner’s comments on “the state of nature,” bereft as they are of any real mention of climate change. And though Winner could have written in 1986 that technology was frequently overlooked as a topic deserving of philosophical scrutiny, today there are many works responding to that earlier lack (and many of those works even cite Winner). While Winner certainly cannot be faulted for not seeing the future, what makes some of these chapters feel particularly dated is that in many other places Winner excelled so remarkably at seeing the future.

    The chapter on “Mythinformation” stands as an excellent critical snapshot of the mid-80s enthusiasm that surrounded “the computer revolution,” with Winner skillfully noting how the utopian hopes surrounding computers were just the latest in the well-worn pattern wherein every new technology is seen as “inherently liberating.” In writing on computers, Winner does important work in separating the basics of what these machines literally can do from the sorts of far-flung hopes that their advocates attached to them. After questioning whether the issues facing society are genuinely ones that boil down to access to information, Winner noted that it was more than likely that the real impact of computers would be to help those in control stay in control. As he puts it, “if there is to be a computer revolution, the best guess is that it will have a distinctively conservative character” (107). In 1986, it may have been necessary to speak of this in terms of a “best guess,” and such comments may have met with angry responses from a host of directions, but in 2020 it seems fairly clear that Winner’s sense of what the impact of computers would be was not wrong.

    Considering the directions in which widespread computerization would push societies, Winner hypothesized that it could lead to a breakdown in certain kinds of in-person contact and make it so that people would “become even more susceptible to the influence of employers, news media, advertisers, and national political leaders” (116). And moving to the present, in the second edition’s new chapter, Winner observes that despite the shiny toys of the Internet the result has been one wherein people “yield unthinkingly to various kinds of encoded manipulation (especially political manipulation), varieties of misinformation, computational propaganda, and political malware” (187). It is not that The Whale and the Reactor comes out to openly declare “don’t tell me that you weren’t warned,” but there is something about the second edition being published now that feels like a pointed reminder. As former techno-optimists rebrand as techno-skeptics, the second edition is a reminder that some people knew to be wary from the beginning. Some may anxiously bristle as the CEOs of tech giants testify before Congress, some may feel a deep sense of disappointment every time they see yet another story about Facebook’s malfeasance, but The Whale and the Reactor is a reminder that these problems could have been anticipated. If we are unwilling to truly confront the politics of technologies when those technologies are new, we may find ourselves struggling to deal with the political impacts of those technologies once they have wreaked havoc.

    Beyond its classic posing of the important “do artifacts have politics?” question, the present collision between technology and politics helps draw attention to a deeper matter running through The Whale and the Reactor. Namely, that the book keeps coming back to the idea of democracy. Indeed, The Whale and the Reactor shows a refreshingly stubborn commitment to this idea. Technology clearly matters in the book, and technologies are taken very seriously throughout the book, but Winner keeps returning to democracy. In commenting on the ways in which artifacts have politics, the examples that Winner explores are largely ones wherein technological systems are put in place that entrench the political authority of a powerful minority, or which require the development of regimes that exceed democratic control. For Winner, democracy (and being a participant in a democracy) is an active process, one that cannot be replaced by “passive monitoring of electronic news and information” which “allows citizens to feel involved while dampening the desire to take an active part” (111). Insofar as “the vitality of democratic politics depends upon people’s willingness to act together in pursuit of their common ends” (111), a host of technological systems have been put in place that seem to have simultaneously sapped “people’s willingness” while also breaking down a sense of “common ends.” And though the Internet may trigger some nostalgic memory of active democracy, it is only a “pseudopublic realm” wherein the absence of the real conditions of democracy “helps generate wave after wave of toxic discourse along with distressing patterns of oligarchical rule, incipient authoritarianism, and governance by phonies and confidence men” (192).

    Those who remain committed to arguing for the liberatory potential of computers and the Internet, a group which includes individuals from a range of perspectives, might justifiably push back against Winner by critiquing the vision of democracy he celebrates. After all, there is something rather romantic about Winner’s evocations of New England town hall meetings and his comments on the virtues of face-to-face encounters. Do all participants in such encounters truly get to participate equally? Are such situations even set up so that all people can participate equally? What sorts of people and what modes of participation are privileged by such a model of democracy? Is a New England town hall meeting really a model for twenty-first-century democracy? Here it is easy to picture Winner responding that what such questions reveal is the need to create technologies that will address those problems—and where a split may then open up is around the question of whether or not computers and the Internet represent such tools. That “technologies are not merely aids to human activity, but also powerful forces acting to reshape that activity and its meaning” (6) opens up a space in which different technologies can be built, even as other technologies can be dismantled, but such a recognition forces us to look critically at our technologies and truly confront the type of world that we are making and reinforcing for each other. And, in terms of computers and the Internet, the question that The Whale and the Reactor forces to the fore is: which are we putting first, computers or democracy?

    Winner warned his readers of the dangers of “technological somnambulism,” but it unfortunately seems that his call was not sufficient to wake up the sleepers in his midst in the 1980s. Alas, that The Whale and the Reactor remains so strikingly relevant is partially a testament to the sleepwalkers’ persistent slouch into the future. And though there may be some hopeful signs of late that more and more people are groggily stirring and rubbing the slumber from their eyes—the resistance to facial recognition is certainly encouraging—a danger persists that many will conclude that since they have reached this spot they must figure out some way to justify being here. After all, few want to admit that they have been sleepwalking. What makes The Whale and the Reactor worth revisiting today is not only that Winner asks the question “do artifacts have politics?” but the way in which, in responding to this question, he is willing to note that there are some artifacts that have bad politics. That there are some artifacts that do not align with our political goals and values. And what’s more, that when we are confronted with such artifacts, we do not need to pretend that they are our friends just because they have rearranged our society in such a way that we have no choice but to use them.

    In the foreword to the first edition of The Whale and the Reactor, Winner noted “In an age in which the inexhaustible power of scientific technology makes all things possible, it remains to be seen where we will draw the line, where we will be able to say, here are the possibilities that wisdom suggests we avoid” (xiii). For better, or quite likely for worse, that still remains to be seen today.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focusses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay

  • Zachary Loeb — General Ludd in the Long Seventies (Review of Matt Tierney, Dismantlings)

    Zachary Loeb — General Ludd in the Long Seventies (Review of Matt Tierney, Dismantlings)

    a review of Matt Tierney, Dismantlings: Words Against Machines in the American Long Seventies (Cornell University Press, 2019)

    by Zachary Loeb

    ~

    The guy said, “If machinery
    makes you so happy
    go buy yourself
    a Happiness Machine.”
    Then he realized:
    They were trying to do
    exactly that.

    – Kenneth Burke, “Routine for a Stand-Up Comedian” (15)

    A sledgehammer is a fairly versatile tool. You can use it to destroy things, you can use it to build things, and in some cases you can use it to destroy things so that you can build things. Granted, it remains a rather heavy and fairly blunt tool; it is not particularly well suited for fine detail work requiring a high degree of precision. Which is, likely, one of the reasons why those who are famed for wielding sledgehammers often wind up being characterized as just as blunt and unsubtle as the heavy instruments they swing.

    And, perhaps, no group has been more closely associated with sledgehammers than the Luddites: those early 19th-century skilled craft workers who took up arms to defend their communities and their livelihoods from the “obnoxious machines” being introduced by their employers. Though the tactic of machine breaking as a form of protest has a lengthy history that predates (and post-dates) the Luddites, it is a tactic that has come to be bound up with the name of the followers of the mysterious General Ludd. Despite the efforts of writers and thinkers to rescue the Luddites’ legacy from “the enormous condescension of posterity” (Thompson, 12), the term “Luddite” today generally has less to do with a specific historical group and has instead largely become an epithet to be hurled at anyone who dares question the gospel of technological progress. Yet, as the second decade of the twenty-first century comes to a close, it may well be that “Luddite” has lost some of its insulting sting against the backdrop of metastasizing tech giants, growing mountains of toxic e-waste, and an ecological crisis that owes much to an unquestioned faith in the benefits of technology.

    General Ludd may well get the last laugh.

    That the Luddites have lingered so fiercely in the public imagination is a testament to the fact that the Luddites, and the actions for which they are remembered, are good to think with. Insofar as one can talk about Luddism, it represents less a coherent body of thought created by the Luddites themselves and more the attempt by later scholars, critics, artists, and activists to make sense of what is usable from the Luddite legacy. And it is this effort to think through and think with that Matt Tierney explores in his phenomenal book Dismantlings: Words Against Machines in the American Long Seventies. While the focus of Dismantlings, as its title makes clear, is on the “long seventies” (the years from 1965 to 1980), the book represents an important intervention in current discussions and debates around the impacts of technology on society. Just as the various figures Tierney discusses turned their thinking (to varying extents) back to the Luddites, so too, the book argues, is it worth revisiting the thinking and writing on these matters from the long seventies. This is not a book on the historical Luddites; instead it is a vital contribution to attempts to theorize what Luddism might mean, and how we are to confront the various technological challenges facing us today.

    Largely remembered for occurrences including the Vietnam War, the Civil Rights movement, the space race, and a general tone of social upheaval – the long seventies also represented a period when technological questions were gaining prominence, with thinkers such as Marshall McLuhan, Buckminster Fuller, Norbert Wiener, and Stewart Brand all putting forth visions of the way that the new consumer technologies would remake society: creating “global villages” or giving rise to a perception of all of humanity as passengers on “spaceship earth.” Yet they were hardly the only figures contemplating technology in that period, and many of the other visions that emerged aimed to directly challenge some of the assumptions and optimism of the likes of McLuhan and Fuller. In the long seventies, the question of what would come next was closely entwined with an evaluation of what had come before; indeed, “the breaking of retrogressive notions of technology coupled with the breaking of retrogressive technologies…undergoes a period of vital activity during the Long Seventies in the poems, fictions, and activist speech of what was then called cyberculture” (15). Granted, this was a “breaking” that generally had more to do with theorizing than with actual machine smashing. It could more accurately be seen as “dismantling”: the careful taking apart of something so that its functioning can be more fully understood and evaluated. Yet it is a thinking that, importantly, occurred against a recognition that the world was, as Norbert Wiener observed, “the world of Belsen and Hiroshima” (8). To make sense of the resistant narratives towards technology in the long seventies it is necessary to engage critically with the terminology of the period, and thus Tierney’s book represents a sort of conceptual “counterlexicon” to do just that.

    As anyone who knows about the historical Luddites can attest, they did not hate technology (as such). Rather, they were opposed to particular machines being used in a particular way at a particular place and time. And it is a similar attitude towards Luddism (not as an opposition to all technology, but as an understanding that technology has social implications) that Tierney discusses in the long seventies. Luddism here comes to represent “a gradual relinquishing of machines whose continued use would contravene ethical principles” (30), and this attitude is found in Langdon Winner’s concept of “epistemological Luddism” (as discussed in his book Autonomous Technology) and in the poetry of Audre Lorde. While Lorde’s line “for the master’s tools will never dismantle the master’s house” continues to be well known by activists, the question of “tools” can also be engaged with quite literally. Approached with a mind towards Luddism, Lorde’s remarks can be seen as indicating that it is not only “the master’s house” that must be dismantled but “the master’s tools” as well – and Lorde’s writing suggests poetry as a key tool for the dismantler. The version of Luddism that emerges in the long seventies represents a “sort of relinquishing”; it “is not about machine-smashing at all” (47), but instead entails the careful work of examining machines to determine which are worth keeping.

    The attitudes towards technology of the long seventies were closely entwined with a sense of the world as made seemingly smaller and more connected thanks to the new technologies of the era. A certain strand of thinking in this period, exemplified by McLuhan’s “global village” or Fuller’s “Spaceship Earth,” achieved great popular success even as reactionary racist and nativist notions lurked just below the surface of the seeming technological optimism of those concepts. Contrary to the “fatalistic acceptance of new technological constraints on life” (48), works by science fiction authors like Ursula Le Guin and Samuel R. Delany presented a notion of “communion, as a collaborative process of making do” (51). Works like The Dispossessed (Le Guin) and Triton (Delany) presented readers with visions, and questions, of “real coexistence…not the passage but the sharing of a moment” (63). In contrast to the “technological Messianism” (74) of the likes of Fuller and McLuhan, the “communion”-based works of Le Guin and Delany focused less on exuberance for the machines themselves than on critically engaging with what types of coexistence such machines would and could genuinely facilitate.

    Coined by Alice Mary Hilton in 1963, the idea of “cyberculture” did not originally connote the sort of blissed-out techno-optimism that the term evokes today. Rather, it was meant to be “an alternative to the global village and the one-town world, and an insistence on collective action in a world not only of Belsen and Hiroshima but also of ongoing struggles toward decolonization, sexual and gender autonomy, and racial justice” (12). Thus, “cyberculture” (and cybernetics more generally) may represent one of the alternative pathways along which technological society could have developed. What “cyberculture” represented was not an exuberant embrace of all things “cyber,” but an attempt to name and thereby open a space for protest: not protest “against thinking machines,” but protest that would “interrupt the advancing consensus that such machines had shrunk the globe” (81). These concepts achieved further maturation in the Ad Hoc Committee’s “Triple Revolution Manifesto” (from 1964), which sought to link an emancipatory political program to advances in new technology, linking “cybernation to a decrease in capitalist, racist, and militarist violence” (85). Seizing upon an earnest belief that technological ethics could guide new technological developments towards just ends, “cyberculture” also imagined that such tools could supplant scarcity with abundance.

    What “cyberculture” based thinking consists of is a sort of theoretical imagining, which is why a document like a manifesto represents such an excellent example of “cyberculture” in practice. It is a sort of “distortion” that recognizes how “the fates of militarism, racism, and cybernation have only ever been knotted together” and “thus calls for imaginative practices, whether literary or activist, for cutting through the knot” (95). This is the sort of theorizing that can be seen in Martin Luther King, Jr.’s commentary on how science and technology had made of “this world a neighborhood” without yet making “of it a brotherhood” (96). The technological ethics of the advocates of “cyberculture” could be the tools with which to make “it a brotherhood” without discarding all of the tools that had made it first “a neighborhood.” The risks and opportunities of new technological forms were also commented upon in works like Shulamith Firestone’s Dialectic of Sex wherein she argued that women needed to seize and guide these technologies. Blending analysis of what is with a program for what could be, Firestone’s work shows “that if other technologies are possible, then other social practices, even practices that are rarely considered in relation to new technology, may be possible too” (105).

    For some, in the long seventies, challenging machinery still took on a destructive form, though this often entailed a sort of “revolutionary suicide” that represented an attempt to “prevent the becoming-machine of subjugated human bodies and selves” (113): a refusal to become a machine oneself, and a refusal to allow oneself to become fodder for the machine. Such a self-destructive act flows from the Pynchon-esque tragic recognition of a growing consensus “that nothing can be done to oppose” the new machines (122). Such woebegone dejection is in contrast to other attitudes that sought not only to imagine but also to construct new tools that would put people and community first. John Mohawk, of the Haudenosaunee Confederacy of Mohawk, Oneida, Onondaga, Cayuga, and Seneca people, gave voice to this in his theorizing of “liberation technology.” As Mohawk explained at a UN session, “Decentralized technologies that meet the needs of the people those technologies serve will necessarily give life to a different kind of political structure, and it is safe to predict that the political structure that results will be anticolonial in nature” (127). The search for such alternative technologies suggested a framework in which what was needed was “machines to suit the community, or else no machines at all” (129) – a position that countered the technological abundance hoped for by “cyberculture” with an appeal for technologies of subsistence. After all, this was the world of Belsen and Hiroshima, “a world of new and barely understood technologies” (149), and in such a world, “where the very skin of the planet is a ledger of technological misapplications” (154), it is wise to proceed with caution and humility.

    The long seventies present a fascinating kaleidoscope of visions of technologies, how to live with them, how to select them, and how to think about them. What makes the long seventies so worthy of revisiting is that they and the present moment are both “seized with a critical discourse about technology, and by a popular social upheaval in which new social movements emerge, grow, and proliferate” (5). Luddism may be routinely held up as a foolish reaction, but “by breaking apart certain machines, we can learn to use them better, or never use them again. By dissecting certain technocentric cultural logics, we can likewise challenge or reject them” (162). That the Luddites are so constantly vilified may ultimately be a signal of their dangerous power, insofar as they show that people need not passively sit and accept everything that is sold to them as technological progress. Dismantling represents a politics “not as machine hating, but as a way to protect life against a large-scale regimentation and policing of security, labor, time, and community” (166).

    To engage in the fraught work of technological critique is to open oneself up to being labeled a Luddite (with the term being hurled as an epithet), to accusations of complicity in the very systems you are critiquing, and to a realization that many people simply don’t want to listen to their smartphone habits being criticized. Yet the various conceptual frameworks that can be derived from a consideration of “words against machines in the American long seventies” provide “tactics that might be repeated or emulated, if nostalgia and cynicism do not bar the way” (172). Such concepts present a method of pushing back at the “yes, but” logic which riddles so many discussions of technology today – conversations in which the downsides are acknowledged (the “yes”), yet where the counter is always offered that perhaps there’s still a way to use those technologies correctly (the “but”).

    In contrast to the comfortable rut of “yes, but” Tierney’s book argues for dismantling, wherein “to dismantle is to set aside the dithering of yes, but and to try instead the hard work of critique” (175).

    Running through many of the thinkers, writers, and activists detailed in Dismantlings is a genuine attempt to come to terms with the ways in which new technological forces are changing society. Though many of these individuals responded to such changes not by picking up hammers, but by turning to writing, this activity was always couched in a sense that the shifts afoot truly mattered. Agitated by the roaring clangor of the machines of their day, these figures from the long seventies were looking at the machines of their moment in order to consider what would need to be done to construct a different future. And they did this while looking askance at the more popular techno-utopian visions of the future being promulgated in their day. Writing of the historic Luddites, the historian David Noble commented that, “the Luddites were perhaps the last people in the West to perceive technology in the present tense and to act upon that perception” (Noble, 7), and it may be tempting to suggest that the various figures cataloged in Dismantlings were too focused on the future to have acted upon technology in their present. Nevertheless, as Tierney notes, “the present does not precede the future; rather the future (like its past) distorts and neighbors the present” (173) – the Luddites may have acted in the present, but their eyes were also on the future. It is worth remembering that we do not make sense of the technologies around us solely by what they mean now, but by what we think they will mean for the future.

    While Dismantlings provides a “counterlexicon” drawn from the writing/thinking/acting of a range of individuals in the long seventies, there is something rather tragic about reading these thoughts two decades into the twenty-first century. After all, readers of Dismantlings find themselves in what would have been the future to these long seventies thinkers. And, to be blunt, the world of today seems more in line with those thinkers’ fears for the future than with their hopes. An “epistemological Luddism” has not been used to carefully evaluate which tools to keep and which to discard, “communion” has not become a guiding principle, and “cyberculture” has drifted away from Hilton’s initial meaning to become a stand-in for a sort of uncritical techno-utopianism. The “master’s tools” have expanded to encompass ever more powerful tools, and the “master’s house” appears sturdier than ever – worse still, many of us may have become so enamored by some of “the master’s tools” that we have started to entertain delusions that these are actually our tools. To a certain extent, Dismantlings stands as a reminder of a range of individuals who tried to warn us that we would wind up in the mess in which we find ourselves. Those who are equipped with such powers of perception are often mocked and derided in their own time, but looking back at them with hindsight one can get a discomforting sense of just how prescient they truly were.

    Matt Tierney’s Dismantlings: Words Against Machines in the American Long Seventies is a remarkable book. It is also a difficult book. Difficult not because of impenetrable theoretical prose (the writing is clear and crisp), but because it is always challenging to go back and confront the warnings that were ignored. At a moment when headlines are filled with sordid tales of the malfeasance of the tech behemoths, and increasingly terrifying news of the state of the planet, it is both reassuring and infuriating to recognize that it did not have to be this way. True, these long seventies figures did not specifically warn about Facebook, and climate change was not the term they used to speak of environmental degradation – but it’s doubtful that many of these figures would be particularly surprised by either occurrence.

    As a contribution to scholarship, Dismantlings represents a much-needed addition to the literature on the long seventies – particularly the literature that considers technology in that period. While much of the present literature (much of it excellent) dealing with those years has tended to focus on the hippies who fell in love with their computers, Tierney’s book is a reminder of those who never composed poems of praise for their machines. After all, not everyone believed that the computer would be an emancipatory technology. This book brings together a wide assortment of figures and draws useful connections between them that will hopefully rescue many a name from obscurity. And even those names that can hardly be called obscure appear in a new light when viewed through the lenses that Tierney develops in this book. While readers may be familiar with names like Lorde, Le Guin, Delany, and Pynchon – Tierney makes it clear that there is much to be gained by reading Hilton, Mohawk, and Firestone, and by revisiting the “Triple Revolution Manifesto.”

    Tierney also offers a vital intervention into ongoing discussions over the meaning of Luddism. While it may be fair to say that such discussions are occurring amongst a rather small group of people, it is a passionate debate nevertheless. Tierney avoids re-litigating the history of the original Luddites, and his timeline cuts off before the emergence of the Neo-Luddites, but his book provides valuable insight into the transformations the idea of Luddism went through in the long seventies. Granted, Luddism does not always appear to be a term that was being embraced by the figures in Tierney’s history. Certainly, Winner developed the concept of “epistemological Luddism,” and Pynchon is still remembered for his “Is it O.K. to Be a Luddite?” op-ed, but many of those who spoke about dismantling did not don the mask, or pick up the hammer, of General Ludd. Thus, this book is a clear attempt not to restate others’ views on Luddism, but to freshly theorize the idea. Drawing on his long seventies sources, Tierney writes that:

    Luddism is not the destruction of all machines. And neither is it the hatred of machines as such. Like cyberculture, it is another word for dismantling. Luddism is the performative breaking of machines that limit species expression and impede planetary survival. (13)

    This is a robust and loaded definition of Luddism. While it clearly moves Luddism towards a practice instead of simply a descriptor for particular historical actors, it also presents Luddism as a constructive (as opposed to destructive) process. There are several aspects of Tierney’s definition that deserve particular attention. First, by also evoking “cyberculture” (referring to Hilton’s ethically grounded notion when she coined the term), Tierney demonstrates that Luddism is not the only word or tactic for dismantling. Second, by evoking “the performative breaking,” Tierney moves Luddism away from the blunt force of hammers and towards the more difficult work of critical evaluation. Lastly, by linking Luddism to “species expression and…planetary survival,” Tierney highlights that even if this Luddism is not “the hatred of machines as such” it still entails the recognition that there are some machines that should be hated – and that should be taken apart. It’s the sort of message that you can imagine many people getting behind, even as one can anticipate the choruses of “yes, but” that would be sure to greet this.

    Granted, even though Tierney considers a fair number of manifestos of a revolutionary sort, Dismantlings is not a new Luddite manifesto (though it might be a Luddite lexicon). While Tierney writes of the various figures he analyzes with empathy and affection, he also writes with a certain weariness. After all, as was noted earlier, we are currently living in the world about which these critics tried to warn us. And therefore Tierney can note, “if no political overturning followed the literary politics of cyberculture and Luddism in their own moment, then certainly none will follow them now” (25). Nevertheless, Tierney couches these dour comments in the observation that, “even as a revolution fails, its failure fuels common feeling without which subsequent revolutions cannot succeed” (25). At the very least the assorted thinkers and works described in Dismantlings provide a rich resource to those in the present who are concerned about “species expression” and “planetary survival.” Indeed, those advocating to break up the tech companies or pushing for the Green New Deal can learn a great deal by revisiting the works discussed in Dismantlings.

    Nevertheless, it feels as though there are some key characters missing from Dismantlings. To be clear, this point is not meant to detract from Tierney’s excellent and worthwhile book. Furthermore, it must be noted that devotees of particular theorists and social critics tend to have a strong “why isn’t [the theorist/social critic I am devoted to] discussed more in here!?” reaction to works. Even so, certain figures seem oddly absent from Dismantlings. Reflecting on the types of machines against which figures in the long seventies were reacting, Tierney writes that “the war machine, the industrial machine, the computer, and the machines of state are all connected” (4). And it was the dangerous connection of all of these that the social critic Lewis Mumford sought to describe in his theorizing of “the megamachine” – theorizing which he largely did in his two-volume Myth of the Machine (which was published in the long seventies). Though Mumford’s idea of “technic” eras is briefly mentioned early in Dismantlings, his broader thinking, which touches directly on the core areas of Dismantlings, is not remarked on. Several figures who were heavily influenced by Mumford’s work appear in Dismantlings (notably Bookchin and Roszak), and Mumford’s thought could have certainly bolstered some of the book’s arguments. Mumford, after all, saw himself as a bit of an anti-McLuhan – and in evaluating thinkers who were concerned with what technology meant for “species expression” and “planetary survival” Mumford deserves more attention. Given the overall thrust of Dismantlings it also might have been interesting to see Erich Fromm’s The Revolution of Hope: Toward a Humanized Technology and Ivan Illich’s Tools for Conviviality discussed. Granted, these comments are not meant as attacks on Tierney’s excellent book – they are simply an observation by an avowed Mumford partisan.

    To fully appreciate why the thoughts from the long seventies still matter today it may be useful to consider a line from one of Mumford’s early works. As Mumford wrote, in 1931, “every generation revolts against its fathers and makes friends with its grandfathers” (Mumford, 1). To a certain extent, Dismantlings is an argument for those currently invested in debates around technology to revisit “and make friends” with earlier generations of critics. There is much to be gained from such a move. Notable here is a shift in an evaluation of dangers. Throughout Dismantlings Tierney returns frequently to Wiener’s line that “this is the world of Belsen and Hiroshima” – and without meaning to be crass this is an understanding of the world that has somewhat receded into the past as the memory of those events becomes enshrined in history books. Yet for the likes of Wiener and many of the other individuals discussed in Dismantlings, “Belsen and Hiroshima” were not abstractions or distant memories – they were not the crimes that could be consigned to the past. Rather they were bleak reminders of the depths to which humanity could sink, and the way in which science and technology could act as a weight to drag humanity even deeper. Today’s world is the world of climate change, border walls, and surveillance capitalism – but it is still “the world of Belsen and Hiroshima.”

    There is much that needs to be dismantled, and not much time in which to do that work.

    The lessons from the long seventies are those that we are still struggling to reckon with today, including the recognition that in order to fully make sense of the machines around us it may be necessary to dismantle many of them. Of course, “not everything should be dismantled, but many things should be and some things must be, even if we don’t know where to begin” (163).

    Tierney’s book does not provide an easy answer, but it does show where we should begin.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focusses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay

    _____

    Works Cited

    • Lewis Mumford. The Brown Decades. New York: Dover Books, 1971.
    • David F. Noble. Progress Without People. Toronto: Between the Lines, 1995.
    • E.P. Thompson. The Making of the English Working Class. New York: Vintage Books, 1966.
  • Zachary Loeb — Flamethrowers and Fire Extinguishers (Review of Jeff Orlowski, dir., The Social Dilemma)

    Zachary Loeb — Flamethrowers and Fire Extinguishers (Review of Jeff Orlowski, dir., The Social Dilemma)

    a review of Jeff Orlowski, dir., The Social Dilemma (Netflix/Exposure Labs/Argent Pictures, 2020)

    by Zachary Loeb

    ~

    The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But, in fact, there are actors!

    – Joseph Weizenbaum (1976)

    Why did you last look at your smartphone? Did you need to check the time? Was picking it up a conscious decision driven by the need to do something very particular, or were you just bored? Did you turn to your phone because its buzzing and ringing prompted you to pay attention to it? Regardless of the particular reasons, do you sometimes find yourself thinking that you are staring at your phone (or other computerized screens) more often than you truly want? And do you ever feel, even if you dare not speak this suspicion aloud, that your gadgets are manipulating you?

    The good news is that you aren’t just being paranoid, your gadgets were designed in such a way as to keep you constantly engaging with them. The bad news is that you aren’t just being paranoid, your gadgets were designed in such a way as to keep you constantly engaging with them. What’s more, on the bad news front, these devices (and the platforms they run) are constantly sucking up information on you and are now pushing and prodding you down particular paths. Furthermore, alas more bad news, these gadgets and platforms are not only wreaking havoc on your attention span they are also undermining the stability of your society. Nevertheless, even though there is ample cause to worry, the new film The Social Dilemma ultimately has good news for you: a collection of former tech-insiders is starting to speak out! Sure, many of these individuals are the exact people responsible for building the platforms that are currently causing so much havoc—but they meant well, they’re very sorry, and (did you hear?) they meant well.

    Directed by Jeff Orlowski, and released to Netflix in early September 2020, The Social Dilemma is a docudrama that claims to provide an unsparing portrait of what social media platforms have wrought. While the film is made up of a hodgepodge of elements, at its core are a series of interviews with Silicon Valley alumni who are concerned with the direction in which their former companies are pushing the world. Most notable amongst these – the film’s central character, to the extent it has one – is Tristan Harris (formerly a design ethicist at Google, and one of the cofounders of The Center for Humane Technology), who is not only repeatedly interviewed but is also shown testifying before the Senate and delivering a TED-style address to a room filled with tech luminaries. This cast of remorseful insiders is bolstered by a smattering of academics and non-profit leaders, who provide some additional context and theoretical heft to the insiders’ recollections. And beyond these interviews the film incorporates a fictional quasi-narrative element depicting the members of a family (particularly its three teenage children) as they navigate their Internet-addled world – with this narrative providing the film an opportunity to strikingly dramatize how social media “works.”

    The Social Dilemma makes some important points about the way that social media works, and the insiders interviewed in the film bring a noteworthy perspective. Yet beyond the sad eyes, disturbing animations, and ominous music, The Social Dilemma is a piece of manipulative filmmaking on par with the social media platforms it critiques. While presenting itself as a clear-eyed exposé of Silicon Valley, the film is ultimately a redemption tour for a gaggle of supposedly reformed techies, wrapped in an account that is so desperate to appeal to “both sides” that it is unwilling to speak hard truths.

    The film warns that the social media companies are not your friends, and that is certainly true, but The Social Dilemma is not your friend either.

    The Social Dilemma

    As the film begins the insiders introduce themselves, naming the companies where they had worked, and identifying some of the particular elements (such as the “like” button) with which they were involved. Their introductions are peppered with expressions of concern intermingled with earnest comments about how “Nobody, I deeply believe, ever intended any of these consequences,” and that “There’s no one bad guy.” As the film transitions to Tristan Harris rehearsing for the talk that will feature later in the film, he comments that “there’s a problem happening in the tech industry, and it doesn’t have a name.” After recounting his personal awakening, whilst working at Google, and his attempt to spark a serious debate about these issues with his coworkers, the film finds “a name” for the “problem” Harris had alluded to: “surveillance capitalism.” The thinker who coined that term, Shoshana Zuboff, appears to discuss this concept which captures the way in which Silicon Valley thrives not off of users’ labor but off of every detail that can be sucked up about those users and then sold off to advertisers.

    After being named, “surveillance capitalism” hovers in the explanatory background as the film considers how social media companies constantly pursue three goals: engagement (to keep you coming back), growth (to get you to bring in more users), and advertising (to get better at putting the right ad in front of your eyes, which is how the platforms make money). The algorithms behind these platforms are constantly being tweaked through A/B testing, with every small improvement being focused on keeping users more engaged. Numerous problems emerge: designed to be addictive, these platforms and devices claw at users’ attention; teenagers (especially young ones) struggle as their sense of self-worth becomes tied to “likes”; misinformation spreads rapidly in an information ecosystem wherein the incendiary gets more attention than the true; and the slow processes of democracy struggle to keep up with the speed of technology. Though the concerns are grave, and the interviewees are clearly concerned, the tone remains hopeful; the problem here is not really social media, but “surveillance capitalism,” and if “surveillance capitalism” can be thwarted then the true potential of social media can be attained. And the people leading that charge against “surveillance capitalism”? Why, none other than the reformed insiders in the film.

    While the bulk of the film consists of interviews and news clips, the film is periodically interrupted by a narrative in which a family with three teenage children is shown. The Mother (Barbara Gehring) and Step-Father (Chris Grundy) are concerned with their children’s social media usage, even as they are glued to their own devices. As for the children: the oldest, Cassandra (Kara Hayward), is presented as skeptical towards social media; the youngest, Isla (Sophia Hammons), is eager for online popularity; and the middle child, Ben (Skyler Gisondo), eventually falls down the rabbit hole of recommended conspiratorial content. As the insiders and academics talk about the various dangers of social media, the film shifts to the narrative to dramatize these moments – thus a discussion of social media’s impact on young teenagers, particularly girls, cuts to Isla being distraught after an insulting comment is added to one of the images she uploads. Cassandra (that name choice can’t be a coincidence) is presented as the character most in line with the film’s general message: she refers to Jaron Lanier as a “genius” and in another sequence is shown reading Zuboff’s The Age of Surveillance Capitalism. Yet the member of the family the film dwells on the most is almost certainly Ben. For the purposes of dramatizing how an algorithm works, the film repeatedly returns to a creepy depiction of the Advertising, Engagement, and Growth AIs (all played by Vincent Kartheiser) as they scheme to get Ben to stay glued to his phone. Beyond the screens, the world in the narrative is being rocked by a strange protest movement calling itself “The Extreme Center” – whose argument seems to be that both sides can’t be trusted – and Ben eventually gets wrapped up in their message. The family’s narrative concludes with Ben and Cassandra getting arrested at a raucous rally held by “The Extreme Center,” sitting handcuffed on the ground and wondering how it is that this could have happened.

    To the extent that The Social Dilemma builds towards a conclusion, it is the speech that Harris gives (before an audience that includes many of the other interviewees in the film). And in that speech, and the other comments made around it, the point that is emphasized is that Silicon Valley must get away from “surveillance capitalism.” It must embrace “humane technology” that seeks to empower users, not entangle them. Emphasizing that, despite how things have turned out, “I don’t think these guys set out to be evil,” the various insiders double-down on their belief in high-tech’s liberatory potential. Contrasting rather unflattering imagery of Mark Zuckerberg testifying (without genuinely calling him out) with images of Steve Jobs in his iconic turtleneck, the film claims “the idea of humane technology, that’s where Silicon Valley got its start.” And before the credits roll, Harris seems to speak for his fellow insiders as he notes “we built these things, and we have a responsibility to change it.” For those who found the film unsettling, and who are confused by exactly what they are meant to do if they are not part of Harris’s “we,” the film offers some straightforward advice. Drawing on their own digital habits, the insiders recommend: turning off notifications, never watching a recommended video, opting for a less-invasive search engine, trying to escape your content bubble, keeping your devices out of your bedroom, and being a critical consumer of information.

    It is a disturbing film, and it is constructed so as to unsettle the viewer, but it still ends on a hopeful note: reform is possible, and the people in this film are leading that charge. The problem is not social media as such, but the ways in which “surveillance capitalism” has thwarted what social media could really be. If, after watching The Social Dilemma, you feel concerned about what “surveillance capitalism” has done to social media (and you feel prepared to make some tweaks in your social media use) but ultimately trust that Silicon Valley insiders are on the case—then the film has succeeded in its mission. After all, the film may be telling you to turn off Facebook notifications, but it doesn’t recommend deleting your account.

    Yet one of the points the film makes is that you should not accept the information that social media presents to you at face value. And in the same spirit, you should not accept the comments made by oh-so-remorseful Silicon Valley insiders at face value either. To be absolutely clear: we should be concerned about the impacts of social media, we need to work to rein in the power of these tech companies, we need to be willing to have the difficult discussion about what kind of society we want to live in…but we should not believe that the people who got us into this mess—who lacked the foresight to see the possible downsides in what they were building—will get us out of this mess. If these insiders genuinely did not see the possible downsides of what they were building, then they are fools who should not be trusted. And if these insiders did see the possible downsides, continued building these things anyways, and are now pretending that they did not see the downsides, then they are liars who definitely should not be trusted.

    It’s true, arsonists know a lot about setting fires, and a reformed arsonist might be able to give you some useful fire safety tips—but they are still arsonists.

    There is much to be said about The Social Dilemma. Indeed, anyone who cares about these issues (unfortunately) needs to engage with The Social Dilemma if for no other reason than the fact that this film will be widely watched, and will thus set much of the ground on which these discussions take place. Therefore, it is important to dissect certain elements of the film. To be clear, there is a lot to explore in The Social Dilemma—a book or journal issue could easily be published in which the docudrama is cut into five minute segments with academics and activists being each assigned one segment to comment on. While there is not the space here to offer a frame by frame analysis of the entire film, there are nevertheless a few key segments in the film which deserve to be considered. Especially because these key moments capture many of the film’s larger problems.

    “when bicycles showed up”

    A moment in The Social Dilemma that perfectly, if unintentionally, sums up many of the major flaws with the film occurs when Tristan Harris opines on the history of bicycles. There are several problems in these comments, but taken together these lines provide you with almost everything you need to know about the film. As Harris puts it:

    No one got upset when bicycles showed up. Right? Like, if everyone’s starting to go around on bicycles, no one said, ‘Oh, my God, we’ve just ruined society. [chuckles] Like, bicycles are affecting people. They’re pulling people away from their kids. They’re ruining the fabric of democracy. People can’t tell what’s true.’ Like we never said any of that stuff about a bicycle.

    Here’s the problem: Harris’s comments about bicycles are wrong.

    They are simply historically inaccurate. Some basic research into the history of bicycles that looks at the ways that people reacted when they were introduced would reveal that many people were in fact quite “upset when bicycles showed up.” People absolutely were concerned that bicycles were “affecting people,” and there were certainly some who were anxious about what these new technologies meant for “the fabric of democracy.” Granted, that there were such adverse reactions to the introduction of bicycles should not be seen as particularly surprising, because even a fairly surface-level reading of the history of technology reveals that when new technologies are introduced they tend to be met not only with excitement, but also with dread.

    Yet, what makes Harris’s point so interesting is not just that he is wrong, but that he is so confident while being so wrong. Smiling before the camera, in what is obviously supposed to be a humorous moment, Harris makes a point about bicycles that is surely one that will stick with many viewers—and what he is really revealing is that he needs to take some history classes (or at least do some reading). It is genuinely rather remarkable that this sequence made it into the final cut of the film. This was clearly an expensive production, but they couldn’t have hired a graduate student to watch the film and point out “hey, you should really cut this part about bicycles, it’s wrong”? It is hard to put much stock in Harris, and friends, as emissaries of technological truth when they can’t be bothered to do basic research.

    That Harris speaks so assuredly about something which he is so wrong about gets at one of the central problems with the reformed insiders of The Social Dilemma. Though these are clearly intelligent people (lots of emphasis is placed on the fancy schools they attended), they know considerably less than they would like the viewers to believe. Of course, one of the ways that they get around this is by confidently pretending they know what they’re talking about, which manifests itself by making grandiose claims about things like bicycles that just don’t hold up. The point is not to mock Harris for this mistake (though it really is extraordinary that the segment did not get cut), but to make the following point: if Harris, and his friends, had known a bit more about the history of technology, and perhaps if they had a bit more humility about what they don’t know, perhaps they would not have gotten all of us into this mess.

    A point that is made by many of the former insiders interviewed for the film is that they didn’t know what the impacts would be. Over and over again we hear some variation of “we meant well” or “we really thought we were doing something great.” It is easy to take such comments as expressions of remorse, but it is more important to see such comments as confessions of that dangerous mixture of hubris and historical/social ignorance that is so common in Silicon Valley. Or, to put it slightly differently, these insiders really needed to take some more courses in the humanities. You know how you could have known that technologies often have unforeseen consequences? Study the history of technology. You know how you could have known that new media technologies have jarring political implications? Read some scholarship from media studies. A point that comes up over and over again in such scholarly work, particularly works that focus on the American context, is that optimism and enthusiasm for new technology often keeps people (including inventors) from seeing the fairly obvious risks—and all of these woebegone insiders could have known that…if they had only been willing to do the reading. Alas, as anyone who has spent time in a classroom knows, a time honored way of covering up for the fact that you haven’t done the reading is just to speak very confidently and hope that your confidence will successfully distract from the fact that you didn’t do the reading.

    It would be an exaggeration to claim “all of these problems could have been prevented if these people had just studied history!” And yet, these insiders (and society at large) would likely be better able to make sense of these various technological problems if more people had an understanding of that history. At the very least, such historical knowledge can provide warnings about how societies often struggle to adjust to new technologies, can teach how technological progress and social progress are not synonymous, can demonstrate how technologies have a nasty habit of biting back, and can make clear the many ways in which the initial liberatory hopes that are attached to a technology tend to fade as it becomes clear that the new technology has largely reinscribed a fairly conservative status quo.

    At the very least, knowing a bit more about the history of technology can keep you from embarrassing yourself by confidently claiming that “we never said any of that stuff about a bicycle.”

    “to destabilize”

    While The Social Dilemma expresses concern over how digital technologies impact a person’s body, the film is even more concerned about the way these technologies impact the body politic. A worry that is captured by Harris’s comment that:

    We in the tech industry have created the tools to destabilize and erode the fabric of society.

    That’s quite the damning claim, even if it is one of the claims in the film that probably isn’t all that controversial these days. Though many of the insiders in the film pine nostalgically for those idyllic days from ten years ago when much of the media and the public looked so warmly towards Silicon Valley, this film is being released at a moment when much of that enthusiasm has soured. One of the odd things about The Social Dilemma is that politics are simultaneously all over the film, and yet politics in the film are very slippery. When the film warns of looming authoritarianism, Bolsonaro gets some screen time, and Putin gets some ominous screen time—but though Trump looms in the background of the film he’s pretty much unseen and unnamed. And when US politicians do make appearances we get Marco Rubio and Jeff Flake talking about how people have become too polarized, and Jon Tester reacting with discomfort to Harris’s testimony. Of course, in the clip that is shown, Rubio speaks some pleasant platitudes about the virtues of coming together…but what does his voting record look like?

    The treatment of politics in The Social Dilemma comes across most clearly in the narrative segment, wherein much attention is paid to a group that calls itself “The Extreme Center.” Though the ideology of this group is never made quite clear, it seems to be a conspiratorial group that takes as its position that “both sides are corrupt” – rejecting left and right, it therefore places itself in “the extreme center.” It is into this group, and the political rabbit hole of its content, that Ben falls in the narrative – and the raucous rally (that ends in arrests) in the narrative segment is one put on by “The Extreme Center.” It may appear that “The Extreme Center” is just a simple storytelling technique, but more than anything else it feels like the creation of this fictional protest movement is really just a way for the film to get around actually having to deal with real world politics.

    The film includes clips from a number of protests (though it does not bother to explain who these people are and why they are protesting), and there are some moments when various people can be heard specifically criticizing Democrats or Republicans. But even as the film warns of “the rabbit hole” it doesn’t really spend much time on examples. Heck, the first time the words “surveillance capitalism” get spoken in the film is in a clip of Tucker Carlson. Some points are made about “pizzagate” but the documentary avoids commenting on the rapidly spreading QAnon conspiracy theory. And to the extent that any specific conspiracy receives significant attention it is the “flat earth” conspiracy. Granted, it’s pretty easy to deride the flat earthers, and in focusing on them the film makes a very conscious decision to not focus on white supremacist content and QAnon. Ben falls down the “extreme center” rabbit hole, and it may well be that the filmmakers have him fall down this fictional rabbit hole so that they don’t have to talk about the likelihood that, in the real world, he would fall down a far-right rabbit hole. But The Social Dilemma doesn’t want to make that point, after all, in the political vision it puts forth the problem is that there is too much polarization and extremism on both sides.

    The Social Dilemma clearly wants to avoid taking sides. And in so doing demonstrates the ways in which Silicon Valley has taken sides. After all, to focus so heavily on polarization and the extremism of “both sides” just serves to create a false equivalency where none exists. But, the view that “the Trump administration has mismanaged the pandemic” and the view that “the pandemic is a hoax” – are not equivalent. The view that “climate change is real” and “climate change is a hoax” – are not equivalent. People organizing for racial justice and people organizing because they believe that Democrats are satanic cannibal pedophiles – are not equivalent. The view that “there is too much money in politics” and the view that “the Jews are pulling the strings” – are not equivalent. Of course, to say that these things “are not equivalent” is to make a political judgment, but by refusing to make such a judgment The Social Dilemma presents both sides as being equivalent. There are people online who are organizing for the cause of racial justice, and there are white-supremacists organizing online who are trying to start a race war—those causes may look the same to an algorithm, and they may look the same to the people who created those algorithms, but they are not the same.

    You cannot address the fact that Facebook and YouTube have become hubs of violent xenophobic conspiratorial content unless you are willing to recognize that Facebook and YouTube actively push violent xenophobic conspiratorial content.

    It is certainly true that there are activist movements from the left and the right organizing online at the moment, but when you watch a movie trailer on YouTube the next recommended video isn’t going to be a talk by Angela Davis.

    “it’s the critics”

    Much of the content of The Social Dilemma is unsettling, and the film makes it clear that change is necessary. Nevertheless, the film ends on a positive note. Pivoting away from gloominess, the film shows the rapt audience nodding as Harris speaks of the need for “humane technology,” and this assembled cast of reformed insiders is presented as proof that Silicon Valley is waking up to the need to take responsibility. Near the film’s end, Jaron Lanier hopefully comments that:

    it’s the critics that drive improvement. It’s the critics who are the true optimists.

    Thus, the sense that is conveyed at the film’s close is that despite the various worries that had been expressed—the critics are working on it, and the critics are feeling good.

    But, who are the critics?

    The people interviewed in the film, obviously.

    And that is precisely the problem. “Critic” is something of a challenging term to wrestle with as it doesn’t necessarily take much to be able to call yourself, or someone else, a critic. Thus, the various insiders who are interviewed in the film can all be held up as “critics” and can all claim to be “critics” thanks to the simple fact that they’re willing to say some critical things about Silicon Valley and social media. But what is the real content of the criticisms being made? Some critics are going to be more critical than others, so how critical are these critics? Not very.

    The Social Dilemma is a redemption tour that allows a bunch of remorseful Silicon Valley insiders to rebrand themselves as critics. Based on the information provided in the film it seems fairly obvious that a lot of these individuals are responsible for causing a great deal of suffering and destruction, but the film does not argue that these men (and they are almost entirely men) should be held accountable for their deeds. The insiders have harsh things to say about algorithms, they too have been buffeted about by nonstop nudging, they are also concerned about the rabbit hole, they are outraged at how “surveillance capitalism” has warped technological possibilities—but remember, they meant well, and they are very sorry.

    One of the fascinating things about The Social Dilemma is that in one scene a person will proudly note that they are responsible for creating a certain thing, and then in the next scene they will say that nobody is really to blame for that thing. Certainly not them, they thought they were making something great! The insiders simultaneously want to enjoy the cultural clout and authority that comes from being the one who created the like button, while also wanting to escape any accountability for being the person who created the like button. They are willing to be critical of Silicon Valley, they are willing to be critical of the tools they created, but when it comes to their own culpability they are desperate to hide behind a shield of “I meant well.” The insiders do a good job of saying remorseful words, and the camera catches them looking appropriately pensive, but it’s no surprise that these “critics” should feel optimistic, they’ve made fortunes utterly screwing up society, and they’ve done such a great job of getting away with it that now they’re getting to elevate themselves once again by rebranding themselves as “critics.”

    To be a critic of technology, to be a social critic more broadly, is rarely a particularly enjoyable or a particularly profitable undertaking. Most of the time, if you say anything critical about technology you are mocked as a Luddite, laughed at as a “prophet of doom,” derided as a technophobe, accused of wanting everybody to go live in caves, and banished from the public discourse. That is the history of many of the twentieth century’s notable social critics who raised the alarm about the dangers of computers decades before most of the insiders in The Social Dilemma were born. Indeed, if you’re looking for a thorough retort to The Social Dilemma you cannot really do better than reading Joseph Weizenbaum’s Computer Power and Human Reason—a book which came out in 1976. That a film like The Social Dilemma is being made may be a testament to some shifting attitudes towards certain types of technology, but it was not that long ago that if you dared suggest that Facebook was a problem you were denounced as an enemy of progress.

    There are many phenomenal critics speaking out about technology these days. To name only a few: Safiya Noble has written at length about the ways that the algorithms built by companies like Google and Facebook reinforce racism and sexism; Virginia Eubanks has exposed the ways in which high-tech tools of surveillance and control are first deployed against society’s most vulnerable members; Wendy Hui Kyong Chun has explored how our usage of social media becomes habitual; Jen Schradie has shown the ways in which, despite the hype to the contrary, online activism tends to favor right-wing activists and causes; Sarah Roberts has pulled back the screen on content moderation to show how much of the work supposedly being done by AI is really being done by overworked and under-supported laborers; Ruha Benjamin has made clear the ways in which discriminatory designs get embedded in and reified by technical systems; Christina Dunbar-Hester has investigated the ways in which communities oriented around technology fail to overcome issues of inequality; Sasha Costanza-Chock has highlighted the need for an approach to design that treats challenging structural inequalities as the core objective, not an afterthought; Morgan Ames expounds upon the “charisma” that develops around certain technologies; and Meredith Broussard has brilliantly inveighed against the sort of “technochauvinist” thinking—the belief that technology is the solution to every problem—that is so clearly visible in The Social Dilemma. To be clear, this list of critics is far from all-inclusive. There are numerous other scholars who certainly could have had their names added here, and there are many past critics who deserve to be named for their disturbing prescience.

    But you won’t hear from any of those contemporary critics in The Social Dilemma. Instead, viewers of the documentary are provided with a steady set of mostly male, mostly white, reformed insiders who were unable to predict that the high-tech toys they built might wind up having negative implications.

    It is not only that The Social Dilemma ignores most of the figures who truly deserve to be seen as critics, but that by doing so The Social Dilemma sets the boundaries for who gets to be a critic and what that criticism can look like. The world of criticism that The Social Dilemma sets up is one wherein a person achieves legitimacy as a critic of technology as a result of having once been a tech insider. Thus what the film does is lay out, and then set about policing the borders of, what can pass for acceptable criticism of technology. This not only limits the cast of critics to a narrow slice of mostly white, mostly male insiders, it also limits what can be put forth as a solution. You can rest assured that the former insiders are not going to advocate for a response that would involve holding the people who build these tools accountable for what they’ve created. It is remarkable that no one in the film really goes after Mark Zuckerberg, but then many of these insiders can’t go after Zuckerberg—because any vitriol they direct at him could just as easily be directed at them as well.

    It matters who gets to be deemed a legitimate critic. When news networks are looking to have a critic on, it matters whether they call Tristan Harris or one of the previously mentioned thinkers; when Facebook does something else horrendous, it matters whether a newspaper seeks out someone whose own self-image is bound up in the idea that the company means well or someone who is willing to say that Facebook is itself the problem. When there are dangerous fires blazing everywhere, it matters whether the voices that get heard are apologetic arsonists or firefighters.

    Near the film’s end, while the credits play, Jaron Lanier speaks of Silicon Valley, noting: “I don’t hate them. I don’t wanna do any harm to Google or Facebook. I just want to reform them so they don’t destroy the world. You know?” And these comments capture the core ideology of The Social Dilemma: that Google and Facebook can be reformed, and that the people who can reform them are the people who built them.

    But considering all of the tangible harm that Google and Facebook have done, it is far past time to say that it isn’t enough to “reform” them. We need to stop them.

    Conclusion: On “Humane Technology”

    The Social Dilemma is an easy film to criticize. After all, it’s a highly manipulative piece of filmmaking, filled with overly simplified claims, historical inaccuracies, politics lacking conviction, and a cast of remorseful insiders who still believe Silicon Valley’s basic mythology. The film is designed to scare you, but it then works to direct that fear into a few banal personal lifestyle tweaks, while convincing you that Silicon Valley really does mean well. It is important to view The Social Dilemma not as a genuine warning, or as a push for a genuine solution, but as part of a desperate move by Silicon Valley to rehabilitate itself so that any push for reform and regulation can be captured and defanged by “critics” of its own choosing.

    Yet, it is too simple (even if it is accurate) to portray The Social Dilemma as an attempt by Silicon Valley to control both the sale of flamethrowers and fire extinguishers. Because such a focus keeps our attention pinned to Silicon Valley. It is easy to criticize Silicon Valley, and Silicon Valley definitely needs to be criticized—but the bright-eyed faith in high-tech gadgets and platforms that these reformed insiders still cling to is not shared only by them. The people in this film blame “surveillance capitalism” for warping the liberatory potential of Internet-connected technologies, and many people would respond to this by pushing back on Zuboff’s neologism to point out that “surveillance capitalism” is really just “capitalism,” and that therefore the problem is really that capitalism is warping the liberatory potential of Internet-connected technologies. Yes, we certainly need to have a conversation about what to do with Facebook and Google (dismantle them). But at a certain point we also need to recognize that the problem is deeper than Facebook and Google; at a certain point we need to be willing to talk about computers.

    The question that occupied many past critics of technology was what kinds of technology we really need. And they were clear that this question was far too important to be left to machine-worshippers.

    The Social Dilemma responds to the question of “what kind of technology do we really need?” by saying “humane technology.” After all, the organization The Center for Humane Technology is at the core of the film, and Harris speaks repeatedly of “humane technology.” At the surface level it is hard to imagine anyone saying that they disapprove of the idea of “humane technology,” but what the film means by this (and what the organization means by this) is fairly vacuous. When the Center for Humane Technology launched in 2018, to a decent amount of praise and fanfare, it was clear from the outset that its goal had more to do with rehabilitating Silicon Valley’s image than truly pushing for a significant shift in technological forms. Insofar as “humane technology” means anything, it stands for platforms and devices that are designed to be a little less intrusive, that are designed to try to help you be your best self (whatever that means), that try to inform you instead of misinform you, and that make it so that you can think nice thoughts about the people who designed these products. The purpose of “humane technology” isn’t to stop you from being “the product,” it’s to make sure that you’re a happy product. “Humane technology” isn’t about deleting Facebook, it’s about renewing your faith in Facebook so that you keep clicking on the “like” button. And, of course, “humane technology” doesn’t seem to be particularly concerned with all of the inhumanity that goes into making these gadgets possible (from mining, to conditions in assembly plants, to e-waste). “Humane technology” isn’t about getting Ben or Isla off their phones, it’s about making them feel happy when they click on them instead of anxious. In a world of empowered arsonists, “humane technology” seeks to give everyone a pair of asbestos socks.

    Many past critics also argued that what was needed was to place a new word before technology – they argued for “democratic” technologies, or “holistic” technologies, or “convivial” technologies, or “appropriate” technologies, and this list could go on. Yet at the core of those critiques was not an attempt to salvage the status quo but a recognition that what was necessary in order to obtain a different sort of technology was to have a different sort of society. Or, to put it another way, the matter at hand is not to ask “what kind of computers do we want?” but to ask “what kind of society do we want?” and to then have the bravery to ask how (or if) computers really fit into that world—and if they do fit, how ubiquitous they will be, and who will be responsible for the mining/assembling/disposing that are part of those devices’ lifecycles. Certainly, these are not easy questions to ask, and they are not pleasant questions to mull over, which is why it is so tempting to just trust that the Center for Humane Technology will fix everything, or to just say that the problem is Silicon Valley.

    Thus as the film ends we are left squirming unhappily as Netflix (which has, of course, noted the fact that we watched The Social Dilemma) asks us to give the film a thumbs up or a thumbs down – before it begins auto-playing something else.

    The Social Dilemma is right in at least one regard, we are facing a social dilemma. But as far as the film is concerned, your role in resolving this dilemma is to sit patiently on the couch and stare at the screen until a remorseful tech insider tells you what to do.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay

    _____

    Works Cited

    • Weizenbaum, Joseph. 1976. Computer Power and Human Reason: From Judgment to Calculation. New York: WH Freeman & Co.
  • Sareeta Amrute — Sounding the Flat Alarm (Review of Shoshana Zuboff, The Age of Surveillance Capitalism)

    Sareeta Amrute — Sounding the Flat Alarm (Review of Shoshana Zuboff, The Age of Surveillance Capitalism)

    a review of Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019)

    by Sareeta Amrute

    Shoshana Zuboff’s The Age of Surveillance Capitalism begins badly: the author’s house burns down. Her home is struck by lightning, and it takes Zuboff a few minutes to realize the enormity of the conflagration happening all around her and to escape. The book, written after the fire goes out, is a warning about the enormity of the changes kindled while we slept. Zuboff describes a world in which autonomy, agency, and privacy–the walls of her house–are under threat by a corporate apparatus that records everything in order to control behavior. That act of monitoring and recording inaugurates a new era in the development of capitalism that Zuboff believes is destructive of both individual liberty and democratic institutions.

    Surveillance Capitalism is the alarm to all of us to get out of the house, lest it burn down all around us. In making this warning, however, Zuboff discounts the long history of surveillance outside the middle-class enclaves of Europe and the United States and assumes that protecting the privacy of individuals in those same enclaves will solve the problem of surveillance for the Rest.

    The house functions as a metaphor throughout the book: first as a warning about how difficult it is to recognize a radical remaking of our world as it is happening (this change is akin to a lightning strike); second as an indicator of the kind of world we inhabit, a world that could be enhancing of life but instead treats life as a resource to be extracted; and third as protection, an idea Zuboff uses to solve the other two problems.

    Zuboff contrasts an early moment of the digitally connected world, an internet of things that was on a closed circuit within one house, to the current moment, where the same devices are wired to the companies that make them. For Zuboff, that difference demonstrates the exponential changes that happened between the early promise of the internet and its current malformation. Surveillance Capitalism argues that from the connective potential of the early Internet has come the current dystopian state of affairs, where human behavior is monitored by companies in order to nudge that behavior toward predetermined ends. In this way, Surveillance Capitalism reverses an earlier moment of connectivity boosterism, exemplified by the title of Thomas Friedman’s popular 2005 book, The World is Flat, which celebrated technologically-produced globalization.[1] The years from the mid to late 2000s witnessed a significant critique of the flat world hypothesis, which could be summed up as an argument for both the vast unevenness of the world and for the continuous remaking of global tropes into local and varied meanings. Yet here we are again, it seems, in 2020, except instead of celebrating flatness, we are sounding the flat alarm.

    The book’s very dimensions–it is a doorstop, on purpose–act as an inoculation against the thinness and flatness Zuboff diagnoses as predominant features of our world. Zuboff argues that these features are unprecedented, that they mark an extreme deviation from capitalism as it has been. They therefore require both a new name and new analytic tools. The name “surveillance capitalism” describes information-gathering enterprises that are unprecedented in human history, and that information, Zuboff writes, is used to predict “our futures for the sake of others’ gain, not ours” (11). As tech companies increasingly use our data to steer behavior towards products and advertising, our ability to experience a deep interiority where we can exercise autonomous choice shrinks. Importantly for Zuboff, these companies collect not just data willingly given, but the data exhaust that we often unknowingly and unintentionally emit as we move through a world mediated by our devices. Behavioral nudges mark for Zuboff the ultimate endpoint of a capitalism gone awry, a capitalism that drives humans to abandon free will in favor of being governed by corporations that use aggregate data about individual interactions to determine future human action.

    Zuboff’s flat alarm usefully takes the reader through the philosophical underpinnings of behaviorism, following the work of B.F. Skinner, a psychologist working at Harvard in the mid-twentieth century who believed adjusting human behavior was a matter of changing external environments through positive and negative stimuli, or reinforcements. Zuboff argues that behaviorist attitudes toward the world, considered outré in their time, have moved to the heart of Silicon Valley philosophies of disruption, where they meet a particular mode of capital accumulation driven by the logics of venture, neutrality, and macho meritocracies. The result is an ideology of tools and of making humans into tools that Zuboff terms instrumentarianism, at once driven to produce companies that are profitable for venture capitalists and investors and to treat human beings as sources of data to be turned toward profitability. Widespread surveillance is a necessary feature of this new world order because it is through the observation of every detail of human life that these companies can amass the data they need to turn a profit by predicting and ultimately controlling, or tuning, human behavior.

    Zuboff identifies key figures in the development of surveillance capitalism, including the aforementioned Skinner. Her particular mode of critique tends to focus on CEOs, and Zuboff reads their pronouncements as signs of the legacy of behaviorism in the C-Suites of contemporary firms. Zuboff also spends several chapters situating the critics of these surveillance capitalists as those who need to raise the flat world alarm. She compares this need to both her personal experience with the house fire and the experience of thinkers such as Hannah Arendt writing on totalitarianism. Here, she draws an explicit parallel that conjoins totalitarianism and surveillance capital. Zuboff argues that just as totalitarianism was unthinkable as it was unfolding, so too does surveillance capitalism seem an impossible future given how we like to think about human behavior and its governance. Zuboff’s argument here is highly persuasive, since she is suggesting that the critics will always come to realize what it is they are critiquing just before it is too late to do anything about it. She also argues that behaviorism is in some sense the inverse of state-governed totalitarianism: while totalitarianism attempted to discipline humans from the inside out, surveillance capitalism is agnostic when it comes to interiority–it only deals in and tries to engineer surface effects. For all this ‘neutrality’ over and against belief, it is equally oppressive, because it aims at social domination.

    Previous reviews have provided an overview of the chapters in this book; I will not repeat the exercise, except to say that the introduction nicely lays out her overall argument and could be used effectively to broach the topic of surveillance for many audiences. The chapters outlining B.F. Skinner’s imprint on behaviorist ideologies are also useful for providing historical context to the current age, as is the general story of Google’s turn toward profitability as told in Part I. And yet the promise of these earlier chapters–particularly the nice turn of phrase “behavioral means of production”–yields in the latter chapters to an impoverished account of our options and of the contradictions at work within tech companies. These lacunae are due at least in part to Zuboff’s choice of revolutionary subject–the middle-class consumer.

    Toward the end of Surveillance Capitalism, Zuboff rebuilds her house, this time with thicker walls. She uses her house’s regeneration to argue for a philosophical concept she calls the “right to sanctuary,” based largely on the writings of Gaston Bachelard, whose Poetics of Space describes for Zuboff how the shelter of home shapes “many of our most fundamental ways of making sense of experience” (477). Zuboff believes that surveillance capitalists want to bring down all these walls, for the sake of opening up our every action to collection and our every impulse to guidance from above. One might pause here and wonder whether the breaking down of walls is not fundamental to capitalism from the beginning, rather than an aberration of the current age. In other words, does the age of surveillance mark such a radical break from the general thrust of capital’s need to open up new markets and exploit new raw materials? Or, more to the point, for whom does it signify a radical aberration? Posing this question would bring into focus the need to interrogate the complicity of the very categories of autonomy, agency, and privacy in the extension of capitalism across geographies, and to historicize the production of interiority within that same frame.

    Against the contemporary tendency toward effacing the interior life of families and individuals, Zuboff offers sanctuary as the right to protection from surveillance. In this moment, that protection needs thick walls. For Zuboff, those walls need to be built by young people–one gets the sense that she is speaking across these sections to her own children and those of her children’s generation. The problem with describing sanctuary in this way is that it narrows the scope for both understanding the stakes of surveillance and recognizing where the battles for control over data will be fought.

    As a broadside, Surveillance Capitalism works through a combination of rhetoric and evidence. Zuboff hopes that a younger generation will fight the watchers for control over their own data. Yet, by addressing largely a well-off, college-educated, and young audience, Zuboff restricts the people who are being asked to take up the cause, and fails to ask the difficult question of what it would take to build a house with thicker walls for everyone.

    A persistent concern while reading this book is whether its analysis can encompass otherwheres. The populations that are most at risk under surveillance capitalism include immigrants, minorities, and workers, both within and outside the United States. The framework of data exhaust and its use to predict and govern behavior does not quite illuminate the uses of data collection to track border crossers, “predict” crime, and monitor worker movements inside warehouses. These relationships require an analysis that can get at the overlap between corporate and government surveillance, which Surveillance Capitalism studiously avoids. The book begins with an analysis of a system of exploitation based on turning data into profits, and argues that the new mode of production makes the motor of capitalism shift from products to information, a point well established by previous literature. Given this analysis, it is astonishing that the last section of the book returns to a defense of individual rights, without stopping to question whether the ‘hive’ forms of organization that Zuboff finds in the logics of surveillance capital may have been a cooptation of radical kinds of social organizing arranged against a different model of exploitation. Leaderless movements like Occupy should be considered fully when describing hives, along with contemporary initiatives like tech worker cooperatives and technical alternatives like local mesh networks. The possibility that these radical forms of social organization may be subject to cooptation by the actors Zuboff describes never appears in the book. Instead, Zuboff appears to mistranslate theories of the subject that locate agency above or below the level of the individual into political acquiescence to a program of total social control. Without taking the step of considering the political potential in ‘hive-like’ social organization, Zuboff’s corrective falls back on notions of individual rights and protections and is unable to imagine a new kind of collective action that moves beyond both individualism and behaviorism. This failure, for instance, skews Zuboff’s arguments toward the familiar ground of data protection as a solution rather than toward the more radical stances of refusal, which question data collection in the first place.

    Zuboff’s world is flat. It is a world in which there are Big Others that suck up an undifferentiated public’s data, Others whose objective is to mold our behavior and steal our free will. In this version of flatness, what was once described positively is now described negatively, as if we had collectively turned a rosy-colored smooth world flat black. Yet, how collective is this experience? How will it play out if the solutions we provide rely on bracketing out the question of what kinds of people and communities are afforded the chance to build thicker walls? This calls forth a deeper issue than simply that of a lack of inclusion of other voices in Zuboff’s account. After all, perhaps fixing the surveillance issue through the kinds of rights to sanctuary that Zuboff suggests would also fix the issue for those who are not usually conceived of as mainstream consumers.

    Except, historical examples ranging from Simone Browne’s explication of surveillance and slavery in Dark Matters to Achille Mbembe’s articulation of necropolitics teach us that consumer protection is a thin filament on which to hang protection for all from overweening surveillance apparatuses–corporate or otherwise. One could easily imagine a world where the privacy rights of well-heeled Americans are protected, but those of others continue to be violated. To reference one pertinent example, companies that are banking on monetizing data through a contractual relationship where individuals sell the data that they themselves own are simultaneously banking on those who need to sell their data to make money. In other words, as legal scholar Stacy-Ann Elvy notes (2017), in a personal data economy low-income consumers will be incentivized to sell their data without much concern for the conditions of sale, even while those who are well-off will have the means to avoid these incentives, resulting in the illusion of individual control and uneven access to privacy determined by degrees of socioeconomic vulnerability. These individuals will also be exposed to a greater degree of risk that their information will not stay secure.

    Simone Browne demonstrates that what we understand as surveillance was developed on and through black bodies, and that these populations of slaves and ex-slaves have developed strategies of avoiding detection, which she calls dark sousveillance. As Browne notes, “routing the study of contemporary surveillance” through the histories of “black enslavement and captivity opens up the possibility for fugitive acts of escape” even while it shows that the normative surveillance of white bodies was built on long histories of experimentations with black bodies (Browne 2015, 164). Achille Mbembe’s scholarship on necropolitics was developed through the insight that some life becomes killable, or in Jasbir Puar’s (2017) memorable phrasing, maimable, at the same time that other life is propagated. Mbembe proposes “necropolitics” to describe “death worlds” in which “death,” not life, “is the space where freedom and negotiation happen,” worlds where “vast populations are subjected to conditions of life conferring on them the status of living dead” (Mbembe 2003, 40). The right to sanctuary appears to short-circuit the spaces where life has already been configured as available for expropriation through perpetual wounding. Crucial to both Browne’s and Mbembe’s arguments is the insight that the study of the uneven harms of surveillance concomitantly surfaces the tactics of opposition and the archives of the world that provide alternative models of refuge outside the contractual property relationship evoked across the pages of Surveillance Capitalism.

    All those considered outside the ambit of individualized rights, including those in territories marked by extrajudicial measures, those deemed illegal, those perennially under threat, those who while at work are unprotected, those in unseen workplaces, and those simply unable to exercise rights to privacy due to law or circumstance, have little place in Zuboff’s analysis. One only has to think of Kashmir, and the access that people with no ties to this place will now have to building houses there, to begin to grasp the contested politics of home-building.[2] Without an acknowledgement of the limits of both the critique of surveillance capitalism and the agents of its proposed solutions, it seems this otherwise promising book will reach the usual audiences and have the usual effect of shoring up some peoples’ and places’ rights even while making the rest of the world and its populations available for experiments in data appropriation.

    _____

    Sareeta Amrute is Associate Professor of Anthropology at the University of Washington. Her scholarship focuses on contemporary capitalism and ways of working, and particularly on the ways race and class are revisited and remade in sites of new economy work, such as coding and software economies. She is the author of the book Encoding Race, Encoding Class: Indian IT Workers in Berlin (Duke University Press, 2016) and recently published the article “Of Techno-Ethics and Techno-Affects” in Feminist Review.

    Back to the essay

    _____

    Notes

    [1] Friedman (2005) attributes this phrase to Nandan Nilekani, then Co-Chair, of Indian Tech company Infosys (and subsequently Chair of the Unique Identification Authority of India).

    [2] Until 2019, Articles 370 and 35A of the Indian Constitution granted the territories of Jammu and Kashmir special status, which allowed the state to keep on its books laws restricting who could buy land and property in Kashmir by allowing the territories to define who counted as a permanent resident. After the abrogation of Article 370, rumors swirled that the rich from Delhi and elsewhere would now be able to purchase holiday homes in the area. See e.g. Devansh Sharma, “All You Need to Know about Buying Property in Jammu and Kashmir“; Parvaiz Bukhari, “Myth No 1 about Article 370: It Prevents Indians from Buying Land in Kashmir.”

    _____

    Works Cited

    • Browne, Simone. 2015. Dark Matters: On the Surveillance of Blackness. Durham, NC: Duke University Press.
    • Elvy, Stacy-Ann. 2017. “Paying for Privacy and the Personal Data Economy.” Columbia Law Review 117:6 (Oct). 1369-1460.
    • Friedman, Thomas. 2005. The World Is Flat: A Brief History of the Twenty-First Century. New York: Farrar, Straus and Giroux.
    • Mbembe, Achille. 2003. “Necropolitics.” Public Culture 15:1 (Winter). 11-40.
    • Mbembe, Achille. 2019. Necropolitics. Durham, NC: Duke University Press.
    • Puar, Jasbir K. 2017. The Right to Maim: Debility, Capacity, Disability. Durham, NC: Duke University Press.

     

  • David Newhoff —  The Harms of Digital Tech and Tech Law (Review of Goldberg, Nobody’s Victim: Fighting Psychos, Stalkers, Pervs, and Trolls)

    David Newhoff — The Harms of Digital Tech and Tech Law (Review of Goldberg, Nobody’s Victim: Fighting Psychos, Stalkers, Pervs, and Trolls)

    a review of Carrie Goldberg (with Jeannine Amber), Nobody’s Victim: Fighting Psychos, Stalkers, Pervs, and Trolls (Plume, 2019)

    by David Newhoff

    ~

    During an exchange on my blog in 2014 with an individual named Anonymous—it must have been a very popular baby name at some point—I was told, “Yes, yes, David, show us on the doll where the Internet touched you, because we all know that all evil comes from there.”  That discussion concerned the internet industry’s anti-copyright agenda, but the smugness of the response, lurking behind a concealed identity while making an eye-rolling allusion to sexual assault, is characteristic of the tech-bro culture that dismisses any conversation about the darker aspects of digital life.  In fact, I am fairly sure it was the same Anonymous who decided that I had “failed the free speech test” because I wrote encouragingly about the prospect of making the conduct generally referred to as “revenge porn” a federal crime.

    Those old exchanges, conducted in the safety of the abstract, came rushing into the foreground while I read attorney Carrie Goldberg’s Nobody’s Victim:  Fighting Psychos, Stalkers, Pervs, and Trolls, because Goldberg and her colleagues do not address conduct like “revenge porn” in the abstract: they deal with it as a tangible and terrifying reality.  It is at her Brooklyn law firm where the victims of that crime (and other forms of harassment and abuse) arrive shattered, frightened and suicidally desperate to escape the hell their lives have become—often with the push of a button.  These are people who can show us exactly how and where the “internet touched” them, and Goldberg’s book is a harrowing tutorial in the various ways online platforms provide opportunity, motive, sanctuary, and even profit for individuals who purposely choose to destroy other human beings.

    Nobody’s Victim reads like an anthology of short thriller/horror stories but for the fact that each of the terrorized protagonists is a real person, and far too many of them are children.  These infuriating anecdotes are interwoven with the story of Goldberg’s own transformation from a young woman nearly destroyed by predatory men to become, as she puts it, the attorney she needed when she was in trouble.  The result is both an inspiring narrative of personal triumph over adversity and a rigorous critique of our inadequate legal framework, which needlessly exacerbates the suffering of people targeted by life-threatening attacks—attacks that were simply not possible before the internet as we know it.

    Covering a lot of ground—from stalking to sextortion—Goldberg tells the stories of her archetypal clients, along with her own jaw-dropping experiences, in a voice that pairs the discipline of a lawyer with the passion of a crusader. “We can be the army to take these motherfuckers down,” her introduction concludes, and “What happened to you matters,” is the mantra of her epilogue.  It is clear that the central message she wants to convey is one of empowerment for the constituency she represents, but the details are chilling to say the least.

    Anyone anywhere can have his or her life torn apart by remote control—i.e. via the web.  All the malefactor really needs is basic computer skills, a little too much time on his hands, and a profoundly broken moral compass.  Psychos, stalkers, pervs, trolls, and assholes are all specific types of criminals in the “Carrie Goldberg Taxonomy of Offenders.”  For instance, the ex-boyfriend who uploads non-consensual intimate images to a revenge-porn site is a psycho, while the site operator, profiting off the misery of others, is an asshole.

    As Goldberg notes in Chapter 6, by the year 2014, there were about 3,000 websites dedicated to hosting revenge porn.  That is a hell of a lot of guys willing to expose their ex-girlfriends to a range of potential trauma—these include public humiliation, job loss, relationship damage, sexual assault, PTSD, and suicide—simply because their partner broke off the relationship.  This volume of men engaging in revenge porn does seem to imply that the existence of the technology itself becomes a motive or rationale for the conduct, but that is perhaps a subject to explore in another post.

    One theme that comes through loud and clear for me in Nobody’s Victim—particularly given the editorial scope of my blog—is that the individual conduct of the psychos et al. is only slightly less maddening than our systemic failure to protect the victims.  As a cyber-policy matter, that means the chronic misinterpretation of Section 230 of the Communications Decency Act as a speech-right protection and a blanket liability shield for online service providers.

    Taking on Section 230

    Goldberg’s most high-profile client, Matthew Herrick, was the target of a disgruntled ex-boyfriend named Juan Carlos Gutierrez, who tried, via the gay dating app Grindr, to get Herrick at least raped, if not murdered.  By creating several Grindr accounts designed to impersonate Herrick, Gutierrez posted invitations to seek him out for rough, “rape-fantasy” sex, including messages that any protests to stop should be taken as “part of the game.”  Hundreds of men swarmed into Herrick’s life for more than a year—appearing at his home and work, often becoming verbally or physically aggressive upon discovering that he was not offering what they were looking for.

    With Goldberg’s help, Herrick succeeded in getting Gutierrez convicted on felony charges, but what they could never obtain was even the most basic form of assistance from Grindr.  You might think it would be at least common courtesy for an internet business to remove accounts that falsely claim to be you—particularly when those accounts are being used to facilitate criminal threats to your safety and livelihood.  In fact, Scruff, a smaller dating app that Gutierrez had been using, eagerly and sympathetically complied with Herrick’s plea for help.  But Grindr told him to fuck off by saying, “There’s nothing we can do.”

    Herrick, through Goldberg, sued Grindr for “negligence, deceptive business practices and false advertising, intentional and negligent infliction of emotional distress, failure to warn, and negligent misrepresentation.”  They lost in both the District Court and in the Second Circuit Court of Appeals, principally because most courts continue to read Section 230 of the CDA as absolute immunity for online service providers.  This cognitive dissonance, which chooses to ignore the fact that a matter like Herrick’s plight is wholly unrelated to free speech, is exemplified in an amicus brief that the Electronic Frontier Foundation (EFF) filed in the Second Circuit appeal on behalf of Grindr:

    Intermediaries allow Internet users to connect easily with family and friends, follow the news, share opinions and personal experiences, create and share art, and debate politics. Appellant’s efforts to circumvent Section 230’s protections undermine Congress’s goal of encouraging open platforms and robust online speech.

    Isn’t that pretty?  But what the fuck has any of it got to do with using internet technologies to impersonate someone; to commit libel, slander, or defamation in his/her name; to deploy violent people (or in some cases SWAT teams) against a private individual; or to get someone fired or arrested—and all for the perpetrator’s amusement, vengeance, or profit?  None of that conduct is remotely protected by the speech right, and all of it—all of it—infringes the speech rights and other civil liberties of the victims.  Perhaps most absurdly, organizations like EFF choose to overlook the fact that the first right being denied to someone in Herrick’s predicament is the right to safely access all those invaluable activities enabled by online “intermediaries.”

    No, Grindr did not commit those crimes, but let’s be real.  What was Herrick asking Grindr to do?  Remove the conduits through which crimes were being committed against him—online accounts pretending to be him.  Scruff complied, and I didn’t feel a tremor in the free speech right, did you?  If we truly cannot make a legal distinction between Herrick’s circumstances and all that frilly bullshit the EFF likes to repeat ad nauseam, then we are clearly too stupid to reap the benefits of the internet while mitigating its harms.

    Suffice to say, a fight over Section 230 is indeed brewing.  As it heats up, Silicon Valley will marshal its seemingly endless resources to defend the status quo, and they will carpet bomb the public with messages that any change to this law will be an existential threat to the internet as we know it.  There is some truth to that, of course, but the internet as we know it needs a lot of work.  Meanwhile, if anyone is going to win against Big Tech’s juggernaut on this issue, it will be thanks to the leadership of (mostly) women like Carrie Goldberg, her colleagues, and her clients.

    It is an unfortunate axiom that policy rarely changes without some constituency suffering harm for a period of time; and those are exactly the people whose stories Goldberg is in a position to tell—in court, in Congress, and to the public.  If you read Nobody’s Victim and still insist, like my friend Anonymous, that this is all a theoretical debate about anomalous cases, largely mooted by the speech right, there’s a pretty good chance you’re an asshole—if not a psycho, stalker, perv, or troll.  And that clock you hear ticking is actually the sound of Carrie Goldberg’s signature high heels heading your way.

    _____

    David Newhoff is a filmmaker, writer, and communications consultant, and an activist for artists’ rights, especially as they pertain to the erosion of copyright by digital technology companies. He is writing a book about copyright due out in Fall 2020. He writes about these issues frequently as @illusionofmore on Twitter and on the blog The Illusion of More, on which an earlier version of this review first appeared.


  • Audrey Watters — Education Technology and The Age of Surveillance Capitalism (Review of Shoshana Zuboff, The Age of Surveillance Capitalism)

    Audrey Watters — Education Technology and The Age of Surveillance Capitalism (Review of Shoshana Zuboff, The Age of Surveillance Capitalism)

    a review of Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019)

    by Audrey Watters

    ~

    The future of education is technological. Necessarily so.

    Or that’s what the proponents of ed-tech would want you to believe. In order to prepare students for the future, the practices of teaching and learning – indeed the whole notion of “school” – must embrace tech-centered courseware and curriculum. Education must adopt not only the products but the values of the high tech industry. It must conform to the demands for efficiency, speed, scale.

    To resist technology, therefore, is to undermine students’ opportunities. To resist technology is to deny students their future.

    Or so the story goes.

    Shoshana Zuboff weaves a very different tale in her book The Age of Surveillance Capitalism. Its subtitle, The Fight for a Human Future at the New Frontier of Power, underscores her argument that the acquiescence to new digital technologies is detrimental to our futures. These technologies foreclose rather than foster future possibilities.

    And that sure seems plausible, what with our social media profiles being scrutinized to adjudicate our immigration status, our fitness trackers being monitored to determine our insurance rates, our reading and viewing habits being manipulated by black-box algorithms, our devices listening in and nudging us as the world seems to totter towards totalitarianism.

    We have known for some time now that tech companies extract massive amounts of data from us in order to run (and ostensibly improve) their services. But increasingly, Zuboff contends, these companies are now using our data for much more than that: to shape and modify and predict our behavior – “‘treatments’ or ‘data pellets’ that select good behaviors,” as one ed-tech executive described it to Zuboff. She calls this “behavioral surplus,” a concept that is fundamental to surveillance capitalism, which she argues is a new form of political, economic, and social power that has emerged from the “internet of everything.”

    Zuboff draws in part on the work of B. F. Skinner to make her case – his work on behavioral modification of animals, obviously, but also his larger theories about behavioral and social engineering, best articulated perhaps in his novel Walden Two and in his most controversial book Beyond Freedom and Dignity. By shaping our behaviors – through nudges and rewards, “data pellets,” and the like – technologies circumscribe our ability to make decisions. They impede our “right to the future tense,” Zuboff contends.

    Google and Facebook are paradigmatic here, and Zuboff argues that the former was instrumental in discovering the value of behavioral surplus when it began, circa 2003, using user data to fine-tune ad targeting and to make predictions about which ads users would click on. More clicks, of course, led to more revenue, and behavioral surplus became a new and dominant business model, at first for digital advertisers like Google and Facebook but shortly thereafter for all sorts of companies in all sorts of industries.

    And that includes ed-tech, of course – most obviously in predictive analytics software that promises to identify struggling students (such as Civitas Learning) and in behavior management software that’s aimed at fostering “a positive school culture” (like ClassDojo).

    Google and Facebook, whose executives are clearly the villains of Zuboff’s book, have keen interests in the education market too. The former is much more overt, no doubt, with its Google Suite product offerings and its ubiquitous corporate evangelism. But the latter shouldn’t be ignored, even if it’s seen as simply a consumer-facing product. Mark Zuckerberg is an active education technology investor; Facebook has “learning communities” called Facebook Education; and the company’s engineers helped to build the personalized learning platform for the charter school chain Summit Schools. The kinds of data extraction and behavioral modification that Zuboff identifies as central to surveillance capitalism are part of Google and Facebook’s education efforts, even if laws like COPPA prevent these firms from monetizing the products directly through advertising.

    Despite these companies’ influence in education, despite Zuboff’s reliance on B. F. Skinner’s behaviorist theories, and despite her insistence that surveillance capitalists are poised to dominate the future of work – not as a division of labor but as a division of learning – Zuboff has nothing much to say about how education technologies specifically might operate as a key lever in this new form of social and political power that she has identified. (The quotation above from the “data pellet” fellow notwithstanding.)

    Of course, I never expect people to write about ed-tech, despite the importance of the field historically to the development of computing and Internet technologies or the theories underpinning them. (B. F. Skinner is certainly a case in point.) Intertwined with the notion that “the future of education is necessarily technological” is the idea that the past and present of education are utterly pre-industrial, and that digital technologies must be used to reshape education (and education technologies) – this rather than recognizing the long, long history of education technologies and the ways in which these have shaped what today’s digital technologies generally have become.

    As Zuboff relates the history of surveillance capitalism, she contends that it constitutes a break from previous forms of capitalism (forms that Zuboff seems to suggest were actually quite benign). I don’t buy it. She claims she can pinpoint this break to a specific moment and a particular set of actors, positing that the origin of this new system was Google’s development of AdSense. She does describe a number of other factors at play in the early 2000s that led to the rise of surveillance capitalism: notably, a post–9/11 climate in which the US government was willing to overlook growing privacy concerns about digital technologies and to use them instead to surveil the population in order to predict and prevent terrorism. And there are other threads she traces as well: neoliberalism and the pressures to privatize public institutions and deregulate private ones; individualization and the demands (socially and economically) of consumerism; and behaviorism and Skinner’s theories of operant conditioning and social engineering. While Zuboff does talk at length about how we got here, the “here” of surveillance capitalism, she argues, is a radically new place with new markets and new socioeconomic arrangements:

    the competitive dynamics of these new markets drive surveillance capitalists to acquire ever-more-predictive sources of behavioral surplus: our voices, personalities, and emotions. Eventually, surveillance capitalists discovered that the most-predictive behavioral data come from intervening in the state of play in order to nudge, coax, tune, and herd behavior toward profitable outcomes. Competitive pressures produced this shift, in which automated machine processes not only know our behavior but also shape our behavior at scale. With this reorientation from knowledge to power, it is no longer enough to automate information flows about us; the goal now is to automate us. In this phase of surveillance capitalism’s evolution, the means of production are subordinated to an increasingly complex and comprehensive ‘means of behavioral modification.’ In this way, surveillance capitalism births a new species of power that I call instrumentarianism. Instrumentarian power knows and shapes human behavior toward others’ ends. Instead of armaments and armies, it works its will through the automated medium of an increasingly ubiquitous computational architecture of ‘smart’ networked devices, things, and spaces.

    As this passage indicates, Zuboff believes (but never states outright) that a Marxist analysis of capitalism is no longer sufficient. And this is incredibly important as it means, for example, that her framework does not address how labor has changed under surveillance capitalism. Because even with the centrality of data extraction and analysis to this new system, there is still work. There are still workers. There is still class and plenty of room for an analysis of class, digital work, and high tech consumerism. Labor – digital or otherwise – remains in conflict with capital. The Age of Surveillance Capitalism, as Evgeny Morozov’s lengthy review in The Baffler puts it, might succeed as “a warning against ‘surveillance dataism,’” but largely fails as a theory of capitalism.

    Yet the book, while ignoring education technology, might be at its most useful in helping further a criticism of education technology in just those terms: as surveillance technologies, relying on data extraction and behavior modification. (That’s not to say that education technology criticism shouldn’t develop a much more rigorous analysis of labor. Good grief.)

    As Zuboff points out, B. F. Skinner “imagined a pervasive ‘technology of behavior’” that would transform all of society but that, at the very least he hoped, would transform education. Today’s corporations might be better equipped to deliver technologies of behavior at scale, but this was already a big business in the 1950s and 1960s. Skinner’s ideas did not only exist in the fantasy of Walden Two. Nor did they operate solely in the psych lab. Behavioral engineering was central to the development of teaching machines; and despite the story that somehow, after Chomsky denounced Skinner in the pages of The New York Review of Books, no one “did behaviorism” any longer, it remained integral to much of educational computing on into the 1970s and 1980s.

    And on and on and on – a more solid through line than the all-of-a-suddenness that Zuboff narrates for the birth of surveillance capitalism. Personalized learning – the kind hyped these days by Mark Zuckerberg and many others in Silicon Valley – is just the latest version of Skinner’s behavioral technology. Personalized learning relies on data extraction and analysis; it urges and rewards students and promises everyone will reach “mastery.” It gives the illusion of freedom and autonomy perhaps – at least in its name; but personalized learning is fundamentally about conditioning and control.

    “I suggest that we now face the moment in history,” Zuboff writes, “when the elemental right to the future tense is endangered by a panvasive digital architecture of behavior modification owned and operated by surveillance capital, necessitated by its economic imperatives, and driven by its laws of motion, all for the sake of its guaranteed outcomes.” I’m not so sure that surveillance capitalists are assured of guaranteed outcomes. The manipulation of platforms like Google and Facebook by white supremacists demonstrates that it’s not just the tech companies who are wielding this architecture to their own ends.

    Nevertheless, those who work in and work with education technology need to confront and resist this architecture – the “surveillance dataism,” to borrow Morozov’s phrase – even if (especially if) the outcomes promised are purportedly “for the good of the student.”

    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines, forthcoming from The MIT Press. She maintains the widely-read Hack Education blog, on which an earlier version of this piece first appeared, and writes frequently for The b2o Review Digital Studies section on digital technology and education.


  • Zachary Loeb — Hashtags Lean to the Right (Review of Schradie, The Revolution that Wasn’t: How Digital Activism Favors Conservatives)

    Zachary Loeb — Hashtags Lean to the Right (Review of Schradie, The Revolution that Wasn’t: How Digital Activism Favors Conservatives)

    a review of Jen Schradie, The Revolution that Wasn’t: How Digital Activism Favors Conservatives (Harvard University Press, 2019)

    by Zachary Loeb

    ~

    The oft-repeated, and rather questionable, trope that social media is biased against conservatives, and the attention that has been lavished on tech-savvy left-aligned movements (such as Occupy!) in recent years, do not necessarily mean that social media is of greater use to the left. It may be quite the opposite. This is a topic that documentary filmmaker, activist and sociologist Jen Schradie explores in depth in her excellent and important book The Revolution That Wasn’t: How Digital Activism Favors Conservatives. Engaging with the political objectives of activists on the left and the right, Schradie’s book considers the political values that are reified in the technical systems themselves and the ways in which those values more closely align with the aims of conservative groups. Furthermore, Schradie emphasizes the socio-economic factors that allow particular groups to successfully harness high-tech tools, thereby demonstrating how digital activism reinforces the power of those who are already enjoying a fair amount of power. Rather than suggesting that high-tech tools have somehow been stolen from the left by the right, The Revolution That Wasn’t argues that these were not the left’s tools in the first place.

    The background against which Schradie’s analysis unfolds is the state of North Carolina in the years after 2011. Generally seen as a “red state,” North Carolina had flipped blue for Barack Obama in 2008, leading to the state being increasingly seen as a battleground. Even though the state was starting to take on a purplish color, North Carolina was still home to a deeply entrenched conservatism that was reflected (and still is reflected) in many aspects of the state’s laws, and in the legacy of racist segregation that is still felt in the state. Though the Occupy! movement lingers in the background of Schradie’s account, her focus is on struggles in North Carolina around unionization, the rapid growth of the Tea Party, and the emergence of the “Moral Monday” movement which inspired protests across the state (starting in 2013). While many considerations of digital activism have focused on hip young activists festooned with piercings, hacker skills, and copies of The Coming Insurrection—the central characters of Schradie’s book are members of the labor movement, campus activists, Tea Party members, Preppers, people associated with “Patriot” groups, as well as a smattering of paid organizers working for large organizations. And though Schradie is closely attuned to the impact that financial resources have within activist movements, she pushes back against the “astroturf” accusation that is sometimes aimed at right-wing activists, arguing that the groups she observed on both the right and the left reflected genuine populist movements.

    There is a great deal of specificity to Schradie’s study, and many of the things that Schradie observes are particular to the context of North Carolina, but the broader lessons regarding political ideology and activism are widely applicable. In looking at the political landscape in North Carolina, Schradie carefully observes the various groups that were active around the unionization issue, and pays close attention to the ways in which digital tools were used in these groups’ activism. The levels of digital savviness vary across the political groups, and most of the groups demonstrate at least some engagement with digital tools; however, some groups embraced the affordances of digital tools to a much greater extent than others. And where Schradie’s book makes its essential intervention is not simply in showing these differing levels of digital use, but in explaining why. For one of the core observations of Schradie’s account of North Carolina is that it was not the left-leaning groups, but the right-leaning groups who were able to make the most out of digital tools. It’s a point which, to a large degree, runs counter to general narratives on the left (and possibly also the right) about digital activism.

    In considering digital activism in North Carolina, Schradie highlights the “uneven digital terrain that largely abandoned left working-class groups while placing right-wing reformist groups at the forefront of digital activism” (Schradie, 7). In mapping out this terrain, Schradie emphasizes three factors that were pivotal in tilting this ground, namely class, organization, and ideology. Taken independently of one another, each of these three factors provides valuable insight into the challenges posed by digital activism, but taken together they allow for a clear assessment of the ways that digital activism (and digital tools themselves) favor conservatives. It is an analysis that requires some careful wading into definitions (the different ways that right and left groups define things like “freedom” really matters), but these three factors make it clear that “rather than offering a quick technological fix to repair our broken democracy, the advent of digital activism has simply ended up reproducing, and in some cases, intensifying, preexisting power imbalances” (Schradie, 7).

    Considering that the core campaign revolves around unionization, it should not particularly be a surprise that class is a major issue in Schradie’s analysis. Digital evangelists have frequently suggested that high-tech tools allow for the swift breaking down of class barriers by providing powerful tools (and informational access) to more and more people—but the North Carolinian case demonstrates the ways in which class endures. Much of this has to do with the persistence of the digital divide, something which can easily be overlooked by onlookers (and academics) who have grown accustomed to digital tools. Schradie points to the presence of “four constraints” that have a pivotal impact on the class aspect of digital activism: “Access, Skills, Empowerment, and Time” (or ASETs for short; Schradie, 61). “Access” points to the most widely understood part of the digital divide, the way in which some people simply do not have a reliable and routine way of getting ahold of and/or using digital tools—it’s hard to build a strong movement online, when many of your members have trouble getting online. This in turn reverberates with “Skills,” as those who have less access to digital tools often lack the know-how that develops from using those tools—not everyone knows how to craft a Facebook post, or how best to make use of hashtags on Twitter. While digital tools have often been praised precisely for the ways in which they empower users, this empowerment is often not felt by those lacking access and skills, leading many individuals from working-class groups to see “digital activism as something ‘other people’ do” (Schradie, 64). And though it may be the easiest factor to overlook, engaging in digital activism requires Time, something which is harder to come by for individuals working multiple jobs (especially of the sort with bosses that do not want to see any workers using phones at work).

    When placed against the class backgrounds of the various activist groups considered in the book, the ASETs framework clearly sets up a situation in which conservative activists had the advantage. What Schradie found was “not just a question of the old catching up with the young, but of the poor never being able to catch up with the rich” (Schradie, 79), as the more financially secure conservative activists simply had more ASETs than the working-class activists on the left. And though the right-wing activists skewed older than the left-wing activists, they proved quite capable of learning to use new high-tech tools. Furthermore, an extremely important aspect here is that the working-class activists (given their economic precariousness) had more to lose from engaging in digital activism—the conservative retiree will be much less worried about losing their job than the garbage truck driver interested in unionizing.

    Though the ASETs echo throughout the entirety of Schradie’s account, “Time” plays an essential connective role in the shift from matters of class to matters of organization. Contrary to the way in which the Internet has often been praised for invigorating horizontal movements (such as Occupy!), the activist groups in North Carolina attest to the ways in which old bureaucratic and infrastructural tools are still essential. Or, to put it another way, if the various ASETs are viewed as resources, then having a sufficient quantity of all four is key to maintaining an organization. This meant that groups with hierarchical structures, clear divisions of labor, and more staff (be these committed volunteers or paid workers) were better equipped to exploit the affordances of digital tools.

    Importantly, this was not entirely one-sided. Tea Party groups were able to tap into funding and training from larger networks of right-wing organizations, but national unions and civil rights organizations were also able to support left-wing groups. In terms of organization, the overwhelming bias is less pronounced in terms of a right/left dichotomy and more a reflection of a clash between reformist/radical groups. When it came to organization the bias was towards “reformist” groups (right and left) that replicated present power structures and worked within the already existing social systems; the groups that lose out here tend to be the ones that more fully eschew hierarchy (an example of this being student activists). Though digital democracy can still be “participatory, pluralist, and personalized,” Schradie’s analysis demonstrates how “the internet over the long-term favored centralized activism over connective action; hierarchy over horizontalism; bureaucratic positions over networked persons” (Schradie, 134). Thus, the importance of organization demonstrates not how digital tools allowed for a new “participatory democracy” but rather how standard hierarchical techniques continue to be key for groups wanting to participate in democracy.

    Beyond class and organization (insofar as it is truly possible to get past either), the ideology of activists on the left and activists on the right has a profound influence on how these groups use digital tools. For it isn’t the case that the left and the right try to use the Internet for the exact same purpose. Schradie captures this as a difference between pursuing fairness (the left), and freedom (the right)—this largely consisted of left-wing groups seeking a “fairer” allocation of societal power, while those on the right defined “freedom” largely in terms of protecting the allocation of power already enjoyed by these conservative activists. Believing that they had been shut out by the “liberal media,” many conservatives flocked to and celebrated digital tools as a way of getting out “the Truth,” their “digital practices were unequivocally focused on information” (Schradie, 167). As a way of disseminating information, to other people already in possession of ASETs, digital means provided right-wing activists with powerful tools for getting around traditional media gatekeepers. While activists on the left certainly used digital tools for spreading information, their use of the internet tended to be focused more heavily on organizing: on bringing people together in order to advocate for change. Further complicating things for the left is that Schradie found there to be less unity amongst leftist groups in contrast to the relative hegemony found on the right. Comparing the intersection of ideological agendas with digital tools, Schradie is forthright in stating, “the internet was simply more useful to conservatives who could broadcast propaganda and less effective for progressives who wanted to organize people” (Schradie, 223).

    Much of the way that digital activism has been discussed by the press, and by academics, has advanced a narrative that frames digital activism as enhancing participatory democracy. In these standard tales (which often ground themselves in accounts of the origins of the internet that place heavy emphasis on the counterculture), the heroes of digital activism are usually young leftists. Yet, as Schradie argues, “to fully explain digital activism in this era, we need to take off our digital-tinted glasses” (Schradie, 259). Removing such glasses reveals the way in which they have too often focused attention on the spectacular efforts of some movements, while overlooking the steady work of others—thus driving more attention to groups like Occupy! than to the buildup of right-wing groups. And looking at the state of digital activism through clearer eyes reveals many aspects of digital life that are obvious, yet which are continually forgotten, such as the fact that “the internet is a tool that favors people with more money and power, often leaving those without resources in the dust” (Schradie, 269). The example of North Carolina shows that groups on the left and the right are all making use of the Internet, but it is not just a matter of some groups having more ASETs, it is also the fact that the high-tech tools of digital activism favor certain types of values and aims better than others. And, as Schradie argues throughout her book, those tend to be the causes and aims of conservative activists.

    Despite the revolutionary veneer with which the Internet has frequently been painted, “the reality is that throughout history, communications tools that seemed to offer new voices are eventually owned or controlled by those with more resources. They eventually are used to consolidate power, rather than to smash it into pieces and redistribute it” (Schradie, 25). The question with which activists, particularly those on the left, need to wrestle is not just whether or not the Internet is living up to its emancipatory potential—but whether or not it ever really had that potential in the first place.

    * * *

    In an iconic photograph from 1948, a jubilant Harry S. Truman holds aloft a copy of the Chicago Daily Tribune emblazoned with the headline “Dewey Defeats Truman.” Despite the polls having predicted that Dewey would be victorious, when the votes were counted Truman had been sent back to the White House and the Democrats took control of the House and the Senate. An echo of this moment occurred some sixty-eight years later, though there was no comparable photo of Donald Trump smirking while holding up a newspaper carrying the headline “Clinton Defeats Trump.” In the aftermath of Trump’s victory, pundits ate crow in a daze, pollsters sought to defend their own credibility by emphasizing that their models had never actually said that there was no chance of a Trump victory, and even some in Trump’s circle seemed stunned by his win.

    As shock turned to resignation, the search for explanations and scapegoats began in earnest. Democrats blamed Russian hackers, voter suppression, the media’s obsession with Trump, left-wing voters who didn’t fall in line, and James Comey; Republicans claimed that the shock was simply proof that the media was out of touch with the voters. Yet Republicans and Democrats seemed to agree on at least one thing: to understand Trump’s victory, it was necessary to think about social media. Granted, the two parties were divided on whether this was a matter of giving credit or assigning blame. On the one hand, Trump had been able to use Twitter effectively to engage directly with his fan base; on the other, platforms like Facebook had been flooded with disinformation that spread rapidly through the online ecosystem. It did not take long for representatives, including executives, of the various social media companies to find themselves called before Congress, where they were alternately grilled about supposed bias against conservatives on their platforms and taken to task for how those platforms had been so easily manipulated into helping Trump win the election.

    Had the tech companies only found themselves summoned before Congress, it would have been bad enough; but they were also facing frustrated employees and disgruntled users, and the word “techlash” was being used to describe the wave of mounting frustration with these companies. Certainly, unease with the power and influence of the tech titans had been growing for years. Cambridge Analytica was hardly the first tech scandal. Yet much of that earlier displeasure was tempered by an overwhelmingly optimistic attitude towards the tech giants, as though the industry’s problematic excesses were indicative of growing pains as opposed to being signs of intrinsic anti-democratic (small d) biases. There were many critics of the tech industry before the arrival of the “techlash,” but they were liable to find themselves denounced as Luddites if they failed to show sufficient fealty to the tech companies. From company CEOs to an adoring tech press to numerous technophilic academics, in the years prior to the 2016 election smart phones and social media were hailed for their liberating and democratizing potential. Videos shot on smart phone cameras and uploaded to YouTube, political gatherings organized on Facebook, activist campaigns turning into mass movements thanks to hashtags—all had been treated as proof positive that high tech tools were breaking apart the old hierarchies and ushering in a new era of high-tech horizontal politics.

    Alas, the 2016 election was the rock against which many of these high-tech hopes crashed.

    And though there are many strands contributing to the “techlash,” it is hard to make sense of this reaction without seeing it in relation to Trump’s victory. Users of Facebook and Twitter had been frustrated with those platforms before, but at the core of the “techlash” has been a certain sense of betrayal. How could Facebook have done this? Why was Twitter allowing Trump to break its own terms of service on a daily basis? Why was Microsoft partnering with ICE? How come YouTube’s recommendation algorithms always seemed to suggest far-right content?

    To state it plainly: it wasn’t supposed to be this way.

    But what if it was? And what if it had always been?

    In a 1985 interview with MIT’s newspaper The Tech, the computer scientist and social critic Joseph Weizenbaum had some blunt words about the ways in which computers had impacted society, telling his interviewer: “I think the computer has from the beginning been a fundamentally conservative force. It has made possible the saving of institutions pretty much as they were, which otherwise might have had to be changed” (ben-Aaron, 1985). This was not a new position for Weizenbaum; he had articulated largely the same idea in his 1976 book Computer Power and Human Reason, in which he pushed back against those he termed the “artificial intelligentsia” and the other digital evangelists of his day. Speaking to the interviewer from The Tech, Weizenbaum raised further concerns about the close links between the military and computer work at MIT, and cast doubt on the real usefulness of computers for society—couching his dire fears in the social critic’s common defense: “I hope I’m wrong” (ben-Aaron, 1985). As the decades passed, Weizenbaum unfortunately came to feel he had been right. When he turned his critical gaze to the internet in a 2006 interview, he decried the “flood of disinformation” while noting that “it just isn’t true that everyone has access to the so-called Information age” (Weizenbaum and Wendt 2015, 44-45).

    Weizenbaum was hardly the only critic to have looked askance at the growing importance placed on computers during the 20th century. Indeed, Weizenbaum’s work was heavily influenced by that of his friend and fellow social critic Lewis Mumford, who had gone so far as to identify the computer as the prototypical example of “authoritarian” technology (even suggesting that it was the rebirth of the “sun god” in technical form). Yet societies that are in love with their high-tech gadgets, and which often consider technological progress and societal progress to be synonymous, generally have rather little time for such critics. When times are good, such social critics are safely quarantined to the fringes of academic discourse (and completely ignored within broader society), but when things get rocky they have their woebegone revenge by being proven right.

    All of which is to say that thinkers like Weizenbaum and Mumford would almost certainly agree with The Revolution That Wasn’t. They would, however, probably not be surprised by it. After all, The Revolution That Wasn’t is a confirmation that we are today living in the world about which previous generations of critics warned. Indeed, if there is one criticism to be made of Schradie’s work, it is that the book could have benefited from grounding its analysis more deeply in the longstanding critiques of technology made by the likes of Weizenbaum, Mumford, and quite a few other scholars and critics. Jo Freeman and Langdon Winner are both mentioned, but it is important to emphasize that many social critics warned about the conservative biases of computers long before Trump got a Twitter account, and long before Mark Zuckerberg was born. Our widespread refusal to heed these warnings, and the tendency to mock those issuing them as Luddites, technophobes, and prophets of doom, is arguably a fundamental cause of the present state of affairs that Schradie so aptly describes.

    With The Revolution That Wasn’t, Jen Schradie has made a vital intervention in current discussions (inside the academy and amongst activists) regarding the politics of social media. Eschewing polemic, refusing either to sing the praises of social media or to condemn it outright, Schradie provides a measured assessment that addresses the way in which social media is actually being used by activists of varying political stripes—with a careful emphasis on the successes these groups have enjoyed. To a certain extent, Schradie’s argument, and some of her conclusions, represent a jarring contrast to much of the literature that has framed social media as a particular boon to left-wing activists. Yet Schradie’s book highlights with disarming detail the ways in which a desire (on the part of left-leaning individuals) to believe that the Internet favors people on the left has been a sort of ideological blinder that has prevented them from fully coming to terms with how the Internet has re-entrenched the dominant powers in society.

    What Schradie’s book reveals is that “the internet did not wipe out barriers to activism; it just reflected them, and even at times exacerbated existing power differences” (Schradie, 245). Schradie allows the activists on both sides to speak in their own words, taking seriously their claims about what they were doing. And while the book is closely anchored in the context of a particular struggle in North Carolina, the analytical tools that Schradie develops (such as the ASET framework, and the tripartite emphasis on class/organization/ideology) allow Schradie’s conclusions to be mapped onto other social movements and struggles.

    While the research that went into The Revolution That Wasn’t clearly predates the election of Donald Trump, and though he is not a main character in the book, the 45th president lurks in its background (or perhaps just in the reader’s mind). Had Trump lost the election, every part of Schradie’s analysis would be just as accurate and biting; however, those seeking to defend social media tools as inherently liberating would probably not find themselves on the defensive today (a position most of them never expected to be in). Yet what makes Schradie’s account so important is that the book is not simply concerned with whether or not particular movements used digital tools; rather, Schradie is able to step back and consider the degree to which the use of social media tools has been effective in fulfilling the political aims of the various groups. Yes, Occupy! might have made canny use of hashtags (and, if one wants to be generous, one can say that it helped inject the discussion of inequality back into American politics), but nearly ten years later the wealth gap is continuing to grow. For all of the hopeful luster that has often surrounded digital tools, Schradie’s book shows the way in which these tools have just placed a fresh coat of paint on the same old status quo—even if this coat of paint is shiny and silvery.

    As the technophiles scramble to rescue the belief that the Internet is inherently democratizing, The Revolution That Wasn’t takes its place amongst a growing body of critical works that are willing to challenge the utopian aura that has been built up around the Internet. While it must be emphasized, as the earlier allusion to Weizenbaum shows, that there have been thinkers criticizing computers and the Internet for as long as there have been computers and the Internet, of late there has been an important expansion of such critical works. There is not the space here to offer an exhaustive account of all of the critical scholarship being conducted, but it is worthwhile to mention some exemplary recent works. Safiya Umoja Noble’s Algorithms of Oppression provides an essential examination of the ways in which societal biases, particularly about race and gender, are reinforced by search engines. Ruha Benjamin’s recent work on the “New Jim Code,” as seen in Race After Technology and the Captivating Technology volume she edited, foregrounds the ways in which technological systems reinforce white supremacy. The work of Virginia Eubanks, both Digital Dead End (whose concerns make it likely the most important precursor to Schradie’s book) and her more recent Automating Inequality, discusses the ways in which high tech systems are used to police and control the impoverished. Examinations of e-waste (such as Jennifer Gabrys’s Digital Rubbish) and infrastructure (such as Nicole Starosielski’s The Undersea Network and Tung-Hui Hu’s A Prehistory of the Cloud) point to the ways in which colonial legacies are still very much alive in today’s high tech systems. The internationalist sheen that is often ascribed to digital media is carefully deconstructed in works like Ramesh Srinivasan’s Whose Global Village? Works like Meredith Broussard’s Artificial Unintelligence and Shoshana Zuboff’s The Age of Surveillance Capitalism raise deep questions about the overall politics of digital technology. And, with its deep analysis of the way that race and class are intertwined with digital access and digital activism, The Revolution That Wasn’t deserves a place amongst such works.

    What much of this recent scholarship has emphasized is that technology is never neutral. And while this may be accepted wisdom amongst scholars in the relevant fields, these works (and scholars) have taken great care to make the point to the broader public. It is not just that tools can be used for good or for bad, but that tools have particular biases built into them. Pretending those biases aren’t there doesn’t make them go away. Kranzberg’s first law asserts that technology is neither good nor bad, nor is it neutral—but when one moves from talking about technology in general to particular technologies, it is quite important to be able to say that certain technologies may actually be bad. This is a particular problem when one wants to consider things like activism. There has always been something asinine to the tactic of mocking activists pushing for social change while using devices created by massive multinational corporations (as the well-known comic by Matt Bors notes); however, the reason that this mockery is so often repeated is that it has a kernel of troubling truth to it. After all, there is something a little discomforting about using a device running on minerals mined in horrendous conditions, which was assembled in a sweatshop, and which will one day become poisonous e-waste—for organizing a union drive.

    Matt Bors, detail from "Mister Gotcha" (2016)

    Or, to put it slightly differently, when we think about the democratizing potential of technology, to what extent are we privileging those who get to use (and discard) these devices, over those whose labor goes into producing them? That activists may believe that they are using a given device or platform for “good” purposes, does not mean that the device itself is actually good. And this is a tension Schradie gets at when she observes that “instead of a revolutionary participatory tool, the internet just happened to be the dominant communication tool at the time of my research and simply became normalized into the groups’ organizing repertoire” (Schradie, 133). Of course, activists (of varying political stripes) are making use of the communication tools that are available to them and widely used in society. But just because activists use a particular communication tool, doesn’t mean that they should fall in love with it.

    This is not in any way to call activists using these tools hypocritical, but it is a further reminder of the ways in which high-tech tools inscribe their users within the very systems they may be seeking to change. And this is certainly a problem that Schradie’s book raises, as she notes that one of the reasons conservative values get a bump from digital tools is that these conservatives are generally already the happy beneficiaries of the systems that created these tools. Scholarship on digital activism has considered the ideologies of various technologically engaged groups before, and there have been many strong works produced on hackers and open source activists, but often the emphasis has been placed on the ideologies of the activists without enough consideration being given to the ways in which the technical tools themselves embody certain political values (an excellent example of a work that truly considers activists picking their tools based on the values of those tools is Christina Dunbar-Hester’s Low Power to the People). Schradie’s focus on ideology is particularly useful here, as it helps to draw attention to the way in which various groups’ ideologies map onto or come into conflict with the ideologies that these technical systems already embody. What makes Schradie’s book so important is not just its account of how activists use technologies, but its recognition that these technologies are also inherently political.

    Yet the thorny question that undergirds much of the present discourse around computers and digital tools remains: what do we do if, instead of democratizing society, these tools are doing just the opposite? And this question only becomes tougher the further down you go: if the problem is just Facebook, you can pose solutions such as regulation and breaking it up; however, if the problem is that digital society rests on a foundation of violent extraction, insatiable lust for energy, and rampant surveillance, solutions are less easily available. People have become so accustomed to thinking that these technologies are fundamentally democratic that they are loath to believe analyses, such as Mumford’s, that hold them to be instead authoritarian by nature.

    While reports of a “techlash” may be overstated, it is clear that at the present moment it is permissible to be a bit more critical of particular technologies and the tech giants. However, there is still a fair amount of hesitance about going so far as to suggest that maybe there’s just something inherently problematic about computers and the Internet. After decades of being told that the Internet is emancipatory, many people remain committed to this belief, even in the face of mounting evidence to the contrary. Trump’s election may have placed some significant cracks in the dominant faith in these digital devices, but suggesting that the problem goes deeper than Facebook or Amazon is still treated as heretical. Nevertheless, it is a matter that is becoming harder and harder to avoid. For it is increasingly clear that it is not a matter of whether or not these devices can be used for this or that political cause, but of the overarching politics of these devices themselves. It is not just that digital activism favors conservatism, but as Weizenbaum observed decades ago, that “the computer has from the beginning been a fundamentally conservative force.”

    With The Revolution That Wasn’t, Jen Schradie has written an essential contribution to current conversations not only about the use of technology for political purposes, but also about the politics of technology. As an account of left-wing and right-wing activists, Schradie’s book is a worthwhile consideration of the ways that various activists use these tools. Yet where this altogether excellent work really stands out is in the ways it highlights the politics that are embedded in and reified by high-tech tools. Schradie is certainly not suggesting that activists abandon their devices—insofar as these are the dominant communication tools at present, activists have little choice but to use them—but this book puts forth a nuanced argument about the need for activists to think critically about whether they’re using digital tools, or whether the digital tools are using them.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay

    _____

    Works Cited

    • ben-Aaron, Diana. 1985. “Weizenbaum Examines Computers and Society.” The Tech (Apr 9).
    • Weizenbaum, Joseph, and Gunna Wendt. 2015. Islands in the Cyberstream: Seeking Havens of Reason in a Programmed Society. Duluth, MN: Litwin Books.
  • R. Joshua Scannell — Architectures of Managerial Triumphalism (Review of Benjamin Bratton, The Stack: On Software and Sovereignty)

    R. Joshua Scannell — Architectures of Managerial Triumphalism (Review of Benjamin Bratton, The Stack: On Software and Sovereignty)

    A review of Benjamin Bratton, The Stack: On Software and Sovereignty (MIT Press, 2016)

    by R. Joshua Scannell

    The Stack

    Benjamin Bratton’s The Stack: On Software and Sovereignty is an often brilliant and regularly exasperating book. It is a diagnosis of the epochal changes in the relations between software, sovereignty, climate, and capital that underwrite the contemporary condition of digital capitalism and geopolitics.  Anybody who is interested in thinking through the imbrication of digital technology with governance ought to read The Stack. There are many arguments that are useful or interesting. But reading it is an endeavor. Sprawling out across 502 densely packed pages, The Stack is nominally a “design brief” for the future. I don’t know that I understand that characterization, no matter how many times I read this tome.

    The Stack is chockablock with schematic abstractions. They make sense intuitively or cumulatively without ever clearly coming into focus. This seems to be a deliberate strategy. Early in the book, Bratton describes The Stack—the titular “accidental megastructure” of “planetary computation” that has effectively broken and redesigned, well, everything—as “a blur.” He claims that

    Only a blur provides an accurate picture of what is going on now and to come… Our description of a system in advance of its appearance maps what we can see but cannot articulate, on the one hand, versus what we know to articulate but cannot yet see, on the other. (14)

    This is also an accurate description of the prevailing sensation one feels working through the text. As Ian Bogost wrote in his review of The Stack for Critical Inquiry, reading the book feels “intense—meandering and severe but also stimulating and surprising. After a while, it was also a bit overwhelming. I’ll take the blame for that—I am not necessarily built for Bratton’s level and volume of scholarly intensity.” I agree on all fronts.

    Bratton’s inarguable premise is that the various computational technologies that collectively define the early decades of the 21st century—smart grids, cloud platforms, mobile apps, smart cities, the Internet of Things, automation—are not analytically separable. They are often literally interconnected but, more to the point, they combine to produce a governing architecture that has subsumed older calculative technologies like the nation state, the liberal subject, the human, and the natural. Bratton calls this “accidental megastructure” The Stack.

    Bratton argues that The Stack is composed of six “layers,” the earth, the cloud, the city, the address, the interface, and the user. They all indicate more or less what one might expect, but with a counterintuitive (and often Speculative Realist) twist. The earth is the earth but is also a calculation machine. The cloud is “the cloud” but as a chthonic structure of distributed networks and nodal points that reorganize sovereign power and body forth quasi-feudal corporate sovereignties. The City is, well, cities, but not necessarily territorially bounded, formally recognized, or composed of human users. Users are also usually not human. They’re just as often robots or AI scripts. Really they can be anything that works up and down the layers, interacting with platforms (which can be governments) and routed through addresses (which are “every ‘thing’ that can be computed” including “individual units of life, loaded shipping containers, mobile devices, locations of datum in databases, input and output events and enveloped entities of all size and character” [192], etc.).

    Each layer is richly thought through and described, though it’s often unclear whether the “layer” in question is “real” or a useful conceptual envelope or both or neither. That distinction is generally untenable, and Bratton would almost certainly reject the dichotomy between the “real” and the “metaphorical.” But it isn’t irrelevant for this project. He argues early on that, contra Marxist thought that understands the state metaphorically as a machine, The Stack is a “machine-as-the-state.” That’s both metaphorical and not. There really are machines that exert sovereign power, and there are plenty of humans in state apparatuses that work for machines. But there aren’t, really, machines that are states. Right?

    Moments like these, when The Stack’s concepts productively destabilize given categories (like the state) that have never been coherent enough to justify their power, are when the book is at its most compelling. And many of the counterintuitive moves that Bratton makes start and end with real, important insights. For instance, the insistence on the absolute materiality and the absolute earthiness of The Stack and all of its operations leads Bratton to a thoroughgoing and categorical rejection of the prevailing “idiot language” that frames digital technology as though it exists in a literal “cloud,” or some sort of ethereal “virtual” that is not coincident with the “real” world. Instead, in The Stack, every point of contact between every layer is a material event that transduces and transforms everything else. To this end, he inverts Latour’s famous dictum that there is no global, only local. Instead, The Stack as planetary megastructure means that there is only global. The local is a dead letter. This is an anthropocene geography in which an electron, somewhere, is always firing because a fossil fuel is burning somewhere else. But it is also a post-anthropocene geography because humans are not The Stack’s primary users. The planet itself is a calculation machine, and it is agnostic about human life. So, there is a hybrid sovereignty: The Stack is a “nomos of the earth” in which humans are an afterthought.

    A Design for What?

    Bratton is at his conceptual best when he is at his weirdest. Cyclonopedic (Negarestani 2008) passages in which the planet slowly morphs into something like H.P. Lovecraft and H.R. Giger’s imaginations fucking in a Peter Thiel fever dream are much more interesting (read: horrifying) than the often perfunctory “real life” examples from “real world” geopolitical trauma, like “The First Sino-Google War of 2009.” But this leads to one of the most obvious shortcomings of the text. It is supposedly a “design brief,” but it’s not clear what or who it is a design brief for.

    For Bratton, design

    means the structuring of the world in reaction to an accelerated decay and in projective anticipation of a condition that is now only the ghostliest of a virtual present tense. This is a design for accommodating (or refusing to accommodate) the post-whatever-is-melting-into-air and prototyping for pre-what-comes-next: a strategic, groping navigation (however helpless) of the punctuations that bridge between these two. (354)

    Design, then, and not theory, because Bratton’s Stack is a speculative document. Given the bewildering and potentially apocalyptic conditions of the present, he wants to extrapolate outwards. What are the heterotopias-to-come? What are the constraints? What are the possibilities? Sounding a familiar frustration with the strictures of academic labor, he argues that this moment requires something more than diagnosis and critique. Rather,

    the process by which sovereignty is made more plural becomes a matter of producing more than discoursing: more about pushing, pulling, clicking, eating, modeling, stacking, prototyping, subtracting, regulating, restoring, optimizing, leaving alone, splicing, gardening and evacuating than about reading, examining, insisting, rethinking, reminding, knowing full-well, enacting, finding problematic, and urging. (303)

    No doubt. And, not that I don’t share the frustration, but I wonder what a highly technical, 500-page diagnosis of the contemporary state of software and sovereignty published and distributed by an academic press and written for an academic audience is if not discoursing? It seems unlikely that it can serve as a blueprint for any actually-existing power brokers, even though its insights are tremendous. At the risk of sounding cynical, calling The Stack a “design brief” seems like a preemptive move to liberate Bratton from having to seriously engage with the different critical traditions that work to make sense of the world as it is in order to demand something better. This allows for a certain amount of intellectual play that can sometimes feel exhilarating but can just as often read as a dodge—as a way of escaping the ethical and political stakes that inhere in critique.

    That is an important elision for a text that is explicitly trying to imagine the geopolitics of the future. Bratton seems to pose The Stack from a nebulous “Left” position that is equally disdainful of the sort of “Folk Politics” that Srnicek and Williams (2015) so loathe and of the accelerationist tinge of the Speculative Realists with whom he seems spiritually aligned. This sense of rootlessness sometimes works in Bratton’s favor. There are long stretches in which his cherry-picking and remixing of ideas from across a bewildering array of schools of thought yield real insights. But just as often, the “design brief” characterization seems to be a way out of thinking the implications of the conjuncture through to their conclusion. There is a breeziness about how Bratton poses futures-as-thought-experiments that is troubling.

    For instance, in thinking through the potential impacts of the capacity to measure planetary processes in real time, Bratton suggests that producing a sensible world is not only a process of generalizing measurement and representation. He argues that

    the sensibility of the world might be distributed or organized, made infrastructural, and activated to become part of how the landscape understands itself and narrates itself. It is not only a diagnostic image then; it is a tool for geo-politics in formation, emerging from the parametric multiplication and algorithmic conjugation of our surplus projections of worlds to come, perhaps in mimetic accordance with one explicit utopian conception or another, and perhaps not. Nevertheless, the decision between what is and is not governable may arise as much from what the model computational image cannot do as much as what it can. (301, emphasis added)

    Reading this, I wanted to know: What explicit utopian project is he thinking about? What are the implications of it going one way and not another? Why mimetic? What does the last bit about what is and is not governable mean? Or, more to the point: who and what is going to get killed if it goes one way and not another? There are a great many instances like this over the course of the book. At the precise moment where analysis might inform an understanding of where The Stack is taking us, Bratton bows out. He’s set down the stakes, and given a couple of ideas about what might happen. I guess that’s what a design brief is meant to do.

    Another example, this time concerning the necessity of geoengineering for solving what appears to be an ever-more-imminent climatic auto-apocalypse:

    The good news is that we know for certain that short-term “geoengineering” is not only possible but in a way inevitable, but how so? How and by whom does it go, and unfortunately for us the answer (perhaps) must arrive before we can properly articulate the question. For the darker scenarios, macroeconomics completes its metamorphosis into ecophagy, as the discovery of market failures becomes simultaneously the discovery of limits of planetary sinks (e.g., carbon, heat, waste, entropy, populist politics) and vice versa; The Stack becomes our dakhma. The shared condition, if there is one, is the mutual unspeakability and unrecognizability that occupies the seat once reserved for Kantian cosmopolitanism, now just a pre-event reception for a collective death that we will actually be able to witness and experience. (354, emphasis added)

    Setting aside the point that it is not at all clear to me that geoengineering is an inevitable or even appropriate (Crist 2016) way out of the anthropocene (or capitalocene? (Moore 2016)) crisis, if the answer for “how and by whom does it go” is to arrive before the question can be properly articulated, then the stack-to-come starts looking a lot like a sort of planetary dictatorship of, well, of whom? Google? Mark Zuckerberg? In-Q-Tel? Y Combinator? And what exactly is the “populist politics” that sits in the Latourian litany alongside carbon, heat, waste, and entropy as a full “planetary sink”? Does that mean Trump and all the other globally ascendant right-wing “populists”? Or does it mean “populist politics” in the Jonathan Chait sense, which can’t differentiate between left and right and therefore sees both political projects as equally dismissible? Does populism include any politics that centers the needs and demands of the public? What are the commitments in this dichotomy? I suppose The Stack wouldn’t particularly care about these sorts of questions. But a human writing a 500-page playbook so that other humans might better understand the world-to-come might be expected to. After all, a choice between geoengineering and collective death might be what the human population of the planet is facing (for most of the planet’s species, and for a great many of the planet’s human societies already eliminated or dragged down that road during the current mass extinction, there is no choice), but such a binary doesn’t make for much of a design spec.

    One final example, this time on what the political subject of the stack-to-come ought to look like:

    We…require, as I have laid out, a redefinition of the political subject in relation to the real operations of the User, one that is based not on homo economicus, parliamentary liberalism, poststructuralist linguistic reduction, or the will to secede into the moral safety of individual privacy and withdrawn from coercion. Instead, this definition should focus on composing and elevating sites of governance from the immediate, suturing interfacial material between subjects, in the stitches and the traces and the folds of interaction between bodies and things at a distance, congealing into different networks demanding very different kinds of platform sovereignty.

    If “poststructuralist linguistic reduction” is on the same plane as “parliamentary liberalism” or “homo economicus” as one among several prevailing ideas of the contemporary “political subject,” then I am fairly certain that we are in the realm of academic “theory” rather than geopolitical “design.” The more immediate point is that I do understand what the terms that we ought to abandon mean, and agree that they need to go. But I don’t understand what the redefined political subject looks like. Again, if this is “theory,” then that sort of hand waving is unfortunately often to be expected. But if it’s a design brief—even a speculative one—for the transforming nature of sovereignty and governance, then I would hope for some more clarity on what political subjectivity looks like in The Stack-To-Come.

    Or, and this is really the point, I want The Stack to tell me something more about how The Stack participates in the production and extractable circulation of populations marked for death and debility (Puar 2017). And I want to know what, exactly, is so conceptually radical about pointing out that human beings are not at the center of the planetary systems that are driving transformations in geopolitics and sovereignty. After all, hasn’t that been exactly the precondition for the emergence of The Stack? This accidental megastructure born out of the ruthless expansions of digitally driven capitalism is not just working to transform the relationship between “human” and sovereignty. The condition of its emergence is precisely that most planetary homo sapiens are not human, and are therefore disposable and disposited towards premature death. The Stack might be “our” dakhma, if we’re speaking generically as a sort of planetary humanism that cannot but be read as white—or, more accurately, “capacitated.” But the systematic construction of human stratification along lines of race, gender, sex, and ability as precondition for capitalist emergence freights the stack with a more ancient, and ignored, calculus: that of the logistical work that shuttles humans between bodies, cargo, and capital. It is, in other words, the product of an older planetary death machine: what Fred Moten and Stefano Harney (2013) call the “logistics in the hold” that makes The Stack hum along.

    The tenor of much of The Stack is redolent of managerial triumphalism. The possibility of apocalypse is always minimized. Bratton offers, a number of times, that he’s optimistic about the future. He is disdainful of the most stringent left critics of Silicon Valley, and he thinks that we’ll probably be able to trust our engineers and institutions to work out The Stack’s world-destroying kinks. He sounds invested, in other words, in a rhetorical-political mode of thought that, for now, seems to have died on November 9, 2016. So it is not surprising that Bratton opens the book with an anecdote about Hillary Clinton’s vision of the future of world governance.

    The Stack begins with a reference to then-Secretary of State Clinton’s 2013 farewell address to the Council on Foreign Relations. In that speech, Clinton argued that the future of international governance requires a “new architecture for this new world, more Frank Gehry than formal Greek.” Unlike the Athenian Agora, which could be held up by “a few strong columns,” contemporary transnational politics is too complicated to rely on stolid architecture, and instead must make use of the type of modular assemblage for which Gehry is famous, one that “at first might appear haphazard, but in fact, [is] highly intentional and sophisticated.” Bratton interprets her argument as a “half-formed question, what is the architecture of the emergent geopolitics of this software society? What alignments, components, foundations, and apertures?” (Bratton 2016, 13).

    For Clinton, future governance must make a choice between Gehry and Agora. The Gehry future is that of the seemingly “haphazard” but “highly intentional and sophisticated” interlocking treaties, non-governmental organizations, and super- and supra-state technocratic actors working together to coordinate the disparate interests of states and corporations in the service of the smooth circulation of capital across a planetary logistics network. On the other side stands a world order held up by “a few strong columns”—by implication the status quo after the collapse of the Soviet Union, a transnational sovereign apparatus anchored by the United States. The glaring absence in this dichotomy is democracy—or rather its assumed subsumption into American nationalism. Clinton’s Gehry future is a system of government whose machinations are by design opaque to those that would be governed, but whose beneficence is guaranteed by the good will of the powerful. The Agora—the fountainhead of slaveholder democracy—is metaphorically reduced to its pillars, particularly the United States and NATO. Not unlike ancient Athens, it’s democracy as empire.

    There is something darkly prophetic about the collapse of the Clintonian world vision, and something perversely apposite in Clinton’s rhetorical move to supplant the Agora with Gehry as the proper metaphor for future government. It is unclear why a megalomaniacal corporate starchitecture firm that robs public treasuries blind and facilitates tremendous labor exploitation ought to be the future for which the planet strives.

    For better or for worse, The Stack is a book about Clinton. As a “design brief,” it works from a set of ideas about how to understand and govern the relationship between software and sovereignty that were strongly intertwined with the Clinton-Obama political project. That means, abysmally, that it is now also about Trump. And Trump hangs synecdochically over theoretical provocations for what is to be done now that tech has killed the nation-state’s “Westphalian Loop.” This was a knotty question when the book went to press in February 2016 and Gehry seemed ascendant. Now that the Extreme Center’s (Ali 2015) project of tying neoliberal capitalism to non-democratic structures of technocratic governance appears to be collapsing across the planet, Clinton’s “half-formed question” is even knottier. If we’re living through the demise of the Westphalian nation state, then it’s sounding one hell of a murderous death rattle.

    Gehry or Agora?

    In the brief period between July 21 and November 8, 2016, when the United States’ cognoscenti convinced itself that another Clinton regime was inevitable, there was a neatly ordered expectation of how “pragmatic” future governance under a prolonged Democratic regime would work. In the main, the public could look forward to another eight years sunk in a “Gehry-like” neoliberal surround subtended by the technocratic managerialism of the Democratic Party’s right edge. And, while for most of the country and planet that arrangement didn’t portend much to look forward to, it was at least not explicitly nihilistic in its outlook. The focus on management, and on the deliberate dismantling of the nation state as the primary site of governance in favor of the mesh of transnational agencies and organizations that composed 21st-century neoliberalism’s star actants, meant that a number of questions about how the world would be arranged were left unsettled.

    By end of election week, that future had fractured. The unprecedented amateurishness, decrypted racism, and incomparable misogyny of the Trump campaign portended an administration that most thought couldn’t, or at the very least shouldn’t, be trusted with the enormous power of the American executive. This stood in contrast to Obama, and (perhaps to a lesser extent) to Clinton, who were assumed to be reasonable stewards. This paradoxically helps demonstrate just how much the “rule of law” and governance by administrative norms that theoretically underlie the liberal national state had already deteriorated under Obama and his immediate predecessors—a deterioration that was in many ways made feasible by the innovations of the digital technology sector. As many have pointed out, the command-and-control prerogatives that Obama claimed for the expansion of executive power depended essentially on the public perception of his personal character.

    The American people, for instance, could trust planetary drone warfare because Obama claimed to personally vet our secret kill list, and promised to be deliberate and reasonable about its targets. Of course, Obama is merely the most publicly visible part of a kill-chain that puts this discretionary power over life and death in the hands of the executive. The kill-chain is dependent on the power of, and sovereign faith in, digital surveillance and analytics technologies. Obama’s kill-chain, in short, runs on the capacities of an American warfare state—distributed at nodal points across the crust of the earth, and up its Van Allen belts—to read planetary chemical, territorial, and biopolitical fluxes and fluctuations as translatable data that can be packet switched into a binary apparatus of life and death. This is the calculus that Obama conjures when he defines those mobile data points that concatenate into human beings as “baseball cards” that constitute a “continuing, imminent threat to the American people.” It is the work of planetary sovereignty that rationalizes and capacitates the murderous “fix” and “finish” of the drone program.

    In other words, Obama’s personal aura and eminent reasonableness legitimated an essentially unaccountable and non-localizable network of black sites and black ops (Paglen 2009, 2010) that loops backwards and forwards across the drone program’s horizontal regimes of national sovereignty and vertical regimes of cosmic sovereignty. It is, to use Clinton’s framework, a very Frank Gehry power structure. Donald Trump’s election didn’t transform these power dynamics. Instead, his personal qualities made the work of planetary computation in the service of sovereign power to kill suddenly seem dangerous or, perhaps better: unreasonable. Whether President Donald Trump would be as scrupulous as his predecessor in determining the list of humans fit for eradication was (formally speaking) a mystery, but practically a foregone conclusion. But in both presidents’ cases, the dichotomies between global and local, subject and sovereign, human and non-human that are meant to underwrite the nation state’s rights and responsibilities to act are fundamentally blurred.

    Likewise, Obama’s federal imprimatur transformed the transparently disturbing decision to pursue mass distribution of privately manufactured surveillance technology – Taser’s police-worn body cameras, for instance – into a reasonable policy response to America’s dependence on heavily armed paramilitary forces to maintain white supremacy and crush the poor. Under Obama and Eric Holder, American liberals broadly trusted that digital criminal justice technologies were crucial for building a better, more responsive, and more responsible justice system. With Jeff Sessions in charge of the Department of Justice, the idea that the technologies that Obama’s Presidential Task Force on 21st Century Policing lauded as crucial for achieving the “transparency” needed to “build community trust” between historically oppressed groups and the police remained plausible instruments of progressive reform suddenly seemed absurd. Predictive policing, ubiquitous smart camera surveillance, and quantitative risk assessments sounded less like a guarantee of civil rights and more like a guarantee of civil rights violations under a president who lauds extrajudicial police power. Trump goes out of his way to confirm these civil libertarian fears, such as when he told Long Island law enforcement that “laws are stacked against you. We’re changing those laws. In the meantime, we need judges for the simplest thing — things that you should be able to do without a judge.”

    But, perhaps more to the point, the rollout of these technologies, like the rollout of the drone program, formalized a transformation in the mechanics of sovereign power that had long been underway. Stripped of the sales pitch and abstracted from the constitutional formalism that ordinarily sets the parameters for discussions of “public safety” technologies, what digital policing technologies do is flatten out the lived and living environment into a computational field. Police-worn body cameras quickly traverse the institutional terrain from a tool meant to secure civil rights against abusive officers into an artificially intelligent weapon that flags facial structures matching outstanding warrants, that calculates changes in enframed bodily comportment to determine imminent threat to the officer-user, and that captures the observed social field as data privately owned by the public safety industry’s weapons manufacturers. Sovereignty, in this case, travels up and down a Stack of interoperative calculative procedures, with state sanction and human action just another data point in the proper administration of quasi-state violence. After all, it is Axon (formerly Taser), and not a government, that controls the servers that their body cams draw on to make real-time assessments of human danger. The state sanctions a human officer’s violence, but the decision-making apparatus that situates the violence is private, and inhuman. Inevitably, the drone war and carceral capitalism collapse into one another, as drones are outfitted with AI designed to identify crowd “violence” from the sky, a vertical parallax to pair with the officer-user’s body-worn camera.

    Trump’s election seemed to show, with a clarity that had hitherto been unavailable to many, that wedding the American security apparatus’ planetary sovereignty to twenty years of unchecked libertarian technological triumphalism (even, or especially, if in the service of liberal principles like disruption, innovation, efficiency, transparency, convenience, and generally “making the world a better place”) might, in fact, be dangerous. When the Clinton-Obama project collapsed, its assumption that the intertwining of private and state sector digital technologies inherently improves American democracy and economy, and increases individual safety and security, looked absurd. The shock of Trump’s election, quickly and self-servingly blamed on Russian agents and Facebook, transformed Silicon Valley’s broadly shared Prometheanism into interrogations of the industry’s corrosive infrastructural toxicity, and its deleterious effect on the liberal national state. If tech would ever come to Jesus, the end of 2016 would have had to be the moment. It did not.

    A few days after Trump won election I found myself a fly on the wall in a meeting with mid-level executives for one of the world’s largest technology companies (“The Company”). We were ostensibly brainstorming how to make The Cloud a force for “global good,” but Trump’s ascendancy and all its authoritarian implications made the supposed benefits of cloud computing—efficiency, accessibility, brain-shattering storage capacity—suddenly terrifying. Instead of setting about the dubious task of imagining how a transnational corporation’s efforts to leverage the gatekeeping power over access to the data of millions, and the private control over real-time identification technology (among other things) into heavily monetized semi-feudal quasi-sovereign power could be Globally Good, we talked about Trump.

    The Company’s reps worried that, Peter Thiel excepted, tech didn’t have anybody near enough to Trump’s miasmatic fog to sniff out the administration’s intentions. It was Clinton, after all, who saw the future in global information systems. Trump, as we were all so fond of pointing out, didn’t even use a computer. Unlike Clinton, the extent of Trump’s mania for surveillance and despotism was mysterious, if predictable. Nobody knew just how many people of color the administration had in its crosshairs, and The Company reps suggested that the tech world wasn’t sure how complicit it wanted to be in Trump’s explicitly totalitarian project. The execs extemporized on how fundamental the principles of democratic and republican government were to The Company, how committed they were to privacy, and how dangerous the present conjuncture was. As the meeting ground on, reason slowly asphyxiated on a self-evidently implausible bait hook: that it was now both the responsibility and appointed role of American capital, and particularly of the robber barons of Platform Capitalism (Srnicek 2016), to protect Americans from the fascistic grappling of American government. Silicon Valley was going to lead the #resistance against the very state surveillance and overreach that it capacitated, and The Company would lead Silicon Valley. That was the note on which the meeting adjourned.

    That’s not how things have played out. A month after that meeting, on December 14, 2016, almost all of Silicon Valley’s largest players sat down at Trump’s technology roundtable. Explaining themselves to an aghast (if credulous) public, tech’s titans argued that it was their goal to steer the new chief executive of American empire towards a maximally tractable gallimaufry of power. This argument, plus over one hundred companies’ decision to sign an amici curiae brief opposing Trump’s first attempt at a travel ban aimed at Muslims, seemed to publicly signal that Silicon Valley was prepared to #resist the most high-profile degradations of contemporary Republican government. But, in April 2017, Gizmodo inevitably reported that those same companies that appointed themselves the front line of defense against depraved executive overreach in fact quietly supported the new Republican president before he took office. The blog found that almost every major concern in the Valley donated tremendously to the Trump administration’s Presidential Inaugural Committee, which was impaneled to plan his sparsely attended inaugural parties. The Company alone donated half a million dollars. Only two tech firms donated more. It seemed an odd way to #resist.

    What struck me during the meeting was how weird it was that executives honestly believed a major transnational corporation would lead the political resistance against a president committed to the unfettered ability of American capital to do whatever it wants. What struck me afterward was how easily the boundaries between software and sovereignty blurred. The Company’s executives assumed, ad hoc, that their operation had the power to halt or severely hamper the illiberal policy priorities of government. By contrast, it’s hard to imagine mid-level General Motors executives imagining that they have the capacity or responsibility to safeguard the rights and privileges of the republic. Except in an indirect way, selling cars doesn’t have much to do with the health of state and civil society. But state and civil society are precisely what Silicon Valley has privatized, monetized, and re-sold to the public. And even “state and civil society” is not quite enough. What Silicon Valley endeavors to produce is, pace Bratton, a planetary simulation as prime mover. The goal of digital technology conglomerates is not only to streamline the formal and administrative roles and responsibilities of the state, or to recreate the mythical meeting houses of the public sphere online. Platform capital has as its target the informational infrastructure that makes living on earth seem to make sense, to be sensible. And in that context, it’s commonsensical to imagine software as sovereignty.

    And this is the bind that will return us to The Stack. After one and a half relentless years of the Trump presidency, and a ceaseless torrent of public scandals concerning tech companies’ abuse of power, the technocratic managerial optimism that underwrote Clinton’s speech has come to a grinding halt. For the time being, at least, the “seemingly haphazard yet highly intentional and sophisticated” governance structures that Clinton envisioned are not working as they have been pitched. At the same time, the cavalcade of revelations about the depths that technology companies plumb in order to extract value from a polluted public has led many to shed delusions about the ethical or progressive bona fides of an industry built on a collective devotion to Ayn Rand. Silicon Valley is happy to facilitate authoritarianism and Nazism, to drive unprecedented crises of homelessness, to systematically undermine any glimmer of dignity in human labor, to thoroughly toxify public discourse, to entrench and expand carceral capitalism so long as doing so expands the platform, attracts advertising and venture capital, and increases market valuation. As Bratton points out, that’s not a particularly Californian Ideology. It’s The Stack, both Gehry and Agora.

    _____

    R. Joshua Scannell holds a PhD in Sociology from the CUNY Graduate Center. He teaches sociology and women’s, gender, and sexuality studies at Hunter College, and is currently researching the political economic relations between predictive policing programs and urban informatics systems. He is the author of Cities: Unauthorized Resistance and Uncertain Sovereignty in the Urban World (Paradigm/Routledge, 2012).


    _____

    Works Cited

    • Ali, Tariq. 2015. The Extreme Center: A Warning. London: Verso.
    • Crist, Eileen. 2016. “On the Poverty of Our Nomenclature.” In Anthropocene or Capitalocene? Nature, History, and the Crisis of Capitalism, edited by Jason W. Moore, 14-33. Oakland: PM Press.
    • Harney, Stefano, and Fred Moten. 2013. The Undercommons: Fugitive Planning and Black Study. Brooklyn: Autonomedia.
    • Moore, Jason W. 2016. “Anthropocene or Capitalocene? Nature, History, and the Crisis of Capitalism.” In Anthropocene or Capitalocene? Nature, History, and the Crisis of Capitalism, edited by Jason W. Moore, 1-13. Oakland: PM Press.
    • Negarestani, Reza. 2008. Cyclonopedia: Complicity with Anonymous Materials. Melbourne: re.press.
    • Paglen, Trevor. 2009. Blank Spots on the Map: The Dark Geography of the Pentagon’s Secret World. Boston: Dutton Adult.
    • Paglen, Trevor. 2010. Invisible: Covert Operations and Classified Landscapes. Reading: Aperture Press.
    • Puar, Jasbir. 2017. The Right to Maim: Debility, Capacity, Disability. Durham: Duke University Press.
    • Srnicek, Nick. 2016. Platform Capitalism. Boston: Polity Press.
    • Srnicek, Nick, and Alex Williams. 2016. Inventing the Future: Postcapitalism and a World Without Work. London: Verso.