Reviews and analysis of scholarly books about digital technology and culture, as well as of articles, legal proceedings, videos, social media, digital humanities projects, and other emerging digital forms, offered from a humanist perspective, in which our primary intellectual commitment is to the deeply embedded texts, figures, themes, and politics that constitute human culture, regardless of the medium in which they occur.

  • Caddie Alford–Witnessing Corecore as an Epideictic Call to Care

This text is published as part of a special b2o issue titled “Critique as Care,” edited by Norberto Gomez, Frankie Mastrangelo, Jonathan Nichols, and Paul Robertson, and published in honor of our b2o and b2 colleague and friend, the late David Golumbia.

    Witnessing Corecore as an Epideictic Call to Care

    Caddie Alford 

    It’s early January 2023. You’re on TikTok. Comedian Bo Burnham’s song “Microwave Popcorn” has become viral audio.[1] Banking on that virality, user @sebastianvalencia.mp4’s video starts with Burnham’s unmistakable “I put the,” but before “packet on the glass” plays, the video skips to another “Microwave Popcorn” video, then another, then another, and then a high-pitched buzzing gets louder over a scrolling blur of videos from a “sad Family Guy edits” playlist. The video cuts to black and the words “Wake up” appear. Some lilting piano notes begin to play over a shot from the 1998 film The Truman Show, which transitions into an interview with comedian Hasan Minhaj saying, “The internet and technology created an idea of infinity. And the reason why life is beautiful is because it is fundamentally limited.” That quote tees up animations from the 2008 Disney Pixar film WALL-E of humans glued to devices and media, spliced with clips of actual humans walking around head down with their phones in front of them. Mark Zuckerberg’s infamous 2021 brand change announcement starts playing—“Today, we’re gonna talk about the Metaverse”—but only long enough to cut back in time to his somber and shaky 2018 testimony to Congress: “it’s clear now that we didn’t do enough to prevent these tools from being used for harm as well.” More from Minhaj’s interview rounds everything out before cutting back to black. The video—this stew of obvious yet idiosyncratic contrasts—has 2.1 million likes.

    Screenshots of two cuts in sequential order from @sebastianvalencia.mp4’s corecore TikTok video, 2023.

The above style of editing on TikTok is called “corecore,” a genre that dramatizes a fraught relationship with TikTok’s -cores, or suffix tags that index micro-aesthetics. Corecore edits elicit the platformization of feeling—“vibes”—ranging from poignant to maudlin. They can be disorienting like other odd editing trends, but they sit somewhat outside trend categories. Since #corecore is, as the post-disciplinary duo Y7 explain, “a category defined by the very act of categorization,” there isn’t the same “immediate implication of an aesthetic” as you might see with a weirdcore edit or meme, weirdcore being an aesthetic that consistently pulls from early internet graphics (Y7 2023). No, corecore “took trends and trending as its subject” (Y7 2023).

On January 1, 2021, user @masonoelle may have been the first to create a corecore with, as always, a fairly slapdash compilation of clips on “the climate crisis (polar ice caps melting, deforestation, major flooding), critiques of the United States Army, and the oversaturation of media” (Mendez 2023). By late 2022, corecore videos had amassed enough of a following and recognition that journalists started reporting on the trend. Kieran Press-Reynolds’ account became the blueprint, describing the “anti-trend” trend of corecore as an “algorithmically-generated craze that boils down to an amorphous intangible ‘vibe,’ a free-floating aesthetic with no roots outside TikTok” (Press-Reynolds 2022). Cultural critics, journalists, and everyday TikTok users have debated whether corecore was a profound aesthetic intervention or just elementary shitposting.[2] Press-Reynolds notes that the discourse surrounding corecore has almost been more interesting than the videos themselves: “people argue corecore is more than memes: it’s a politically charged art movement critical of capitalism and technology’s atomizing effect on society. The other camp says the videos are all about surreal humor and vibes; the amorphous essence of subjective interpretation; intangible emotions” (Press-Reynolds 2023). Corecore videos often make either niche sense or too much polemic sense.

Eventually, as is the way of all trends on TikTok, corecore slowly ran its course. Offshoots like #hopecore emerged and they, too, ran their course. Both gave way in late 2024 to a mutt aesthetic—“hopelesscore”—known for animating negative, anti-social, and/or depressive quotes in stylized fonts over visuals typically associated with motivation, like footage of a sunset at a beach.[3] As I develop in this essay, corecore left behind a significant lifeworld, full of substantial audience interest as well as aesthetic appropriation. This ongoing lifeworld suggests that corecore is less a question of signification, or a question of “content” and meaning, and more a rhetorical question of effect and reception: a critical mass of these unruly “anti-trend” aesthetics indicates rhetorical heft and cultural significance at a time when the future of TikTok in the US, as both a platform and a political topos, remains unclear.

    This interdisciplinary essay draws from rhetorical studies, tech reporting, and media studies to argue that corecore could be productively thought of as a contemporary version of the epideictic, which is the rhetorical genre of praising or blaming. In Debra Hawhee’s words, the main objective of the epideictic is “to render explicit something already known, and then to intensify preexisting commitments” (2023, 27). One of the three Aristotelian genres of rhetoric, the epideictic is demonstrative and often ceremonial oratory. The epideictic helps shape “the basic codes of value and belief by which a society or culture lives” (Walker 2000, 9). Common scenes for the epideictic are funerals, weddings, roasts, holidays, and so on. A typical example is someone giving a retirement speech for their colleague. By collectively honoring their colleague’s attributes and past actions, the retirement speech also solidifies that specific community’s (surface-level) majority values vis-à-vis work, career paths, expressions of collegiality, and so on. The conventions and strategies of the epideictic genre are in the service of all parties walking away feeling at least affirmed in their convictions.

For the purposes of this essay, however, I am most interested in the connections between the epideictic and the act of witnessing. Hawhee defines witnessing as “weighty assertions of material presence that lay bare injustices and demand a reckoning” (2023, 8). Witnessing “foregrounds justice and morality,” so “keeping witnessing front and center” is crucial for addressing ecological breakdown (2023, 8). She uses the word “keeping” there deliberately, to be in line with arguments that “identify witnessing as the defining act of our time” (2023, 154). In the face of ecocide and the widespread denial of that ecocide, the act of witnessing is an increasingly salient process by which to respond to—and emphasize—precarity. Current scholarship like Michael Richardson’s Nonhuman Witnessing: War, Data, and Ecology After the End of the World (2024), for instance, expands the scope of traditional articulations of witnessing to include nonhuman witnesses like image recognition systems. In this conception, witnessing is a relational project that necessarily exceeds the “capacity to ‘know’ inherited from Western epistemologies” (2024, 8). The genre of the epideictic both ritualizes and mobilizes that relational project through memorializing, capturing, eulogizing, and bearing responsibility,[4] all of which are essential to witnessing.

Each corecore can be thought of as a witnessing because each corecore is an attempt to make “a core out of the collective consciousness” (Townsend 2023). Aesthetics scholar Mitch Therieau comments that the videos “have a sheen of smoothness and detachment, but it’s like people are screaming underneath” (Glossop 2023). Corecore is the dark, ironic -core, documenting and making salient the ecocidal fallout from, in corecore’s POV, digitality: “racial capitalism” (Robinson 1983; Kelley 2017), techno-solutionism (Kneese 2023), cyberlibertarianism (Golumbia 2024), and all the other asymmetries that inhere in how the internet and digital technologies have been both symptoms and drivers of crisis.

And still, while corecore videos are tender, often scrambled efforts, reframing corecore as epideictic witnessing reveals a key yet obscured component: platforms. For this special issue “Critique as Care” in memory of my dearly missed former colleague David Golumbia, “to hold space,” as the CFP asked, “for simultaneity and contradiction,” I want to posit that the opposite of corecore—TikTok LIVE and its commercial livestream program—is a window into how the platform witnesses us witnessing it through such “disobedient aesthetics” as corecore.[5] After all, it would not be a piece in honor of Golumbia without an interrogation of the antidemocratic politics of the technics themselves. I felt strongly about critiquing TikTok LIVE when I wrote this piece in 2024, but in 2025, after the law banning TikTok did not go into effect and US users received not one but two notifications of shameless propaganda, I feel compelled. Through an analysis of livestream by way of leaked documents, reporting, and outputs, I will suggest that TikTok witnessed corecore through what Anna Munster and Adrian Mackenzie term “platform seeing” (2019), or a platform’s modality of perception “produced through the distributive events and technocultural processes performed by, on and as image collections are engaged by deep learning assemblages” (2019, 10). Through these assemblages, TikTok observed corecore and continues to turn corecore’s values back onto themselves. Moreover, these platform-seeing assemblages will always bear witness and therefore always absorb and warp user-generated epideictic truths, which confirms the need to protect platformed epideictic witnessing. In this essay, I articulate the epideictic functionality of aesthetic interventions to claim that they are acts of witnessing. Ultimately, in doing so, I reach for connections between a praxis of care, critique, and scholarly witnessing.

    The Epideictic Witnessing of Corecore: Fatigue

Writing about the development of twenty-first century art and performance as they’ve been shaped by digital technology, Claire Bishop states that there are new conditions of spectatorship (2024, 4). Bishop attends to those terrains by examining how attention has been historically and culturally defined as a normative value and practice in relation to art and artistic interventions.[6] Just like media and technologies, art structures ways of seeing that both support and run counter to dominant expectations for how to express and cultivate aesthetic taste.[7] “Spectatorial conventions” form from repeated interactions with artistic strategies, “individual inclinations, and unforeseen contextual eventualities” (2024, 35). With this appreciation for the rhetorical contingencies of mediated and distributed attention, Bishop questions whether there is a hierarchical difference between attentional modes. She cites dance theorist André Lepecki (2016), who argues that the “spectators” of social media are more passive than “witnesses”: “only the witness sees the whole performance and is embodied and emotionally in touch with what they are seeing” (2024, 79). With fluctuating conditions of spectatorship, however, it is just not that simple.

Within digitality, hierarchical paradigms of observing do not apply, if they ever did. Social media users are constantly toggling between modes of spectating, from platform-specific modes to occasion-specific modes, through and beside interfaces. We are at once spectators and Lepecki’s witnesses: such distinctions break down in participatory publics where we are all performing and negotiating multiple appeals to ethos, only for algorithmic visibility filtering to displace or gather views. While detachment is a part of these modalities, social media spectatorship is not reducible to detachment or distraction.

Part of why I was drawn to put the epideictic into conversation with corecore and witnessing is that it offers a figuration of spectating that anticipates these fluid conditions of spectatorship. Intriguingly, the figuration that the epideictic offers is related to theory and theorizing. As Sharon Crowley explains, in ancient Greek the verb theorein meant “‘to observe from afar’; it refers to someone sitting in the topmost row of the theater. A theorist is the spectator who is most distant from the scene being enacted on stage and whose body is thus in one sense the least involved in the production but who nonetheless affects and is affected by it” (2006, 27). The implication is that distance—distraction, perhaps, or mediation—does not necessarily entail a lessened audience experience. This hybrid, bodily, and slightly detached theorein was precisely what was expected from epideictic audiences. Christine Oravec confirms that theoria—observation—was the “function assigned to the epideictic audience” (1976, 164). The epideictic audience was there to receive the disclosed values and unearthed truths from rhetorics of display: “theoroi means one who looks at, views, beholds, contemplates, speculates, or theorizes. These various translations indicate a kind of insight or power of generalization, as well as a passive viewing” (1976, 164). There were three different varieties of theoria, and all invoked a journey: Andrea Wilson Nightingale explains that “the first two involved pilgrimages to religious oracles or festivals and, in the third, the theoros travelled abroad as a researcher or tourist” (2001, 29). Distance and detachment are crucial to all three versions of what Nightingale calls these “envoys” of meaning. Audiences for the epideictic weren’t given an immediate call to arms so much as primed to feel—the warm camaraderie from mutual recognition, certainly, but also an appreciation, both analytic and intuitive, for the artistry of what they were observing.

Scores of scholars have pointed out that the epideictic is a unique and slippery force—it compels engagement with its strange temporality, for instance[8]—but I mainly want to focus on its connection to aesthetics. Dale Sullivan accounts for at least four purposes of the epideictic: “preservation, education, celebration, and aesthetic creation” (1993, 116). Each of these purposes requires attention to style; the epideictic rhetor was expected to use “many kinds of amplification” and magnification (Aristotle 1368a). In fact, part of the audience’s job in fulfilling theorein was to observe the rhetor’s skill: was the rhetoric effective at being affective? The audience was invited to “respond to the speech itself as an aesthetic object” (Oravec 1976, 168) by opening themselves up to “the sensory qualities of the speech itself” (Oravec 1976, 163)—the qualities and strategies that most stimulate “through the senses” (Oravec 1976, 171). This nexus of disinterested detachment, sensitized senses, and judgment speaks to a lineage of aesthetics as sensory persuasion, in the doubled passive and active act of beholding: as Matthew Fuller and Eyal Weizman elaborate, “aesthetics is not only about sensation or receiving information understood as a passive act; it is also about perception, the making sense of what is sensed” (2021, 34). Sensory persuasion encompasses how epideictic amplification makes values and revelations matter.

In sum, the epideictic aims to surface commitments by creating an occasion wherein audiences re-view these commitments through aesthetic sense-making. Aesthetic sense-making is a significant modality for uncovering value paradigms even as those paradigms potentially emerge from, or refuse, hegemonic ones. The tension in that relationality produces ambient anxieties, and the aesthetic sense-making of the contemporary epideictic is how we might witness those anxieties. Platforms are indeed technologies of control as well as extractive systems—a “hellscape of dreary stimuli”—and still, user-generated epideictic efforts—“an oasis of unthinking vibes” (Press-Reynolds 2022)—bring to light misdeeds and unease.

In response to that “hellscape,” many of us have no other recourse but to bear witness. And in the context of TikTok, as in the tradition of the epideictic, bearing witness will always be aestheticized. For example, the main rhetorical strategy across corecore videos is aesthetic juxtaposition. Take this popular corecore video, bookmarked 266,000 times. It begins with a kid being asked how much money they want to make when they grow up, to which they respond that they want to help people feel okay. That innocence influences the viewer to receive every other clip as evidence that we are not, in fact, okay: sped-up footage of a traffic intersection, Ryan Gosling’s character in Blade Runner 2049 screaming, a row of elderly people monotonously pressing slot machines in a casino, and a violent crowd pushing into some big-box retail store. The drone of an organ pad orchestrates a melancholic vibe.

The comments on this corecore, as with many corecores, express mutuality—a chorus of users commenting “real” or “thank you” or “this is why…”—because in the truest sense of the epideictic everyone gathered and compelled to receive the display enters a “timeless, consubstantial space carved out by their mutual contemplation of reality” (Sullivan 1993, 128). Although the phrasing of “consubstantial space” might imply a flattening of difference, Jodie Nicotra clarifies that platformed epideictic “does not issue from and to an already-constituted community; rather, by virtue of a process, it enacts a community” (Nicotra 2016). The corecore contrasts are aesthetic stimulants that work to unravel a new-old value, some heretofore muted or jumbled realization on the tip of our tongues. Even with all the alterity of a shifting online “audience,” corecore edits initiate aesthetic sense-making that discovers, over and over, one particularly salient shared truth: fatigue.

Fatigue sounds about right because it is right. Broadly, Sianne Ngai notes, “aesthetic experience has been transformed by the hypercommodified, information-saturated, performance-driven conditions of late capitalism” (Ngai 2015, 1). As a result, aesthetic categories and aestheticization are, as McKenzie Wark summarizes, “in-between play and labor, and they signal an era in which work becomes play and play becomes work” (2020, 16). The imperative to self-optimize while negotiating an overwhelming lack of boundaries, infrastructure, trust—the list goes on—is exhausting. Indiscriminate monetization levels all content, and that leveling is traumatizing when political and economic hierarchies could not be more pronounced in most contexts. The constant transmission of Black trauma through the “trope” and “trap” of what Legacy Russell calls the “Black meme” remains especially unbearable (2024, 8). And for a while we spoke to and out of this despair, relying on what Nathan Schneider terms “affective voice,” or the feeling that you are speaking truth to power, which platforms purposefully confuse with “effective voice,” or the actual “instrumental power to change something” (2024, 20). But given years of outrage and never seeing much happen, years of hyper-algorithmic feeds that prioritize hot takes amid the capitalist fracturing of communities and relationships, we’re now plagued with, to borrow from Kate Lindsay, “opinion fatigue”: users are increasingly making “the choice to opt out or otherwise radically alter how they post their thoughts online” (Lindsay 2023). Lindsay speculates that context collapse has been a part of this shift because “Public opinion around a topic can shift but is then sometimes retroactively applied to internet opinions formed long before this new consensus” (Lindsay 2023). It’s all too much. We’re tired.

The aesthetic collisions of efforts like corecore inclined us to witness this ambient anxiety. It’s not that the young and the online are sensitive, triggered by every politically incorrect message. Not even close. Their fatigue is an existential kind of fatigue. Witnessing this fatigue—displaying and holding this fatigue in common—should have been the start of us coming together to agree on one simple point: never again will we let tech companies perform historical reenactments of feudalisms at the expense of our health, our environment, our institutions, our democracies—again, the list goes on. And while fatigue doesn’t seem like the most effective tool for profound witnessing, I’m reminded of Tamika L. Carey’s 2023 Feminisms and Rhetorics Conference keynote in which she draws from Black feminist thought and narratives to trace and reimagine the concept of fatigue. Carey argues that “conversations about fatigue invite us to refine our approaches to listening, to deepen our understanding of relationships, and to invest in reparative practices” (2023, 3). Fatigue, Carey points out, can be marshalled into a resistant form of impatience, or a productive refusal to participate in harmful practices and systems. Fatigue can help us find an inroad into repair: Carey perceives the potential to allow fatigue to orient praxis toward restorative justice, rest, and community-oriented self-care. Witnessing fatigue—really coming to terms with what this fatigue means and how it was wrought—might have been the first rhetorical step toward emancipation from Big Tech. The problem is, they witnessed us witnessing fatigue and they also said: never again.

    The Platform Witnessing of Corecore: Engagement

In October 2024, the public was given a rare window into internal TikTok research findings and communications, including information about the effectiveness of remedial measures, how the app more than appeals to young users, content regulation practices, and so on. Fourteen attorneys general led an investigation into TikTok; attendant lawsuits from more than a dozen states claim that the app knowingly hooks children and younger users. Each lawsuit contained redactions due to confidentiality agreements with TikTok. However, the lawsuit filed by the Kentucky Attorney General used digital redactions that Kentucky Public Radio was able to read. These redactions “appeared to primarily quote and summarize findings from internal TikTok documents and communications” (Goodman 2024).

These documents say the quiet part out loud. TikTok’s own research “states that ‘compulsive usage correlates with a slew of negative mental health effects like loss of analytical skills, memory formation, contextual thinking, conversational depth, empathy, and increased anxiety’” (Allyn et al. 2024). The NPR report continues: the time limit tool, which lets parents set daily screen time limits, was not implemented to help teens reduce their time on the app. TikTok was curious whether the tool could, in their words, improve “public trust” (Allyn et al. 2024). Kentucky investigators also found that TikTok made changes to their algorithm to address “a high volume of…not attractive subjects” (Allyn et al. 2024). The algorithm had been retooled to boost content from creators the company deemed attractive. TikTok’s content moderation is faulty and inconsistent. They rely on artificial intelligence for the first pass, and human moderators come in “only if the video has a certain amount of views” (Allyn et al. 2024). Internally, TikTok acknowledges substantial “leakage” rates of violating content that’s not removed, including 35.71% of “Normalization of Pedophilia,” 33.33% of “Minor Sexual Solicitation,” 39.13% of “Minor Physical Abuse,” 30.36% of “leading minors off platform,” 50% of “Glorification of Minor Sexual Assault,” and 100% of “Fetishizing Minors” (Allyn et al. 2024). And yet, a presentation for top company officials revealed that an internal document “instructed moderators to not take action on reports on underage users unless their bio specifically states they are 13 or younger” (Allyn et al. 2024). An unnamed TikTok executive said the reason kids are on TikTok is that the app’s algorithm is so powerful that it keeps them from “sleep, and eating, and moving around the room, and looking at someone in the eyes” (Allyn et al. 2024).

The technicity of this platform—how it moderates and curates content, how its algorithm (micro)manages what users encounter, and how the interface is designed to prioritize video and deprioritize everything else, including context—is a technicity inseparable from cyberlibertarianism in that those logics have afforded this technicity just as much as this technicity furthers those logics. Golumbia specifies that cyberlibertarianism is not a coherent dogma: just like fascism, many of its tenets and appeals are contradictory. Cyberlibertarianism is, however, a useful concept for identifying doctrine based on “anti-democracy” and pro-corporate foundations (2024, 16): a cyberlibertarian faith in tech wants to reconfigure “social and cultural phenomena into free market terms” (2024, 36) so that it can do away with democratic institutions, expertise, and governments even while claiming such ideals as “democratization,” “community,” “voice,” “access,” and “engagement” (2024, 46). Golumbia explains that this rhetoric looks both ways: “we seem to be talking about copyright, freedom of speech, or the ‘democratization’ of information or some technology. But if we listen closely, we hear a different conversation that questions our right and ability to govern ourselves” (xxiii). Are the conditions on TikTok, for example, democratic if its algorithm places users into “‘filter bubbles’ after 30 minutes of use in one sitting” (Allyn et al. 2024)? Can we claim democratic conditions after “As a result of President Trump’s efforts, TikTok is back in the U.S.!” was broadcast on every US TikTok user’s interface?[9]

As I see it, cyberlibertarianism is of a piece with other naming projects that attempt to capture how digitality promotes a deregulated market that will somehow take care of hate speech, disinformation, doxing, AI sludge—everything. Schneider, for instance, argues that the design of platforms is feudalistic because the politics of this design increasingly nudges users “toward autocratic or oligarchic forms of community governance” while simultaneously profiting off their habits and behaviors (2024, 44). I also think of Damien Smith Pfister and Misti Yang’s conceptualization of technoliberalism (2018), which they define as a governing rationality in which digital technologies assume democratic and epistemic power, siphoning technical expertise and resources while jettisoning opportunities for democratic deliberation. While these concepts have precise histories and trajectories, all three illuminate digitality’s translation of democratic principles into economic imperatives and concentrations of power.

The technicity of TikTok is a product of this twisted cyberlibertarianism x feudalism x technoliberalism collab. It’s the same collab that corecore bore witness to, but it’s also the same collab that witnessed and reabsorbed corecore. Both Hawhee and Richardson note in their work on witnessing that, in both senses of the word, “arts and acts of witnessing, fortified with the clarifying power of insistence that they gathered over the course of the last century, are expanding to include nonhumans as well as humans” (Hawhee 2023, 4). Witnessing is not a singular project, but something that multiple agents enact. “The human viewpoint,” Joanna Zylinska reminds us, is “precisely a viewpoint”—one of and through many (Zylinska 2023, 129). The transformation and industrialization of vision during the twentieth century turned “vision” into what it is in the twenty-first century: a “machine-based process” (Zylinska 2023, 10). Platforms like TikTok “see” through what Munster and Mackenzie call “observation events” that are “distributed throughout and across devices, hardware, human agents and artificial networked architectures such as deep learning networks” (2019, 5). Even without humans and even without datasets of visuals, platforms deploy observation to collect, process, and analyze data. These “observation events” bear witness, a form of “computational spectatorship” (Heras 2019, 180).

Corecore edits were perceived by platform observation assemblages. Composites of cyberlibertarian-feudal-technoliberal logics repurposed corecore creations into acts of platform witnessing. The fruits of the original epideictic witnessing—the value of really dwelling with what collective fatigue might mean, for instance—were seen for what they were only to be absorbed to serve antithetical purposes. As one of the original corecore creators wrote on an Instagram story: “The whole point of this stuff is to create something that can’t be categorized, commodified, made into clickbait, or moderated—something immune to the functions of control that dictate the content we consume and the ideas we are allowed to hold” (Mendez 2023). Although the effects of creating, witnessing, engaging, and circulating corecore can’t all be commodified, these acts of witnessing were still subject to platform seeing. The closest existing theorization of platformed epideictic is Nicotra’s, in which she attends to the architecture of mid-2000s Twitter to argue that “epideictic acts of public shaming demonstrate the inexorably technological nature of all rhetorical acts—that the technologies are not separate or supplemental to the rhetorical acts, but are rather co-constitutive” (Nicotra 2016). Attention to technologies is the reason Nicotra refashioned the epideictic, turning what was mostly considered a rhetorical genre into a potential. Unfortunately, the algorithmic systems of platform architectures “tam[e] potential into probability” (Richardson 2024, 87).

If corecore presents one end of a spectrum of TikTok content—as radical as the moderation is going to allow—the opposite end of that same spectrum is TikTok Live and its livestream program, which is widely experienced as the “unregulated underbelly of the app” (Press-Reynolds 2023). For example, Forbes reporter Alexandra S. Levine released a damning account of TikTok Live in 2022—“How TikTok Live Became ‘A Strip Club Filled with 15-Year-Olds’”—exposing how the livestream function has enabled predatory behaviors toward vulnerable users. Live is “one of the darker manifestations of the gig economy to date” (Press-Reynolds 2023). Creators want “gifts”—money—and TikTok doesn’t care how they earn that money because TikTok will take a huge cut from every transaction. Press-Reynolds explains that this structure is different from a structure like Twitch’s, where creators build up a fanbase. Fanbases can be built on TikTok, too, but mostly live-streaming creators just throw in everything but the kitchen sink to “hook viewers and coax donations” (Press-Reynolds 2023). It is no accident that these “donations” are designed to look like things rather than money: hearts, cars, flowers, animals…many of them are AI slop, from “money gun” for 500 coins to “naughty chicken” for 299 coins. Viewers buy and give these “gifts” for all kinds of reasons, but you can see how the habit of giving could result in chemical responses: will the creator acknowledge me if I send a gift? What about now? What if I send a gift to this creator? Livestream banks on a tempting—and sometimes expensive—mode of parasociality.

Since creators do receive some funds from “gifts,” the BBC reported in 2022 that displaced people and families in Syrian camps were begging for hours at a time on livestream. This begging created a mini-economy of middlemen: “live agencies” in China that work directly with TikTok to help unblock accounts, provide streaming equipment, and take a cut of the profits (Gelbart et al. 2022). The BBC monitored streams earning up to $1,000 an hour in gifts, of which creators received only a fraction. The reporters note that TikTok said it would take prompt action against “exploitative begging” (Gelbart et al. 2022), diverting attention away from the real problem.

Users have wildly different experiences on Live, which has produced a variety of what Motahhare Eslami et al. term “folk theories” (2016), or sense-making narratives that social media users form from their experiences on black-boxed platforms. On one YouTube video about the “dark side” of Lives, a user comments that they’ve “seen other types of streams, where a man forces a disabled man who lives in what looks like a hut, to dance to tiktok audios in a dress.”[10] Another writes about one that was streaming, without context, a baby with macrocephaly. One person confirms in the comment section: “I’m from Syria, and yes the situation there is very very very rough, money, jobs, food, water, and electricity are in very very short supply.”[11] In the subreddit r/changemyview, a user writes a post titled “TikTok’s live feature is immoral. It gets clicks by putting disabled people on the feed like animals at a zoo.”[12] This “folk theory” is an attempt to bear witness to what they’re observing. However, someone responded, “this isn’t how that works; the application you were speaking about tends to display content that associates to your previous history/what it thinks you may have interest in.” Someone else writes: “On my TikTok all my lives are musicians and anime cosplayers.”[13] While this particular subreddit is designed to expand and often correct the original poster, such countering and sometimes moralizing of “folk theories” from other users is part of why “disobedient aesthetics” like corecore edits are so vital: they provide another layer of mediation to “folk theories,” toward honoring the ambiguities of platformed living. The long and short of it is that no one has any real idea about how Live works, in general and for other people: it’s sometimes neat (musicians playing the piano) and sometimes cozy (work-from-home employees inviting body doubling). It’s also unexpected (“Yea this shit is hella weird, i saw one where some guy was just slowly peeling away boiled eggs and kept spamming ‘tap tap tap tap thank you thank you send gift’”) or gives dystopic vibes (“I keep seeing one [sic] with people laying on clinical beds, rocking side to side. My mind takes me to weird places”).[14] Given the structure, we cannot control where we will end up. As one TikTok official stated in the redacted documents, “a major challenge with Live business is that the content that gets the highest engagement may not be the content we want on our platform” (Allyn et al. 2024). Sexualization of teens…refugees begging…babies with macrocephaly—I think I’ve seen this corecore before.

Corecore used aesthetic juxtapositions to reveal fatigue with Big Tech platformization. Those aesthetic collisions intended aestheticism—a more sensitive orientation—through the shock of dissonance and layers of mediation. TikTok used platform seeing to digest these aesthetic collisions, spitting them back out as more monetized livestreams. Those events, however, intended anesthetizing, or the kind of numbing that keeps you transferring funds and doomscrolling. TikTok’s livestreams took the chaotic user-generated epideictic witnessing of fatigue and forced it to become a witnessing of the Big Tech value of engagement. In a turn of events that writes itself, Tim Cook announced the “newest iPad Air” in March 2025 by showing a mock-up “trend report” on, you guessed it, #corecore.[15] Years later, Big Tech continues to commodify what was never meant to be commodified.

    Scholarly Witnessing: Care

The 2024 Oxford Word of the Year was “brain rot,” defined as “the supposed deterioration of a person’s mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging.”[16] At a moment in time when platforms are rolling out AI features that no one asked for and that no one is really ready for, only for AI-generated images to become “evidence” of falsehoods,[17] “brain rot” encapsulates a growing but ironicized concern about the content we’re taking in. “Technolibertarian notions that technologies are value neutral and that information wants to be free,” Jonathan Carter and Misti Yang emphasize, position “the general intellect as a boundless frontier to be exploited” (2023, 367). For “brain rot” to get chosen as the 2024 Oxford Word of the Year means that there is now a much broader recognition of that exploitation. Since aesthetics stick around longer than trends, we’re surrounded by the remnants of witnessing that were unceremoniously churned into revenue streams. Where epideictic content like corecore might have rhetorically positioned us as observers—theoroi—social media rhetorics like the functionality of LIVE position us to rot.

This special issue asked us to bring nuance to critique—to perform scholarly critique from a place of care or caring even while actively discrediting computational solutionism, as Golumbia stressed time and time again. Critique as care is my effort to come to terms with the original display of corecore as what users wanted it to be, not as how the algorithmic systems witnessed and twisted it. Critique as care, then, is an articulation of the scholarly version of witnessing that can emerge from observing—theorizing—user-generated rhetorics as meaningful attempts to navigate unfair power dynamics. By attending to corecore, I extend theories of epideictic rhetoric to better accommodate platformization and its effects on rhetorical acts. By forwarding “platform seeing,” I think alongside Richardson’s question: “If algorithms are themselves witnessing, making knowledge, and forging worlds of their own design, what might it mean to witness their workings?” (2024, 81). In calling attention to leaked documents demonstrating TikTok’s internal culture and praxis, I take seriously Shannon Vallor’s provocation that if Big Tech ultimately remakes our world in its image, scholars might pay for our “habit of epistemic caution with our lives and our children’s futures” (2024, 162). By that she does not mean to undermine best practices for responsible scholarship so much as she means to encourage scholars to, once ready, inhabit force, passion, and courage—to just say, “this is cyberlibertarianism.” And it is fucked. Romeo García and coauthors echo that provocation, writing that “scholars are also guilty—sometimes unconsciously—of re-subjecting those they write and think about to the same epistemic violence they wish to trace, critique, and/or unsettle” (2024, 294). It is not, they write, that the scholar is “the observer merely observing.” Rather, “because the scholar engages in human work (wording) and human projects (worlding), they are indeed active actor-agents who have the capacity to engage in doings otherwise” (2024, 294).

Now would be a good time to quote Golumbia’s close friend, George Justice, who wrote the foreword for Cyberlibertarianism. Justice calls Golumbia the “most optimistic pessimist you could ever meet” (2024, ix). He goes on to say that the pages of Cyberlibertarianism “are dark in their insistence that the technologies we deploy in nearly all aspects of our lives have been built on fundamentally antidemocratic, antihuman premises,” and yet the “richness of his thought betrays an essentially hopeful belief in powers of the human mind to contemplate, understand, and attempt to change the world for the better” (2024, ix-x). As a scholar, I have not always understood that you can do both: you can hold these systems accountable and you can still be curious. You can practice sound citational politics and you can hone a unique voice and you can seek traditional venues and you can innovate. Something I have always appreciated about rhetorical training is that it exercises your capacity to find nuance, but in the past that training has prevented me from also finding certainties. I came to Virginia Commonwealth University in 2018, attempting to start a book project that was curious—not certain—about what was happening to opinions vis-à-vis social media. Golumbia, on the other hand, had just published an article earlier that year titled “Social Media Has Hijacked Our Brains and Threatens Global Democracy.”[18] He had already predicted brain rot.

I’m reminded of that expression of two ships passing in the night. But I eventually arrived at a place still informed by care, but very certain that things were as bad as Golumbia had known them to be. My last correspondence with him was to thank him for a talk he did on fascisms and to send a book review I had just written of a rhetorical studies collection on fascism. He was thrilled that I was doing research and teaching about these topics. He wrote, “I would love to talk some of these things over when we both have a free second…,” because while fascism is certain, scholarly care is boundless.

Caddie Alford (she/her/hers) is associate professor of rhetoric and writing at Virginia Commonwealth University. She is a digital rhetoric scholar whose interdisciplinary research examines emergent forms of information, communication, and sociality. Her recent book—Entitled Opinions: Doxa After Digitality—addresses social media rhetorics by creating an affirmative theory of opinions to identify and repurpose a spectrum of truths. Some of her work has appeared in The Quarterly Journal of Speech, Rhetoric Review, and enculturation.

    Bibliography

    Allyn, Bobby, Sylvia Goodman, and Dara Kerr. 2024. “TikTok Executives Know about App’s Effect on Teens, Lawsuit Documents Allege.” NPR, October 22, 2024. https://www.npr.org/2024/10/11/g-s1-27676/tiktok-redacted-documents-in-teen-safety-lawsuit-revealed.

    Allyn, Bobby, Sylvia Goodman, and Dara Kerr. 2024. “Inside the TikTok Documents: Stripping Teens and Boosting ‘Attractive’ People.” NPR, October 16, 2024. https://www.npr.org/2024/10/12/g-s1-28040/teens-tiktok-addiction-lawsuit-investigation-documents.

Aristotle. 1991. On Rhetoric. Translated by George Kennedy. Oxford: Oxford University Press.

    Bishop, Claire. 2024. Disordered Attention: How We Look at Art and Performance Today. Verso.

    Carey, Tamika L. 2023. “The Uses of Fatigue: Invitations, Impatience, and Investments.” Keynote Address, Feminisms and Rhetorics Conference. https://cfshrc.org/article/the-uses-of-fatigue-invitations-impatience-and-investments/.

Carter, Jonathan S., and Misti Yang. 2023. “Sophie vs. the Machine: Neo-Luddism as Response to Technical-Colonial Corruption of the General Intellect.” Rhetoric Society Quarterly 53, no. 3: 366–78. https://doi.org/10.1080/02773945.2023.2200699.

    Crowley, Sharon. 2006. Toward A Civil Discourse: Rhetoric and Fundamentalism. Pittsburgh: University of Pittsburgh Press.

Eslami, Motahhare, Karrie Karahalios, Christian Sandvig, Kristen Vaccaro, Aimee Rickman, Kevin Hamilton, and Alex Kirlik. 2016. “First I ‘Like’ It, Then I Hide It: Folk Theories of Social Feeds.” In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16), 2371–82. New York: Association for Computing Machinery. https://doi.org/10.1145/2858036.2858494.

    Fuller, Matthew and Eyal Weizman. 2021. Investigative Aesthetics: Conflicts and Commons in the Politics of Truth. Verso.

    García, Romeo, Jenna Zan, Muath Qadous, Mitzi Ceballos, Keith L. McDonald, and Sabita Bastakoti. 2024. “Collective Rewor(l)ding in the Wreckage of Hauntings and Haunting Situations.” In The Routledge Handbook of Rhetoric and Power. Edited by Nathan Crick. Routledge. 293-310.

    Gelbart, Hannah, Mamdouh Akbiek, and Ziad Al-Qattan. 2022. “TikTok Profits from Livestreams of Families Begging.” BBC, October 11, 2022. https://www.bbc.com/news/world-63213567.

Glossop, Ella. 2023. “Corecore Is the Screaming-Into-the-Void TikTok Trend We Deserve.” Vice, January 23, 2023. https://www.vice.com/en/article/corecore-tiktok-trend-explained/.

    Golumbia, David. 2024. Cyberlibertarianism: The Right-Wing Politics of Digital Technology. Minneapolis: University of Minnesota Press.

    Goodman, Sylvia. 2024. “AG Coleman Sues TikTok, Says Internal Documents Show Company Knowingly Addicted KY Youth.” Kentucky Public Radio, October 9, 2024. https://www.lpm.org/news/2024-10-09/ag-coleman-sues-tiktok-says-internal-documents-show-company-knowingly-addicted-ky-youth.

Hawhee, Debra. 2023. A Sense of Urgency: How the Climate Crisis Is Changing Rhetoric. Chicago: University of Chicago Press.

Heras, Daniel Chávez. 2019. “Spectacular Machinery and Encrypted Spectatorship.” Machine Feeling 8, no. 1: 170–82. https://doi.org/10.7146/aprja.v8i1.115423.

    Kelley, Robin D. G. 2017. “What Did Cedric Robinson Mean by Racial Capitalism?” Boston Review, January 12, 2017. https://www.bostonreview.net/articles/robin-d-g-kelley-introduction-race-capitalism-justice/.

Kneese, Tamara. 2023. Death Glitch: How Techno-Solutionism Fails Us in This Life and Beyond. New Haven: Yale University Press.

Lepecki, André. 2016. Singularities: Dance in the Age of Performance. London: Routledge.

Levine, Alexandra S. 2022. “How TikTok Live Became ‘A Strip Club Filled with 15-Year-Olds.’” Forbes, April 27, 2022. https://www.forbes.com/sites/alexandralevine/2022/04/27/how-tiktok-live-became-a-strip-club-filled-with-15-year-olds/.

Lindsay, Kate. 2023. “Is It Time to Embrace ‘Opinion Fatigue’?” Bustle, August 8, 2023. https://www.bustle.com/entertainment/online-takes-twitter-debates-opinion-fatigue.

Mackenzie, Adrian, and Anna Munster. 2019. “Platform Seeing: Image Ensembles and Their Invisualities.” Theory, Culture & Society 36, no. 5: 3–22. https://doi.org/10.1177/0263276419847508.

    Mendez, Moises II. 2023. “What to Know About Corecore, the Latest Aesthetic Taking Over.” Time, January 20, 2023. https://time.com/6248637/corecore-tiktok-aesthetic/.

Nayyar, Rhea. 2023. “What Does TikTok’s ‘Corecore’ Have to Do with Dada?” Hyperallergic, January 26, 2023. https://hyperallergic.com/795957/what-does-tiktoks-corecore-have-to-do-with-dada/.

    Ngai, Sianne. 2015. Our Aesthetic Categories: Zany, Cute, Interesting. Cambridge: Harvard University Press.

    Nicotra, Jodie. 2016. “Disgust, Distributed: Virtual Public Shaming as Epideictic Assemblage.” Enculturation, July 6, 2016. https://enculturation.net/disgust-distributed.

    Nightingale, Andrea Wilson. 2001. “On Wandering and Wondering: ‘Theôria’ in Greek Philosophy and Culture.” Arion: A Journal of Humanities and the Classics 9, no. 2: 23–58. http://www.jstor.org/stable/20163840.

    Oravec, Christine. 1976. “‘Observation’ in Aristotle’s Theory of Epideictic.” Philosophy & Rhetoric 9, no. 3: 162–74. http://www.jstor.org/stable/40236982.

Ore, Ersula J. 2019. Lynching: Violence, Rhetoric, and American Identity. Jackson: University Press of Mississippi.

Pfister, Damien Smith, and Misti Yang. 2018. “Five Theses on Technoliberalism and the Networked Public Sphere.” Communication and the Public 3, no. 3: 247–62. https://doi.org/10.1177/2057047318794963.

    Press-Reynolds, Kieran. 2022. “This is Corecore (We’re not Kidding).” Nobells, November 29, 2022. https://nobells.blog/corecore/.

    Press-Reynolds, Kieran. 2023. “Is Corecore Radical Art or Gibberish Shitposts?” Nobells, January 20, 2023. https://nobells.blog/what-is-corecore/.

Press-Reynolds, Kieran. 2023. “I Spent All Night on TikTok Live, and Discovered a Wasteland of Clickbait, Scams, and Other Oddities. It Got Stranger and Darker by the Hour.” Business Insider, February 22, 2023. https://www.businessinsider.com/tiktok-live-all-night-clickbait-grifts-scams-sleep-streamers-twitch-2023-2.

    Richardson, Michael. 2024. Nonhuman Witnessing: War, Data, and Ecology After the End of the World. Durham: Duke University Press.

    Robinson, Cedric J. 1983. Black Marxism: The Making of the Black Radical Tradition. Chapel Hill: University of North Carolina Press.

    Russell, Legacy. 2024. Black Meme: A History of the Images that Make Us. Verso.

    Schneider, Nathan. 2024. Governable Spaces: Democratic Design for Online Life. Berkeley: University of California Press.

    Sheard, Cynthia Miecznikowski. 1996. “The Public Value of Epideictic Rhetoric.” College English 58, no. 7: 765–94. https://doi.org/10.2307/378414.

    Sullivan, Dale L. 1993. “The Ethos of Epideictic Encounter.” Philosophy & Rhetoric 26, no. 2: 113–33. http://www.jstor.org/stable/40237759.

Townsend, Chase. 2023. “Explaining Corecore: How TikTok’s Newest Trend May Be a Genuine Gen-Z Art Form.” Mashable, January 14, 2023. https://mashable.com/article/explaining-corecore-tiktok.

    Vallor, Shannon. 2024. The AI Mirror: How to Reclaim our Humanity in an Age of Machine Thinking. Oxford: Oxford University Press.

    Vallor, Shannon. 2024. “The Danger of Superhuman AI is not What You Think.” Noema, May 23, 2024. https://www.noemamag.com/the-danger-of-superhuman-ai-is-not-what-you-think/.

    Vivian, Bradford J. 2012. “Up from Memory: Epideictic Forgetting in Booker T. Washington’s Cotton States Exposition Address.” Philosophy & Rhetoric 45, no. 2: 189-212. https://muse.jhu.edu/article/475261.

    Walker, Jeffrey. 2000. Rhetoric and Poetics in Antiquity. Oxford: Oxford University Press.

    Wark, McKenzie. 2020. Sensoria: Thinkers for the Twenty-First Century. Verso.

Y7. 2023. “A Postmortem of #corecore.” Flash Art, Summer 2023. https://flash---art.com/article/corecore/.

Zylinska, Joanna. 2023. The Perception Machine: Our Photographic Future Between the Eye and AI. Cambridge, MA: MIT Press.

    [1] Creators were making lip dub videos with the moment in the song when Burnham stages an increasingly frustrated dialogue with himself: 

    I put the packet on the glass (What glass?)
    The little glass dish in the microwave (Got it)
    I close the door (Which door?)
    The door to the microwave! What is wrong with you?!

[2] “To me, Corecore’s ‘aesthetic’ reads as an art school freshman’s first found-footage project in Adobe Premiere Pro (no, I’m not projecting) presented with the societal dread induced from doom-scrolling on one’s phone at 2am after one too many bong rips on a weeknight (again, not projecting …)” (Nayyar 2023).

[3] For a smart analysis of hopelesscore, see Adam Aleksic’s 2025 Substack essay, “How Hopelesscore Became Even More Hopeless.” https://etymology.substack.com/p/how-hopelesscore-became-even-more.

    [4] Bradford Vivian confirms that witnessing as a mode of communication and rhetorical goal is “generally epideictic in nature” (2012, 191). And as Hawhee writes: “the documentary work endemic to the epideictic genre, in short, serves the rhetorical purpose of witnessing” (2023, 28).

[5] Borrowing apt language here from Anthony Stagliano’s Disobedient Aesthetics: Surveillance, Bodies, Control (University of Alabama Press, 2024).

    [6] The book’s main project “aims to move beyond the moralizing binary of attention/distraction, to dispense with attention’s economic framing to jettison plenitudinous modern attention as an impossible ideal, and to rethink contemporary spectatorship as neither good nor bad but perpetually hybrid and collective” (2024, 35).

[7] To start, see John Berger’s Ways of Seeing (1972).

    [8] As Cynthia Miecznikowski Sheard articulates, “By bringing together images of both the real—what is or at least appears to be—and the fictive or imaginary—what might be—epideictic discourse allows speaker and audience to envision possible, new, or at least different worlds” (1996, 770).

[9] Quote is from the second notification that US TikTok users received on January 19, 2025.

    [10] https://www.youtube.com/watch?v=s5Cb6bznQYI&t=319s

    [11] https://www.youtube.com/watch?v=s5Cb6bznQYI&t=319s

[12] https://www.reddit.com/r/changemyview/comments/p95po4/cmv_tiktoks_live_feature_is_immoral_it_gets/

    [13] https://www.reddit.com/r/changemyview/comments/p95po4/cmv_tiktoks_live_feature_is_immoral_it_gets/

    [14] https://www.youtube.com/watch?v=s5Cb6bznQYI&t=319s

    [15] https://x.com/tim_cook/status/1896951716517662999

    [16] https://corp.oup.com/news/brain-rot-named-oxford-word-of-the-year-2024/

    [17] https://www.rollingstone.com/culture/culture-news/ai-girl-maga-hurricane-helene-1235125285/

    [18] https://www.vice.com/en/article/social-media-threatens-global-democracy/ 

  • Alexander R. Galloway–The Uses of Disorder (A Review of David Golumbia’s Cyberlibertarianism)

    The Uses of Disorder: A Review of David Golumbia’s Cyberlibertarianism

    Alexander R. Galloway

Does disorder have a politics? I suspect it must. It has a history, to be sure. Disorder is quite old, in fact, primeval even, the very precondition for the primeval, evident around the world in ancient notions of chaos, strife, or cosmic confusion. But does disorder have a politics as well? As an organizing principle, disorder achieved a certain coherence during the 1990s. In those years technology evangelists penned books with titles like Out of Control (the machines are in a state of disorder, but we like it), and The Cathedral and the Bazaar (disorderly souk good, well-ordered Canterbury bad).[1] The avant argument in those years focused on a radical deregulation of all things, a kind of full-stack libertarianism in which machines and organisms could, and should, self-organize without recourse to rule or law. Far from corroding political cohesion, as it did for Thomas Hobbes and any number of other political theorists, disorder began to be understood in a more positive sense, as the essential precondition for a liberated politics. Or as the late David Golumbia writes in Cyberlibertarianism, the computer society of the twentieth and early twenty-first centuries culminated in “the view that ‘centralized authority’ and ‘bureaucracy’ are somehow emblematic of concentrated power, whereas ‘distributed’ and ‘nonhierarchical’ systems oppose that power.”[2] And, further, Golumbia argues that much of the energy for these kinds of political judgements stemmed from a characteristically right-wing impulse, namely a conservative reaction to the specter of central planning in socialist and communist societies and the concomitant endorsement of deregulation and the neutering of state power more generally. Isaiah Berlin’s notion of negative liberty had eclipsed all other conceptions of freedom; many prominent authors and technologists seemed to agree that positive liberty was only ever a path to destruction.[3] Or as Friedrich Hayek put it already in 1944, any form of positive, conscious imposition of order would inevitably follow “the road to serfdom.”[4] Liberty would thus thrive not from rational order, but from a carefully tended form of disorder.

Ceci tuera cela, wrote Victor Hugo. “This will kill that. Books will topple buildings … printing will kill architecture.”[5] As Golumbia discusses in chapter 4, cyberlibertarians frequently use the analogy of Gutenberg when speculating on the revolutionary politics of new digital technologies. The Internet will transform society, cyberlibertarians argue, by doing away with all the old hierarchies and gatekeepers, much as the printing press once weakened the clergy’s monopoly over the Good News. It’s a real historical transformation, perhaps, but the phrase is also meant to work as a metaphor. This will kill that. Computers will topple buildings. And it’s even more precise than this. Computers do away with the very concept of “building,” cyberlibertarians argue, because computers are inherently disruptive of hierarchies and institutions. Computers perform a kind of un-building, a deconstruction of all hitherto existing constructions. Or as Jacques Derrida once divulged with a refreshing candor, “[i]f there had been no computer, deconstruction could never have happened.”[6] The cyberlibertarians say something similar: behold the modern computer; in its wake are dissolved all the old hierarchies of Western culture.

    Should we believe all this, this specific rhetoric of disorder? I, for one, don’t. And neither did Golumbia. I don’t believe Hayek. And if I were to believe Derrida, I doubt that he himself understood the consequences of such a pronouncement.[7] However, I’m compelled to stay with the logic of disorder, at least for a while, given how disorder has colored so much of contemporary life. The disorder is real, I maintain, even if one should be skeptical about the rhetoric of liberation accompanying it. By the end I hope to convince you that disorder is not the general unraveling of order, but in fact an alternative system of order, and thus a clearly articulable form of political power.

    In other words, what the tech evangelists got wrong, and what Golumbia got right, was that this new chaotic infrastructure, this new anarchy of flesh and ferrite, did not signal a generalized relaxation of order and organization, but in fact constituted a new system of management just as robust as any of the old hierarchies. (Tellingly, Gilles Deleuze once labeled the burgeoning computer epoch a “society of control,” not a society of liberty or justice.[8]) Particularly formative for me in arriving at this opinion were books like Branden Hookway’s Pandemonium, an unclassifiable text from 1999 devoted to the “environment of ‘all demons,’” understood through “chaotically activated surfaces, a swirl of constant motion, even brutal ubiquitous insurrection … a sort of diabolic friction between heaven and earth.”[9] What Hookway helped me understand was that the new pandemonium of the marketplace didn’t so much forestall the new serfdom as inaugurate a new type of subordination, even if the shape of that new subordination did not resemble Winston Smith kneeling underneath the supersized face of Big Brother. The new subordination was somehow “free” and self-driving, in that all participating agents within the system (each machine, each person) were obligated to induce their own subsidiary statuses within a swirl of contingent encounters. Forget about rugged individualism; everyone seemed content just being a beta. Capitalism had entered its cuck phase. There’s a permanent pecking order, and the market bull is always ahead of you.[10]

    This logic of disorder is baked into computer networks. For example, computer protocols like Transmission Control Protocol (TCP) and Internet Protocol (IP) were designed to be open, free, and flexible, not rigid or tyrannical. And indeed they are! If there is a tyranny, it’s a tyranny stemming from the absence of tyranny. Today’s protocols claim to lack any kind of central authority. Of course this is a convenient myth, as new kinds of authorities emerge precisely from an environment bent on excluding authority. Network protocols have de jure authorities in the various international standards bodies such as the IEEE (Institute of Electrical and Electronics Engineers). Networks also have de facto authorities in the small number of behemoth nodes that claim an inordinate percentage of network throughput and computing power. Look up how much of the Internet runs on Amazon Web Services alone; you might be shocked at the result. But the argument goes further than that. Even at the point of breaking up all the monopolies and structurally removing all anti-markets, the control society would remain. Even if we managed to expropriate every billionaire, and got all markets to hum with zero friction, the beautiful disorder of control society would remain. It’s all just random variations of values in an enormous planetary spreadsheet; it’s all just arbitrage within a disorderly parade. Or to borrow the language of psychoanalysis, today’s cyberlibertarians are classic hysterics. They desperately strive to undermine order, while also propping up a new technical regime (if only for the purposes of further undermining it).

    Is disorder the best word to describe this? Might disorganization work better? I am trying to put my finger on a specific phenomenon that is old but has accelerated over the last several decades. I see it as characteristically American, from my vantage at least, a phenomenon that combines different tendencies from disorganization and decentralization, to anti-authoritarianism and anti-foundationalism. What ties these tendencies together is a generalized skepticism toward institutions, fueled by a fundamental belief in the power of the individual paired with a skepticism toward others, a skepticism that frequently blossoms into outright contempt. In America, and the American West in particular, these tendencies are inextricable from racism and xenophobia within the political sphere. The wars against American Indians in the Black Hills or the Chiricahua Mountains are not so remote from today’s wars on the homeless in Grants Pass or the Tenderloin. In what Richard Barbrook and Andy Cameron termed “the Californian Ideology,” technology itself might embody these same kinds of carceral exclusions, taking advantage of technologies of disorder to promulgate a structure of mutual contempt, thereby furthering an institution that undermines all institutions.[11]

    One of the winners out of all of this has been Ayn Rand, a mediocre novelist who left Soviet Russia for Hollywood America, and whose name is now permanently associated with cyberlibertarianism. During a revealing segment near the start of his BBC documentary All Watched Over by Machines of Loving Grace, filmmaker Adam Curtis chronicled how Silicon Valley has paid homage to Ayn Rand time and again, from the tech entrepreneurs who have christened their daughters Ayn and their sons Rand, to the various consultancies and finance companies with names like Fountainhead or the Galt Group. The profusion of Rands is dizzying: all those CEO sprouts named Rand, Ayn Rand herself, but also the libertarian Rand Paul, mixed together with white papers published by the RAND Corporation.[12]

    Curiously, the 1960s counterculture both helped and hindered these developments. Mobilizing a kind of tactical ambiguity, the counterculture proposed any number of tech-centric utopias while also experimenting with versions of pastoral communalism that many of the new corporate magnates inherently despised. Golumbia has resolved the ambiguity by highlighting the many rightward tendencies while remaining unconvinced of their putative leftward potential.[13] Hence in Golumbia’s account, tech institutions like the WELL and Wired Magazine, along with figures like Stewart Brand and John Perry Barlow, are all harbingers of a creeping conservatism, not innervating indicators of a liberated future.

    In resolving the ambiguity, Golumbia assembles a mountain of evidence. Going person by person, he shows that a sizable number of figures from the computer revolution were either proud right-wingers, or ought to be labelled as such by virtue of their affection for Atlas-Shrugged-style libertarianism. By contrast, to enumerate all of the bona fide socialists or communists among the raft of hackers and computer scientists driving cyberculture would produce a very short list indeed. Beyond individual personalities, Golumbia also shows that many of the cherished organizations within cyberculture (such as Wikipedia or the Electronic Frontier Foundation), along with many of the Internet-oriented campaigns of recent years (such as the 2012 campaign against SOPA and PIPA legislation), typically act in the service of new-economy titans in Silicon Valley at the expense of old economy dinosaurs like Hollywood. Whoever wins, we lose.

    Many readers will blanch at the details; I even did in certain places, given my interest in open-source software, the hacker community, and other aspects of cyberculture targeted by Golumbia as inherently right-leaning. I fear some readers will simply discard his argument out of hand, not wishing to have their base assumptions tested, as happened with Golumbia’s previous work on the right-wing politics behind cryptocurrencies.[14] Listening to some of the lectures and podcast appearances he gave before his death, I could hear how actively concerned Golumbia was about the mismatch between the evidence offered and its reception by both his supporters and critics. With patience and composure, but clearly exasperated, Golumbia would frequently note how the Internet elicits a disproportionate amount of goodwill, despite all of its negative and even reactionary tendencies. I just don’t know how much more evidence you need, Golumbia lamented in different ways on different occasions. In fact, Golumbia was clever enough to scrutinize his own rhetorical disadvantage, ultimately adducing this disadvantage as a piece of the argument itself. According to him, cyberculture entails an inversion between evidence and belief. Hence it is entirely possible for Golumbia’s readers to accept his evidence on rational terms, while also stubbornly believing the opposite. Yes, Golumbia is correct about bitcoin, but I still want to get rich off crypto…. Yes, he’s correct about Google, but I still love my Gmail account. This mechanism of disavowal — yes, but still — allows cyberculture to appear progressive on the surface, while also promulgating reactionary politics at its core.[15] It is a classic instance of ideological inversion. The very thing that users desire is also the thing that undermines them. Or as Golumbia reminds his readers on page after page: beware of geeks bearing gifts!

    Indeed, the question of gifts sits at the heart of many of these debates. Economists and legal theorists frequently talk about the advent of the so-called gift economy rooted in the wide circulation of free content online. And one of the founding principles of the open-source movement turns on a claim about gifts, or the lack thereof. Since the 1990s, computer scientist Richard Stallman has been one of the most visible figures in the free and open-source software movement. A technical genius, Stallman is notable for his ability to write compellingly about the topic, while also evangelizing through lectures and other public appearances. Perhaps the most widely quoted passage from Stallman has to do with distinguishing between two senses of the word “free.” “‘[F]ree software’ is a matter of liberty, not price,” Stallman has insisted. “To understand the concept, you should think of ‘free’ as in ‘free speech,’ not as in ‘free beer.’”[16] I recall hearing this line many times during the first Internet boom of the late 1990s. Free speech, yes; free beer, no — that was the essence of liberated software according to Stallman and his ilk. It always struck me as misguided. But I guess the ebullience of those years made it feel too petty to debate the point. So at the risk of exposing myself to ridicule, let me be crystal clear today: If we are stuck with Stallman’s perhaps artificial binary, the truly progressive position would obviously be the second option, free beer! One must insist on this. Stallman was devilishly clever to constrain our choices to only these two terms, given how such a framing inherently mocks the progressive position as outrageous and frivolous, like those old conservative tabloids that would satirize the workers’ movement as wanting to make the streets flow with champagne. A sense of liberty is paramount within any healthy society. But the left has always understood freedom through the prism of justice, hence not freedom for freedom’s sake, but rather free social services, free public commons, free health care, free education, and, above all, freedom from the tyranny of private property. Or as Golumbia explains in plain terms: “The roots of open source do not emerge from Marx. Instead, they are more in line with anarcho-capitalists like Murray Rothbard and David Friedman.”[17] (I often wonder whether left tech even exists at all. Has humanity yet invented a communist computer?) To repeat, Richard Stallman made the wrong choice regarding freedom, and the success of his error has negatively influenced the history of software and computing for the last three decades. Or, to cede the thread back to Golumbia, Stallman was wrong, but, more importantly, his being wrong was evidence of his right-wing tendencies.

    On this point Golumbia benefits from a certain slippage between political monikers. While many of the antagonists in his book are libertarians, in fact a good portion of them would better be described as mainline liberals, and indeed label themselves as such. The key for Golumbia is to insist on defining liberal in the traditional Lockean sense of individual liberty, private property, and market capitalism, rather than how the label tends to be used in the contemporary vernacular (as a loose synonym for NPR, Volvos, and Elizabeth Warren). Golumbia does this effectively in the book. Yet I sometimes found myself having to rehearse every step of the argument in order for the point to land. Many of Golumbia’s readers will readily agree that Elon Musk and Peter Thiel are political reactionaries; but the proposal is more labored when it comes to Cory Doctorow or danah boyd.

    That Musk has described himself as an anarcho-capitalist complicates the discussion a great deal.[18] If Musk is an anarchist too, then, yuck, I will decline association. And yet, while conspiratorial thinking is no doubt enjoyable, particularly when it means putting capitalism in the crosshairs, there’s no anarchist conspiracy taking place in corporate boardrooms, alas. The “anarcho” in anarcho-capitalism is a misnomer of the highest order; Musk & Co. are not anarchists in any real sense of the term. As Golumbia explains with more patience than I could ever muster, anarcho-capitalists do not adopt any of the various principles of political anarchism such as radical equality via the removal of social hierarchy, a rejection of representation in favor of local decision making and communization, or Peter Kropotkin’s mutual aid contra Charles Darwin’s survival of the fittest. (Enumerating the principles of anarchism is absurd of course, at least superficially; those made nervous by the listing of principles might prefer to think in terms of tendencies or practices, identified merely to facilitate the very contingency of anarchism, its essential disorder.) And yet so many tech entrepreneurs want to fly the black flag. Do these titans of industry know more than they let on? Do they admit to themselves that capitalism is a corrosive force in society? Is “anarchism” just a sexier word for “disruption” (which itself was a sexy word for all the daily depravities of capitalism)? I already know Marx and Engels’s lamentations on the disruptive forces of bourgeois society, “[a]ll that is solid melts into air, all that is holy is profaned,” and yet I have to hear it again from a bunch of cheery entrepreneurs?[19]

    Here’s a guiding principle to help stay mentally balanced: less Tim May and more Todd May. I screw up the names myself sometimes. Todd, the leftist philosopher who first made his mark thirty years ago with a lean and punchy book about political anarchism[20]; Tim, the cypherpunk engineer and low-skill author of the “Crypto Anarchist Manifesto” about techno anarchism. (“A specter is haunting the modern world, the specter of crypto anarchy. … Arise, you have nothing to lose but your barbed wire fences!”[21]) Is it even worth complaining about Tim’s amateurish allusions when the ideas driving them are so repulsive? As Golumbia diligently documents in his book, Tim was virulently bigoted against Blacks, Jews, and Latinos. Golumbia reproduces some of the offending passages in his book — I won’t cite them myself; the quotations are not worth your eyes — but Golumbia’s true task was to show readers exactly why Tim’s bigotry paired so easily with his libertarianism.

    I suspect that the fatal flaw of cyberlibertarianism has been to value formal determinations over political ones. By formal determinations I mean the adoption of tools and techniques selected for their specific shape and arrangement rather than due to the political realities they engender. Hence the cyberlibertarian values of openness over closedness, distribution contra centralization, the horizontal instead of the vertical, flows rather than fetters, and rhizomes not trees. Yet the disparaged second terms in this list of pairs are often politically valuable, even necessary. For example, closedness is necessary for privacy, and centralization helps with economies of scale.

    Here we may also definitively untangle the unfortunately intimate relationship forged between libertarianism and anarchism in recent decades. Anarchists want a different shape, that’s true, but a shape that directly informs a series of political desires such as mutual aid, anti-racism, collapsing the hierarchy of representation, and so on. Whereas libertarians use a superficial form of anarchism as camouflage to hide what amounts to cynical egoism: I’m an anti-foundationalist because I just don’t want to pay taxes.[22]

    What gets overlooked by cyberlibertarians, and frankly by many others including some proper anarchists, is that these arrangements (horizontality, openness, free flows) constitute a system of order like any other. Fickle and free liquidity furnishes no exemption from organization. My dear anarchist comrades—at least some of them—ought to be admonished on this point and this point alone, namely for continuing to believe that anti-foundationalism doesn’t entail its own autonomous form of power and organization. In the end, anarchism is not so much the absence of government or the annihilation of a foundation — arkhe plus the alpha privative — as it is the adoption of a specific set of formal and political virtues (virtues which just so happen to resist established domination and undermine centralized authority). This is part of why disorder has a politics, precisely because it has no inherent political valence, and thus can and should become a site of struggle.

    If Golumbia overlooked anything in this long and thorough book it was no doubt the domain of artificial intelligence. I imagine he omitted any serious discussion of AI due to practical concerns; it’s a massive topic and would have compounded the book’s scope and length significantly. Yet AI fits the book’s primary thesis. Current generation AI is inherently cyberlibertarian because it requires enormous stockpiles of putatively free data, unfettered by regulations around copyright or restrictions over ownership. The fair use doctrine has been mobilized as a new form of corporate theft of common resources, so that “free speech” will now alchemically transform into a glass of “free beer,” but only for those culling the data and gulping its value. Marx wrote about “primitive accumulation” at the end of Capital, vol. 1; AI is just the latest wave of this type of accumulation by dispossession.[23] (In fact the theft of value from unpaid micro labor is only the most egregious violation in a long list that should also include the squandering of vast amounts of energy and natural resources. If the overdeveloped nations of the world weren’t hastening climate catastrophe fast enough, we’ve also invented robots to burn additional fossil fuels on our behalf. Suicide by proxy.)

    Here too we see a new order forged from disorder. I mean that very literally. Certain types of AI, diffusion models in particular, explicitly use randomness and other entropic phenomena during the process of image generation. Discrete rationality is sterile and deterministic, alas; it may only transcend itself via excursions into more fertile lands, what information scientists call latent space and what Deleuze called the virtual. Computers have long served as a special tool to leverage the distinction between the orderly and the disorderly, to form a bridge between these two domains. And the richest source of disorder is us, the users.
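
    To make that claim concrete, consider a minimal sketch, in Python, of the forward (noising) half of a DDPM-style diffusion model: the step in which randomness is literally injected into an image so that a network can later be trained to reverse the corruption. The function names and the noise schedule below are illustrative assumptions on my part, not anything drawn from Golumbia’s text.

        import numpy as np

        def forward_diffusion(x0, t, alpha_bar):
            """Corrupt a clean image x0 into its noised version x_t.

            Implements the standard closed-form DDPM forward process:
                x_t = sqrt(alpha_bar[t]) * x0 + sqrt(1 - alpha_bar[t]) * noise
            where the noise is drawn fresh from a Gaussian, the entropic
            ingredient the generative model later learns to undo.
            """
            noise = np.random.standard_normal(x0.shape)
            return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

        # A conventional linear noise schedule: alpha_bar decays from
        # nearly 1 (no noise) toward 0 (pure noise) across T steps.
        T = 1000
        betas = np.linspace(1e-4, 0.02, T)
        alpha_bar = np.cumprod(1.0 - betas)

        x0 = np.random.rand(64, 64)  # stand-in for a 64x64 grayscale image
        x_noisy = forward_diffusion(x0, t=500, alpha_bar=alpha_bar)

    Generation then runs this loop in reverse, denoising step by step out of pure noise, which is why the disorder is not incidental to these systems but constitutive of them.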

    Hannes Bajohr has brilliantly described AI image generation in terms of an “operative ekphrasis,” that is, the use of textual description to summon an image into existence, as Homer did with the shield of Achilles.[24] Everything inside a computer is text, after all, or at least a series of alphanumeric tokens (represented primarily as integers and ultimately as discrete changes in electrical voltage). There are no images inside the beige box, even when it outputs a synthetic picture. And yet the machine accepts commands (i.e., textual prompts) that actualize a single image from out of the near infinity of possible alternatives.
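
    The point can be demonstrated in a few lines. The sketch below uses raw UTF-8 byte values rather than the learned subword tokenizers of actual image models, an admitted simplification, but the principle it shows, text as integers all the way down, is the same:

        # A textual prompt exists inside the machine only as integers,
        # here the UTF-8 byte value of each character.
        prompt = "the shield of Achilles"
        tokens = list(prompt.encode("utf-8"))
        print(tokens)  # [116, 104, 101, 32, 115, 104, ...]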

    Disorder in technical systems was first defined by one Ludwig Boltzmann. However, “no Boltzmann without Shannon,” as Friedrich Kittler once insisted.[25] The historical causality appears to be reversed. But is it? I imagine Kittler to have meant that the existence of Ludwig Boltzmann in 1877 led naturally to the existence of Claude Shannon in 1948. And this is no doubt true. The mathematical definition of entropy found in Boltzmann was directly deployed, mimicked even, by Shannon in his definition of information.[26] In other words, disorder has a history, and it has a politics, but it also has a technology. And this disorder technology, this entropy technology, has been central to cyberlibertarianism from the outset. Encryption technology, the killer app of cyberlibertarianism, is simply unthinkable without technologies of disorder, specifically the ability to fabricate high-quality random numbers and the (practical) inability to calculate the factors of large integers. So AI synchronizes with Golumbia’s theme due to the rhetorics of liberation surrounding the extraction of value. But also through the more proximate connection of AI diffusion models that map between low-entropy images and high-entropy images.
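
    A toy sketch of RSA-style public-key encryption shows where that practical inability does its work. The primes below are absurdly small, chosen only for legibility; real keys are built from random primes of a thousand or more bits, which is also where the high-quality random numbers come in. This is the generic textbook construction, not a rendering of any particular system Golumbia discusses.

        from math import gcd

        # Toy RSA: the scheme's security rests entirely on the difficulty
        # of recovering the secret factors p and q from the public n.
        p, q = 61, 53                # secret factors (real ones are huge and random)
        n = p * q                    # public modulus: safe to publish
        phi = (p - 1) * (q - 1)      # computable only with p and q in hand
        e = 17                       # public exponent
        assert gcd(e, phi) == 1
        d = pow(e, -1, phi)          # private exponent (modular inverse; Python 3.8+)

        message = 42
        ciphertext = pow(message, e, n)          # anyone can encrypt with (n, e)
        assert pow(ciphertext, d, n) == message  # only the factor-holder decrypts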

    This is what I will retain most from Golumbia’s final work, that while the baubles and trinkets invented by cyberlibertarians in Silicon Valley, Bangalore, or Shenzhen are touted for their ability to disorder the order of things, such disorder is ultimately a distraction. So here at the exit, let’s exit the concept entirely. Instead I prefer to insist that the apparent disorder at the heart of cyberlibertarianism, along with the apparent anarchy at the heart of anarcho-capitalism, are merely new forms of order. And this new order is also a clearly articulable form of power.

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation.  

    [1]    See Kevin Kelly, Out of Control: The New Biology of Machines, Social Systems and the Economic World (New York: Basic Books, 1994) and Eric S. Raymond, The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary (Sebastopol, CA: O’Reilly Media, 1999). Interestingly, “cathedral,” as a proper name for the structure of power, has also been mobilized by neo-reactionary authors like Curtis Yarvin.

    [2]    David Golumbia, Cyberlibertarianism: The Right-Wing Politics of Digital Technology (Minneapolis: University of Minnesota Press, 2024), 62. Golumbia credits Langdon Winner for coining the term cyberlibertarianism in his essay “Cyberlibertarian Myths and the Prospects for Community,” ACM SIGCAS Computers and Society 27, no. 3 (September 1997): 14-19.

    [3]    On negative liberty (the removal of freedom’s fetters) and positive liberty (the assertion of free conditions), see Berlin’s 1958 lecture “Two Concepts of Liberty” in Isaiah Berlin, Liberty (London: Oxford University Press, 2002), 166-217.

    [4]    Friedrich Hayek, The Road to Serfdom (Chicago: University of Chicago Press, 1944).

    [5]    Victor Hugo, Notre-Dame de Paris (Paris: Gallimard, 1973), 244-245.

    [6]    As Friedrich Kittler reported in an interview: “I was very pleased that Jacques Derrida, during a recent visit to the university in Siegen, actually uttered the sentence (after some questioning): ‘If there had been no computer, deconstruction could never have happened.’” See Friedrich Kittler, “Spooky Electricity: Laurence Rickels Talks with Friedrich Kittler,” Artforum 31, no. 4 (December 1992): 67-70, p. 68.

    [7]    Despite being a devoted user of Macintosh computers, Derrida had no real intellectual engagement with computation during his lifetime. Some notable exceptions exist, including Béatrice and Louis Seguin’s interview with Derrida, first published in La Quinzaine Littéraire (August 1996) and later included as chapter three, “The Word Processor,” in Jacques Derrida, Paper Machine, trans. Rachel Bowlby (Stanford: Stanford University Press, 2005), 19-32. I engage the relationship between Derrida and computing at greater length in a forthcoming essay titled “What are the Media that Determine Philosophy?”

    [8]    See Gilles Deleuze, “Postscript on Control Societies,” in Negotiations, trans. Martin Joughin (New York: Columbia University Press, 1995), 177-182.

    [9]    Branden Hookway, Pandemonium: The Rise of Predatory Locales in the Postwar World (New York: Princeton Architectural Press, 1999), 23. Like Golumbia, Hookway also passed away far too young.

    [10]   The urge to rank users into a specific set of hierarchical tiers based on captured data is expertly investigated in two recent works of social science: Marion Fourcade and Kieran Healy, The Ordinal Society (Cambridge: Harvard University Press, 2024); and Cédric Durand, How Silicon Valley Unleashed Techno-feudalism: The Making of the Digital Economy, trans. David Broder (London: Verso, 2024).

    [11]   For more on what he calls carceral systems, by way of Anthony Wilden’s important early text on digital theory, System and Structure, see Seb Franklin’s essay “The Pattern and the Police: Carceral Systems and Structures” in Parapraxis 4 (August 2024): 109-128.

    [12]   Based in Santa Monica, California, the RAND Corporation was in fact not named after Ayn Rand, but rather as an acronym of “research and development.”

    [13]   Golumbia’s own leftist politics hinged on “the political, the social, and the human,” as he puts it in the book’s epilogue (Cyberlibertarianism, 401). By this he means fostering a robust democratic state, supported by strong public institutions and an educated citizenry. Golumbia was anti-fascist because fascism threatens that Whiggish ideal; he was anti-capitalist for the same reason.

    [14]   David Golumbia, The Politics of Bitcoin: Software as Right-Wing Extremism (Minneapolis: University of Minnesota Press, 2016).

    [15]   For more on disavowal as a psychic mechanism see Alenka Zupančič, Disavowal (Cambridge: Polity, 2024). The inverse relation between evidence and belief has also been a part of Slavoj Žižek’s intellectual project for many years. See, inter alia, Slavoj Žižek, The Sublime Object of Ideology (London: Verso, 2009).

    [16]   Richard M. Stallman, Free Software, Free Society: Selected Essays of Richard M. Stallman (Boston: GNU Press, 2002), 43.

    [17]   Golumbia, Cyberlibertarianism, 24.

    [18]   The world’s most visible suck-up, Musk called himself a “utopian anarchist” in a tweet from June 16, 2018, while more recently favoring the descriptors “dark, gothic MAGA” at a Donald Trump rally held on October 27, 2024 at Madison Square Garden. Musk may be MAGA, but, dear reader, he is most certainly not dark, gothic, utopian, or anarchist.

    [19]   Karl Marx and Friedrich Engels, The Communist Manifesto, trans. Samuel Moore (London: Penguin, 2002), 223.

    [20]   Todd May, The Political Philosophy of Poststructuralist Anarchism (University Park: Pennsylvania State University Press, 2021). May’s argument remains relevant for a number of reasons. I will merely highlight one central feature of the book, how May explicitly characterized French poststructuralism as anarchist. As he put it unambiguously in the final sentence of the book, anarchism is the “most lasting … legacy of poststructuralist political thought” (155). Some will quibble over the details, and a lot depends on how one defines the boundary of French poststructuralism; May’s references are predominantly Michel Foucault, Gilles Deleuze, and Jean-François Lyotard. But there’s no doubt in my mind that French theory, broadly conceived, illustrates the general migration, evident within leftist intellectual circles overall during the last fifty years, away from Leninism and toward anarchism, away from the red and toward the black. In a recent work, Catherine Malabou has also traced the peculiar relationship between philosophy and anarchism, with reference to both France (Emmanuel Levinas, Jacques Derrida, Michel Foucault, and Jacques Rancière) and elsewhere (Aristotle, Reiner Schürmann, and Giorgio Agamben). See Catherine Malabou, Stop Thief!: Anarchism and Philosophy, trans. Carolyn Shread (Cambridge: Polity, 2023), along with an even more recent book on the question of property in the work of anarchist Pierre-Joseph Proudhon, Il n’y a pas eu de Révolution: Réflexions sur la propriété privée, le pouvoir et la condition servile en France (Paris: Rivages, 2024). Golumbia was disappointed by Malabou’s June 14, 2018 statement in Le Monde titled “Cryptomonnaie, stade anarchiste du capitalisme” [“Cryptocurrency–The Anarchist Stage of Capitalism”], where she explained her interest in cryptocurrencies, and specifically her rationale for endorsing John McAfee’s “Declaration of Currency Independence,” a cyberlibertarian document. In fact a number of prominent leftist theorists have expressed an interest in crypto. See, for instance, Brian Massumi, who in his book 99 Theses on the Revaluation of Value: A Postcapitalist Manifesto (Minneapolis: University of Minnesota Press, 2018) proposed a “maximally non-compromising, postblockchain speculative alter-economy” (20).

    [21]   These being the first and last lines of Tim May’s “Crypto Anarchist Manifesto” from 1992 (https://groups.csail.mit.edu/mac/classes/6.805/articles/crypto/cypherpunks/may-crypto-manifesto.html). A whole study could be done on how engineers and entrepreneurs have adopted the literary genre of the political tract, while almost completely inverting the political impulses of the old avant-garde manifestos or revolutionary cahiers de doléances. Another such representative, also discussed by Golumbia, is the “Magna Carta for the Knowledge Age” co-authored by Esther Dyson, George Gilder, George Keyworth, and Alvin Toffler. This new Great Charter of Liberty opened by asserting that “[t]he powers of mind are everywhere ascendant over the brute force of things,” before lapsing into a fairly predictable form of pro-market libertarianism. See Esther Dyson, et al., “Cyberspace and the American Dream: A Magna Carta for the Knowledge Age” (http://www.pff.org/issues-pubs/futureinsights/fi1.2magnacarta.html).

    [22]   Which prompts the necessary inversion: I’m an anarchist, and I do want to pay taxes. In other words, the best kinds of anti-foundationalism revolve around an invigorated sense of responsibility and commitment, not the general dissolving of social bonds. See, for instance, Kristin Ross, The Commune Form: The Transformation of Everyday Life (London: Verso, 2024).

    [23]   In the first new English translation in fifty years, Paul Reitter has opted instead for the phrase “original accumulation,” arguing that it is closer to the original German while also avoiding unnecessary connotations suggested by the word “primitive.” See Karl Marx, Capital: Critique of Political Economy, vol. 1, trans. Paul Reitter (Princeton: Princeton University Press, 2024), 650 and note i on 836-838.

    [24]   See Hannes Bajohr, “Operative Ekphrasis: The Collapse of the Text/Image Distinction in Multimodal AI,” Word and Image 40, no. 2 (2024): 77-90. According to Antonio Somaini, this kind of operative ekphrasis “does not describe pre-existing images but rather generates images by pre-describing them” (Antonio Somaini, “A Questionnaire on Art and Machine Learning,” October 189 [Summer 2024]: 112-120, p. 115).

    [25]   Friedrich Kittler, The Truth of the Technological World: Essays on the Genealogy of Presence, trans. Erik Butler (Stanford: Stanford University Press, 2013), 183.

    [26]   Indeed Kittler was less ambiguous elsewhere: “[Boltzmann’s] entropy formula is mathematically identical to Shannon’s later information formula” (see Friedrich Kittler, Optical Media: Berlin Lectures 1999, trans. Anthony Enns [Cambridge: Polity, 2010], 125). “Mathematically identical” is imprecise, though, even if Kittler’s sentiment was correct overall: Shannon’s formulation omits the Boltzmann constant, it uses log base two rather than the natural log (in base e), and it entails a summation of probabilities.
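
    To see how near, and how far, the two formulas stand, here are their standard textbook forms (my rendering, not Kittler’s):

        \[
        S = k_B \ln W
        \qquad\qquad
        H = -\sum_i p_i \log_2 p_i
        \]

    For W equiprobable outcomes, Shannon’s H reduces to \log_2 W, so the two expressions differ by exactly the three discrepancies just enumerated: the constant k_B, the base of the logarithm, and the summation over a probability distribution.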

  • Marc Kohlbry – The Last Manager (Review of Craig Gent’s Cyberboss: The Rise of Algorithmic Management and the New Struggle for Control at Work)

    The Last Manager (Review of Craig Gent’s Cyberboss: The Rise of Algorithmic Management and the New Struggle for Control at Work)

    Marc Kohlbry

    The common man wasn’t nearly as grateful as he should be

    for what the engineers and managers had given him.

    —Vonnegut (1952, 220)

    In early 2024, Bill Anderson, CEO of the pharmaceutical giant Bayer, took to the pages of Fortune to announce the end of management. Citing the difficulties posed to sustainable growth by cumbersome workplace bureaucracy and hierarchies, the op-ed details his company’s plan to fundamentally restructure 100,000 positions across its various business units. Under the banner of “Dynamic Shared Ownership,” Bayer will do away with 99% of its 1,362-page corporate handbook in hopes of becoming “as agile and bold as a startup” (Anderson). In its absence, employees will “self-manage,” forming “self-directed teams” endowed with the freedom to select new projects every 90 days and to sign off on one another’s ideas along the way—all “without a manager in sight” (Royle). This “radical reinvention,” he promises, will “liberate our people” and save the company upwards of $2.15 billion, notably by first liberating thousands of middle managers from their employment contracts.[1]

    While Anderson’s op-ed stops short of clarifying exactly how these self-directed teams will cooperate to achieve Bayer’s corporate goals (an equation typically solved by managerial personnel), a more recent New York Times article, “If A.I. Can Do Your Job, Maybe It Can Also Replace Your C.E.O.,” indicates what the means to such ends might be. There, journalist David Streitfeld suggests that emerging technologies, namely those driven by generative AI, stand poised to take over executive decision making by analyzing new markets, discerning trends, and communicating with colleagues. “Dark factories, which are entirely automated,” he ventures, “may soon have a counterpart at the top of the corporation: dark suites” (Streitfeld 2024).

    The rationale behind these structural shifts is simple enough: because middle- and upper-management positions are highly compensated, eliminating them can result in considerable savings for employers. Faced with a growing market bubble,[2] companies like OpenAI might see in this an opportunity to deliver shareholder value by developing products capable of carrying out supervisory tasks and of supporting (that baleful corporate euphemism) workers’ efforts to manage themselves. As one former IBM consultant notes, the change delivered by AI in corporations could accordingly be “as great or greater at the higher strategic levels of management as [in] the lower ranks” (Streitfeld 2024). Indeed, for some, replacing managers with algorithms or LLMs appears as common sense: “[s]omeone who is already quite advanced in their career and is already fairly self-motivated may not need a human boss anymore,” intimates Phoebe V. Moore, a professor of management and author of The Quantified Self in Precarity: Work, Technology and What Counts (Routledge, 2017). “In that case, software for self-management can even enhance worker agency” (Streitfeld 2024).

    *

    But what might such changes actually mean for work and those who carry it out? Put otherwise, in taking up the managerial function, are information technologies truly capable of enhancing worker agency? Craig Gent’s Cyberboss: The Rise of Algorithmic Management and the New Struggle for Control at Work—an adaptation of his 2018 doctoral dissertation—responds by offering readers a powerful and timely excavation of how new workplace technologies are in fact making workers of all stripes less free—“not by chance but by design” (Gent 2024, 3). This “design,” Gent conveys across the text’s six chapters, is propped up by “algorithmic management,” “a way of organizing work in which workers are directed, monitored, tracked and assessed—all at once, in real time—by a computer system that doesn’t rely on a human manager to control it” (3-4). In an observation that recalls the motivations behind Bayer’s $2.15 billion experiment in self-management, Gent notes that, “for most practical work, human workers are simply cheaper, more reliable and easier to replace than robots” (4). Managers, it turns out, are not.

    Building on this logic, Cyberboss takes stock of the true consequences of work under algorithmic management, where “poor employment protections, high workloads and advanced technology conspire to create high-turnover jobs that come with a harsh toll of mental and physical exhaustion” (3). By concentrating his attention on these dynamics in the (UK) logistics sector (while acknowledging that they define the gig-economy as a whole), Gent identifies that the core goal of Cyberboss is to—as its epigraph from Mark Fisher would have it—“destroy the appearance of a natural order” held together by algorithmic power.

    In Chapter 1, “The Stakes,” Gent outlines three analytical strokes for fulfilling this goal. In the first, his study seeks to demystify the aforementioned natural order by tracking how contract workers are compelled to work in accordance with “objective” standards set by seemingly infallible calculations and analytics (3). Second, Cyberboss looks to illuminate a crippling blind spot of the contemporary labor movement, whose myopic focus on contract recognition, and whose related concession that a company’s “right to manage” is “its business alone,” together foster the unchecked exploitation of “flex” workers by management and workplace technologies (13). Finally, Gent endeavors to counter extant scholarship on algorithmic management in particular and gig-work more generally by identifying the limitations of calls for such technologies to be made more “transparent,” “explainable,” and “human-centered”; emanating from academics and trade unions alike, these demands, he will later insist, fall short by proposing a “technical solution to a political problem” (23).[3]

    According to Gent, what is needed instead is a “political understanding of algorithmic management on its own terms” (8). Yet, “it is important not to disappear into the abstract,” he cautions (23). Instead, “[b]ecause the politics of algorithmic management is so intimately entwined with the organisation of work, it is necessary to show how such workplaces function in practice” (23). To do so, Cyberboss concerns itself “more with discipline and management” than with wages or the effects of material precarity outside of the workplace (11). In sum, Gent explains, “I want to question what the stakes are for workers and work, and what it means for technologies of control and communication to be sites of struggle and contestation” (12). This line of inquiry ultimately permits Cyberboss to generatively account for how “workers are being managed by computers rather than replaced by them” “on the basis of cybernetic feedback loops” (6), then to reveal how workers are fighting back outside of the traditional organizational structures of the labor movement.

    To fully grasp how Gent arrives at these conclusions, it is instructive to read ahead to Cyberboss’s fourth chapter, “Technological Politics.” In this somewhat belated methodological introduction, Gent surveys several dominant views of technology’s relationship to sociopolitical dynamics. Judging “technological determinism,” “social determination” (109), and the “economic view of capitalist innovation” (113) as insufficient, he instead privileges a “theory of technological politics” aimed at accounting for the “political dynamics—in other words, class relations—that are immanent to technology” (114). In lieu of “saying technology is determined by political dynamics,” this perspective focuses on how “the scale, design or organisation of certain technical arrangements can engender technological imperatives that command particular social responses” (115).

    Rather than target capital’s conceptual abstractions, structural dynamics, or technical artifacts, Gent leans on the theory of technological politics to construct a markedly ethnographic methodology (119). “Uncovering the conflictual political interests that have been concealed, circumvented or naturalized,” he argues, “requires empirical investigation with the aim of showing that labour is never completely subordinated to capital” (132). The point, he continues, is not simply to understand relations of power, but to empower workers by unconcealing the “ongoing contingency of class struggle from within work, quite aside from any sweeping structural principles or managerial ideals we might identify” (133). Taking as a point of departure the so-called Trontian inversion—or, the insistence on “the primacy of working-class struggle within the development of capitalism”—Gent grounds Cyberboss in autonomist Marxism’s insights about how “the working class has political agency regardless of the conditions imposed upon it by either capital, the state, or traditional political vehicles such as trade unions or workers’ parties,” all while emphasizing workers’ ability to contest the dominative power of capital (117, 120). Indeed, he maintains, “the indeterminacy of technology […] leaves open the possibility that workers will contest managerial techniques as implemented through specific technologies” (123). With his sights set on the “indeterminacy” of algorithmic management, Gent finds in autonomism not only a political framework that registers how “technology is always subject to ongoing class struggle,” but a set of concepts and tools with which to effectively “develop an account of technology at work” (120).

    Among these, Gent singles out the workers’ inquiry (as developed by Romano Alquati in 1975’s Sulla FIAT) for its ability to trace “class composition,” a move that turns back the clock on post-operaismo by framing ethnography as a means of explicating particular sites of class struggle rather than zooming further out in an attempt to assess the composition of the working class as a whole (137). By centering pickers in UK distribution centers as well as couriers working for similar facilities or on lean platforms, Gent skillfully explicates how “capital and the working class are specific but relational” (130) by analyzing algorithmic management technologies as both “a prism through which to understand contemporary class struggle and an under-studied component of regimes of ‘control’ in contemporary workplaces” (137). Workers’ inquiries form the core of these analyses; however, they also serve as dynamic supplements to Gent’s assessments of the intellectual histories, media objects, and conceptual abstractions that gave rise to and continue to fuel algorithmic management. Woven together, these threads forcefully clarify the concrete fallout of this managerial mode as well as the forms of worker resistance that have risen to counter it.

    A pivotal moment in this approach comes in the third chapter of Cyberboss, “Management,” which details how algorithmic management “combines three key traditions in management thought: the scientific, the humanistic and the cybernetic” (65). Beginning with the observation that management is, “in its essence, a political project” representing “a formal and intentional division of power, information, communication and control in the workplace” (64), Gent momentarily brackets out the question of digital technology to trace the means and ends of the managerial function both historically and in the present. He begins by surveying the tradition of scientific management promoted by Frederick Winslow Taylor at the turn of the twentieth century. Understanding labor power to be a commodity that, when purchased, merely offers the potential for labor, Taylor identified the goal of management as the actualization of that labor power (68). The defining feature of this approach—or Taylorism—would ultimately be its deskilling of manual labor and subsequent separation of the conception and execution of work. This design made managers responsible for “applying detailed measurement to each element of work” (67). Under Taylorism, then, the managerial conception of labor consists of “the knowledge and planning of the labour process, the development of strategy” (76), while its execution is the eventual cooperative labor of workers themselves. Critically, the “knowledge put into the process by managers is initially gleaned” from the workers, which enables supervisors to “generate [the] general rules and targets that will govern the work process” (76-77).

    Gent continues his survey of capitalist management by registering an important corrective made to Taylor’s thinking by Frank Bunker Gilbreth, Lillian Moller Gilbreth, and Elton Mayo. In Gent’s retelling, this shift represents the “humanistic tradition,” which reflects the realization that scientific management must attend to “the human factors of the labour process” (73). Together, these thinkers urged managers to concentrate on developing affable, personal relationships with workers in order to create “social units” in the workplace; Mayo, in particular, would identify sociality as workers’ primary motivation (73). This tradition would reach its apogee in the years following World War II, largely thanks to management innovations in Japan led by Yoichi Ueno and later by Taiichi Ohno, the father of Toyotism, lean manufacturing, and “total quality control.” These markedly humanistic upgrades to Taylorism, Gent explains, would break with the idea of a perfectible system by instead insisting on the continuous refinement of the labor process over time, or kaizen in Japanese (81).

    To these interrelated approaches, Gent adds an important third: the “cybernetic tradition,” whose principal feature is “the use of feedback loops to control or steer a complex system (in this case, a workplace)” (6). This name—from which Cyberboss draws its titular prefix—points to both total quality control’s roughly cybernetic nature as well as to the management theories that grew from the seeds sown by Norbert Wiener’s foundational 1948 text, Cybernetics: Or Control and Communication in the Animal and the Machine. “As an adaptive, continuous system,” Gent clarifies, “total quality control retains ideas from scientific management about the reformulation of knowledge to produce targets, but adopts a more holistic, and arguably cybernetic, form: beyond work rates, [it] is concerned with managing work relations through communication and delivering wide control to management through attention to intra- and inter-departmental dynamics” (82). “Cybernetic” management thus creates a company-wide circuitry of feedback-based control (called a “quality-control circle”) through which “management can control various aspects of the work process by communicating with workers and encouraging their cooperation” on the basis of the information gathered from those workers as they carry out their tasks (83-84).
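
    What such a feedback circuit amounts to computationally can be suggested with a deliberately crude sketch, entirely hypothetical and drawn neither from Gent’s fieldwork nor from any vendor’s actual software, in which next period’s productivity target is re-derived from the performance data just gathered from workers:

        from statistics import mean

        def update_target(current_target, observed_rates, gain=0.1):
            """One pass of a toy cybernetic control loop.

            Measure workers' observed pick rates, compare them to the
            standing target, and feed the error back into next period's
            target, so that the output loops back to shape the very
            behavior being measured.
            """
            error = mean(observed_rates) - current_target
            return current_target + gain * error

        target = 100.0  # items per hour
        for shift_rates in [[104, 98, 110], [107, 103, 112], [111, 108, 115]]:
            target = update_target(target, shift_rates)
            print(f"next shift's target: {target:.1f} picks/hour")

    Even in so small a loop the disciplinary ratchet is visible: every period in which workers beat the target raises the target, which is what separates a cybernetic circle from a one-shot Taylorist time study.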

    Further on, Gent turns to certain core thinkers of “cybernetic management” (namely Stafford Beer) to detail how, in viewing the firm as a cybernetic feedback system, this third tradition shirks purely hierarchical control and total knowledge of the work process to instead “exercise control […] by virtue of [managers’] position within communicative flows” (96). To illustrate how this approach underpins “algorithmic” management, Gent observes that the scanning guns used by Amazon pickers mediate a process of informatic feedback. While “the separation of conception and execution persists” and management was “still present [there] in a number of ways,” these Motorola WT4000 scanners would dictate to workers the next item to be picked all while displaying a real-time evaluation of their performance (88-91). Behind the scenes, the interface was tracking and collecting performance data and in turn using that information to adjust the labor process in keeping with established productivity targets. Yet even though (or because) “[n]o one could say how the […] targets were set” (101), each data-driven adjustment appeared as neutral, objective, rational, and, above all, “beyond question” (99-100). Further, while these workers at times would intermittently interact with a human manager, they were more often in communication with “the system,” another name for “a computer database (or databases) that manages stock or order progress, tracks the work of employees, time-stamps activity, calculates performance and assigns new tasks where necessary” (101).

    These ethnographic observations lead Gent to a series of sweeping theoretical conclusions about algorithmic management. In addition to drawing on “the separation of conception and execution advocated by Taylor and the continuous improvement impulse of kaizen,” digital tools such as the scanner mediate “performance-orientated feedback loops” all while introducing “new dimensions to the workplace that force us to rethink what communication and mediation mean at work” (105-106). In the logistics sector, these “new dimensions” weigh on managers and workers alike. For the former, technologies like the scanner maximize worker efficiency by leaving the calculative aspects of conception to algorithms, which frees them up to smooth out any kinks in the labor process through direct communication with workers. For these workers, the execution of labor comes to resemble what Jamie Woodcock calls the “algorithmic panopticon,” or a form of workplace governance without a physical managerial presence wherein interaction with digital interfaces and platforms contributes to a “feeling of being constantly tracked or watched” (107).

    Nevertheless, Gent insists that the control posited by this “algorithmic panopticon” is illusory, a position that he defends across the chapters “Algorithmic Work” and “Algorithmic Management.” In a way, these chapters take up the division of labor proposed by Taylorism, but in reverse: the first focuses more on descriptions and analyses of the execution of labor in the logistics sector, whereas the second concentrates on the means and ends of its conception (though, given Cyberboss’s privileging of workers’ perspectives, there is considerable overlap between the two). These examinations support Gent’s reasoning that “the principal idea governing contemporary logistics is to minimise ‘waste’” by (in part) ensuring that workers are as “productive as humanly possible” (16-17). This is where algorithmic management steps in: by assigning, administering, measuring, and assessing this work in real time, its technologies

    cover the allocation of work, the direction of the employee towards particular items, the employee’s performance against a pick rate (itself set by the algorithm according to online order traffic), and the direction of supervisors towards workers who fail to meet targets. In such cases, algorithmic tracking and decision-making are either augmenting or replacing the traditional managerial or supervisory function. (26-27)

    Such an approach reduces logistics workers to a series of data points, or information, which is then measured on the basis of productivity targets and surveillance before being “fed back” into the system to maximize efficiency (and discipline workers who underperform) (28).

    To demonstrate how this takes place in the logistics sector, Gent elaborates specific forms of media that render workers as “tools of the algorithmic system” (40). Hardly dead media, the scanner, the barcode (41-42), digitally-augmented goggles (42), and even VR headsets (44) enable management to control individual workers by “transforming each […] into an embodied real-time data tracker” (42). The information gathered by such devices facilitates this determination in two movements. First, it allows “the system” to determine which workers will work and when; “[l]ike the goods in the warehouse, [these] workers are forever just-in-time” (152). Second, once workers are on-site, this same data is used to subject them to circular work patterns (a more concrete feedback loop), the instructions for which are set computationally and displayed on the digital devices themselves (147). “Much of the skill involved in successfully carrying out the work,” Gent goes on to highlight, “boils down to successfully acting on the basis of a digital interface” (149). This has the upshot of focusing workers’ attention on hardware and thus of minimizing communication between them (157). To prevent a total breakdown in sociality, then, the role of actual managers shifts toward “humanistic intervention” capable of ensuring that work (and its improvement) remains a “continuous process” rather than a “goal-oriented sequence” (169, 160). In sum, while “algorithmic management operates within a Taylorist paradigm, it signals a key development in terms of its ability to decentralise the managerial endeavour by distributing power across the workforce in a more democratic way, but by way of a digital media infrastructure within which real-time cybernetic feedback loops produce a more generative form of control” (171).

    This new reality, Gent warns in “Guile Against Adversity,” “poses significant issues for how we think about the capacity of workers to exercise agency within the work process” (173): increasingly faced with a system able to “fill gaps in labour” and “redirect work processes to other locations in real time,” “we can no longer rely on forms of political mobilisation that have become our common sense” (176-177). Rather than lay plans for a future political strategy (predicated on unions, strikes, and so forth), this final chapter focuses on contingent employees’ resistance to algorithmic management in “actually existing workplaces” (203). Reframing instances of “everyday resistance” and “organisational misbehavior” (178) as “metic resistance” (201), or “workers’ guile,” Gent highlights “the use of situated wisdom and experiential cunning to seize or subvert, even momentarily, the current of managerial control” (203). But while such acts[4] are capable of “reappropriating personal dignity,” Cyberboss’s author also acknowledges that it remains an open question whether metic resistance—as well as the new forms of worker sociality and “chains of discovery” that make them possible (which he terms “metic commonality”)—can be “scaled up or generalised across a workforce as part of a collective endeavour” (198, 205). Still, “[b]y thinking about resistance in terms of metis,” Gent maintains, “we are forced to consider the situatedness of political action away from ideal types, such as those found in the organiser’s repertoire” (205).

    Beyond its contributions to our understanding of the gig economy and provocative assessments of the emancipatory possibilities available to workers under algorithmic management, Cyberboss also makes crucial interventions in pressing social-scientific conversations at the nexus of political economy and digital culture. Importantly, the text grounds scholarly interest in the logistics sector (as both a site of exploitation and a possible choke point for resistance) in what one might call a social history of algorithmic management—that is, a theorization of technology’s relationship to the capitalist division of labor rooted in actually existing workplaces rather than in idealism, however cogent its political leanings. This perspective informs Gent’s skepticism of “[a] growing number on the left [who] wish to see social movements emerge around the logistics sector” (60), which later leads him to conclude that locating the strategy of “fault lines and weak points” in logistics “falters against the scale of [algorithmic] managerial control” (202). One could certainly be persuaded by this point in taking seriously Cyberboss’s arguments; however, Gent’s account is also a complementary rejoinder to work on this subject by Aaron Benanav, Jasper Bernes, or Søren Mau, who have made incisive points about how, under capitalism, “mobility is power, and means of transportation and communication are weapons” (Mau 2023, 273). To these more structural positions, Gent lends the voices of workers to reorient conversation toward, in the context of logistical power, what it is that’s to be done—though he stops short of leveraging this essential move into an affirmative elaboration of the kinds of broader, coalition-based strategy that will be necessary to effectively fight back.

    Elsewhere, Cyberboss generatively nuances certain of Mau’s arguments about “economic power,” or that which is “not immediately visible or audible as such, but just as brutal, unremitting, and ruthless as violence; an impersonal, abstract, and anonymous form of power immediately embedded in the economic processes themselves rather than tacked onto them in an external manner” (Mau 2023, 4). Importantly for the present purposes, Mau postulates that management, or “[t]he authority of the capitalist within the workplace,” is the mere “form of appearance of the impersonal power of capital”; formulated otherwise, “[t]he despotism of the workplace is nothing but the metamorphosis of the impersonal and abstract compulsion resulting from the intersection of the double separation constitutive of capitalist relations of production” (233, italics in original). By this token, Cyberboss brings Mau’s cursory insights about how the managerial function directs economic power to bear on the question of algorithmic power, in turn pointing the way to a fuller understanding of how technology might mediate “mute compulsion” in the digitized workplace along “multifarious points of communication” (Gent 2024, 170). If “management at work is the primary means by which most people experience the phenomenon of capitalism in daily life” (65), Gent’s arguments suggest that, by replacing managers with technologies aimed primarily at the production of surplus value, algorithmic management presents workers with an experience of the value form that is paradoxically more abstract and more concrete than that animated by previous managerial modes—more abstract because it is immaterial and non-human, more concrete because it offers a less mediated experience of the abstracting movements of capital. Here, more concentrated study of the place of technology in capital’s abstract domination of the concrete—particularly in the context of managerial practice—appears particularly urgent.

    Gent’s study also vitally reasserts the centrality of management for cybernetics (and vice versa), a point of convergence that is among the most enduring afterlives of the science; indeed, before migrating into anthropology, linguistics, or “French theory,” this model for communications engineering was motivating managerial thought and practice.[5] Read in this context, Cyberboss’s focus on “actually existing workplaces” provides a materialist explanation for the cybernetic underpinnings of contemporary capitalism, a welcome supplement to more recent studies of the ideological impact of the science of communication and control on neoliberal economics.[6] On this basis, Gent’s discussion of cybernetics underscores the need for research into how the science’s successive historical stages—i.e., first-order, second-order, or, later and by extension, systems theory—respectively modified the coordinates of the “scientific” and “humanistic” managerial traditions to which Cyberboss so urgently draws our attention.

    Further afield, Cyberboss should provoke similar (if perhaps uncomfortable) questions for scholars in the humanities. There, it is no secret that “cybernetic” thought has served as inspiration for a range of fields from posthumanism to poststructuralism thanks to the “ontology of unknowability” (Pickering 2010, 23) typically associated with cybernetics’ more reflexive second wave. With this in mind—and following Gent’s lead in registering that this same ontology has enabled capital to actualize labor power since the mid-twentieth century—one wonders about the extent to which some strains of humanities research have been unknowingly trafficking in managerial discourse for what would now amount to decades. Such a line of inquiry is only complicated by the fact that management scientists themselves have long since realized the use value (and not simply the exchange value) contained in the thought of Michel Foucault (McKinlay and Starkey 1998), Gilles Deleuze (Linstead and Thanem 2007), or Judith Butler (Tyler 2019), as well as in certain fields often assumed to be the exclusive province of humanistic inquiry (de Vaujany et al. 2024).

    *

    Without undercutting its arguments or diluting its many contributions, Cyberboss ends abruptly. A three-page epilogue briefly addresses the topic of artificial intelligence by glossing the 2023 Writers Guild of America (WGA) strike and the demands made therein for the curtailment of AI use across the film industry. “The WGA,” Gent suggests in closing, seems to have understood what “other unions ought to”: “the infeasibility of leaving technology within the realm of corporate decision making” (Gent 2024, 211). Rather than content ourselves with this nod to generative AI, however, we should push Gent’s study further to illuminate the futures of work and component forms of exploitation that these technologies may make possible—some of which are already upon us.

    In early 2024, I received an unexpected job offer. The position—for which I had not applied—was that of an AI model evaluator with a company that I’ll call “Mute.” This was an opportunity, I read with some confusion, to “shape the next generation of AI with [my] expertise” while “enjoying the flexibility to work where and when” I wanted. Puzzled yet intrigued by how I might do so with a background in comparative literature, I accepted.

    It was not long before I discovered that the management of this work (which Mute refers to as “tasking”) was entirely algorithmic. Once I had created an account on the company’s platform and completed the necessary “enablement” modules, the system assigned me to a project, the two pay rates for which (one for “project” tasks and another for “assessment” tasks, themselves indistinguishable in all but name) were both conspicuously lower than the promises of Mute’s initial proposition. Undeterred, I clicked the “Start Tasking” button and a new interface appeared: at the top of this window sits a timer (typically set longer than the maximum amount of time a user can be paid for—a detail buried elsewhere in a project description footnote) alongside a reveal of the task type at hand.

    For this first “Rewrite” project, Mute’s system instructed me to evaluate and improve single- and multi-turn LLM (large language model) responses according to the assessment categories of “instruction following,” “concision,” “truthfulness,” “harmfulness,” and the vaguer “satisfaction” (a catch-all for any issues that fail to fit neatly into another category). To move from one stage of a task to the next, I was required to “Verify” my recommendations by fixing any issues flagged by a series of inscrutable, AI-powered plug-ins.[7] Once my suggestions had been approved by these digital managers, I could “Submit” the task for evaluation by an anonymous “Reviewer”—a role, I learned following a later algorithmic reassignment, whose own tasks are evaluated by still other taskers using an identical 1-5 Likert scale. Regardless of one’s project or role, these scores contribute to an “Average Task Score,” which the system then uses to compute pay rates and assign future tasks, projects, and, if one is lucky, “bonus missions.”
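    For readers who want the mechanics spelled out, this scoring economy can be compressed into a short sketch. The following Python toy is purely illustrative (its names, thresholds, and rates are all invented, since Mute’s actual algorithms remain opaque to its workers), but it captures the structure just described: anonymous 1-5 ratings are averaged into a single scalar that governs both pay and the allocation of future work.

        from dataclasses import dataclass, field
        from statistics import mean

        @dataclass
        class Tasker:
            scores: list = field(default_factory=list)  # 1-5 Likert ratings from anonymous reviewers
            base_rate: float = 15.0                     # invented "advertised" hourly rate

            @property
            def average_task_score(self) -> float:
                return mean(self.scores) if self.scores else 0.0

        def effective_pay_rate(tasker: Tasker) -> float:
            """Scale pay by reviewer scores; a low average quietly erodes the wage."""
            return tasker.base_rate * (tasker.average_task_score / 5.0)

        def assign_next(tasker: Tasker) -> str:
            """Route high scorers to 'bonus missions' and low scorers to reassessment."""
            score = tasker.average_task_score
            if score >= 4.5:
                return "bonus mission"
            if score >= 3.0:
                return "project tasks"
            return "assessment tasks"  # lower-paid, identical in all but name

        worker = Tasker(scores=[4, 5, 3, 4])
        print(effective_pay_rate(worker))  # 12.0: already below the advertised rate
        print(assign_next(worker))         # project tasks

    However crude, the sketch makes the essential point: a single number, computed from other workers’ anonymous ratings, silently governs both wages and the distribution of future work.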

    The managerial function animating this division of labor operates horizontally: thanks to the system’s opaque algorithms for allocating and evaluating tasks, each tasker is unknowingly managing the work of another. But this same process is also circular: because their communication occurs textually and is mediated by standardized assessment categories, Likert scores, and other veiled parameters, taskers are transformed into algorithmic managers; cloaked in code, they become indistinguishable from the LLMs they are training (which are training them in turn). This is evident as soon as one completes a first unpaid “enablement” program, at which point one is asked to assign it a rating between 1 and 5. Within this circuit, the question of whether tasks are being evaluated by a human or an AI (or whether they are being used to evaluate the work of one or the other) is thus irrelevant.

    While tasking on Mute is a solitary endeavor, it is still possible to communicate with fellow workers by clicking on a tab labelled “Community.” Doing so will launch an embedded Discourse forum featuring locked threads with updates and reminders about the project(s) to which one is assigned as well as open threads seemingly meant for more casual exchange. Among these is the “Watercooler Thread,” a name that cruelly parodies the social relations that have been dissolved by the machinations of algorithmic management. Indeed, every move on this platform is destined for capture and optimization; in Mute’s “Community,” for instance, taskers’ scrolling and keystrokes are tracked and calculated into a “Trust” level that will determine the threads and features (including personal messaging or the ability to post links) to which they have access.
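    Discourse’s trust-level system is real, though its precise weights are configurable and more elaborate than anything shown here; purely as a hypothetical illustration, tracked activity might be rolled into a gating score along these lines:

        # Toy model of an activity-derived "Trust" level; all weights and
        # thresholds below are invented for illustration.
        activity = {"minutes_read": 42, "posts_made": 3, "links_clicked": 17}

        def trust_level(a: dict) -> int:
            score = a["minutes_read"] * 0.5 + a["posts_made"] * 5 + a["links_clicked"]
            if score > 100:
                return 2  # may post links
            if score > 50:
                return 1  # may send personal messages
            return 0      # read-only access to the watercooler

        print(trust_level(activity))  # 1, since 21 + 15 + 17 = 53

    The simplification does not blunt the point: sociability itself is metered, and the meter is the manager.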

    This brief window into Mute’s algorithmic management highlights the explanatory power of Cyberboss—even while revealing certain of its limitations. Though not pickers or couriers, for example, Mute’s workers are similarly directed, monitored, tracked, and assessed by non-human agents in real time. And, much as Gent describes of Amazon or Deliveroo, this company’s “system” facilitates the resulting flows of data through feedback loops intended to actualize labor power. In the absence of any discernible human supervisor, however, here AI stands in as the tasker’s last manager, creating an algorithmic panopticon wherein each is the source and subject of an automated mode of control. Alarming, too, is that the sociality shaped by Mute’s platform limits workers’ ability to fight back more comprehensively than a logistics facility, with its scanners and barcodes, ever could; for instance, even if taskers dream of staging a slowdown or mass log-off to improve their working conditions, every channel (and water cooler) for metic commonality is already subsumed by the system’s gaze. In contrast to the logistical distribution of material goods that Gent analyzes, then, the algorithmic power governing Mute’s immaterial labor is hardly an illusion; rather, it is totalizing, a “quality-control circle” wherein each worker is compelled to (self-)manage.

    On a more speculative level, the possible ends of this self-management complicate Gent’s conclusions about the impossibility of automating the managerial function altogether. In sum, Mute’s evaluators are carrying out the microwork that maintains LLMs’ veneer of “intelligence,”[8] including editing responses for grammar and syntax, identifying and correcting hallucinations, and generating datasets through which such tools might be trained to better respond to contextual and affective cues. The technical name given to this more general form of labor—which, as one of my Mute training modules clarified, draws on taskers’ “human knowledge and expertise” to craft products “far superior to an AI like ChatGPT”—is unmistakably cybernetic: “reinforcement learning from human feedback,” or RLHF.
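    The cybernetic kernel of RLHF can be stated in a few lines. In the standard preference-learning setup, a reward model is trained so that responses humans preferred receive higher scalar rewards, and the LLM is then tuned to maximize that learned reward. The sketch below illustrates the published technique in general terms (with invented numbers), not Mute’s proprietary pipeline:

        from math import exp, log

        def preference_loss(r_chosen: float, r_rejected: float) -> float:
            """Bradley-Terry negative log-likelihood that the human-preferred
            response outranks the rejected one."""
            return -log(1.0 / (1.0 + exp(-(r_chosen - r_rejected))))

        # Suppose a tasker preferred response A over response B, but the reward
        # model currently scores them the other way around:
        reward_a, reward_b = 0.3, 0.9

        print(preference_loss(reward_a, reward_b))  # ~1.04, a large penalty
        # Gradient descent on this loss nudges the model until reward_a exceeds
        # reward_b, folding thousands of such human judgments back into the machine.

    The feedback loop is literal: each Likert click becomes a differentiable training signal, and the resulting reward model becomes the taskers’ silent colleague.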

    But what of the products that these efforts might generate, and to whom could they be sold? After first being contacted by Mute, I quickly gleaned that the company is the labor-supplying subsidiary of a much larger B2B firm that counts OpenAI, Meta, Microsoft, and Nvidia among its clients. To prevent the most recent tech boom from becoming a bust, organizations like these have begun searching for ways to deliver AI solutions to businesses that will ensure model accuracy, secure company and customer data, minimize response bias, and, above all, feel irrefutably human. One path to this is the increasingly popular “retrieval-augmented generation” (RAG) approach, which allows the LLMs that it powers to retrieve information that is external to the datasets upon which they’ve been trained. In theory, AI infrastructures grounded in RAG can provide accurate and secure LLMs tailored to companies’ “enterprise resource planning” (ERP), “customer relationship management” (CRM), and “human capital management” (HCM) systems and needs. Conceivably, then, Bayer could hire Mute’s parent company (or one of its marquee clients) to build a RAG-powered LLM based on customized HCM datasets and the protocols laid out in their 1,362-page corporate handbook. By actualizing the necessary labor power correctly and at scale (e.g., by way of RLHF)[9], Mute’s algorithmic management could present IG Farben’s heir with its own algorithmic means of hiring, onboarding, and directing workers—or, of taking up the role of “humanistic intervention” to communicatively support self-management, much as AI is already doing for those of us on Mute’s lean platform.
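    The basic movement of RAG is easy to sketch. In the toy below (the corpus, the keyword-matching heuristic, and the llm_complete stub are all invented for illustration), relevant passages are fetched from a store external to the model’s training data and prepended to the prompt before generation:

        def retrieve(query: str, corpus: dict, k: int = 2) -> list:
            """Naive keyword overlap; production systems use vector embeddings."""
            terms = set(query.lower().split())
            ranked = sorted(
                corpus.items(),
                key=lambda kv: len(terms & set(kv[1].lower().split())),
                reverse=True,
            )
            return [text for _, text in ranked[:k]]

        def llm_complete(prompt: str) -> str:
            """Stand-in for a call to a hosted LLM."""
            return f"[model answer grounded in: {prompt[:60]}...]"

        handbook = {  # a stand-in for, say, a corporate handbook
            "sec-4.2": "employees must file onboarding forms within five days",
            "sec-9.1": "managerial review occurs quarterly via the hcm system",
        }

        question = "When are onboarding forms due?"
        context = "\n".join(retrieve(question, handbook))
        print(llm_complete(f"Context:\n{context}\n\nQuestion: {question}"))

    In production the keyword match would give way to vector search over something like that 1,362-page handbook, but the division of labor is the same: retrieval supplies the company’s facts, the model supplies the fluency, and the taskers’ RLHF labor supplies the humanlike finish.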

    Nevertheless, it would be a mistake to hastily conclude that the algorithms fueling generative AI models and other digital technologies will soon (or ever) be capable of fully automating the role of management. Gent only partially decodes this natural order: because such objects will always require a self-organizing subject[10], we must summon whatever guile we can—together with the lessons of Cyberboss and the future work it should inspire—to collectively resist becoming our own last manager.

    Marc Kohlbry received his PhD in comparative literature from Cornell University in 2022. His research centers on the intersections of literature and media, digital culture, and political economy and has been published or is forthcoming in such journals as Social Text, New Literary History, and Cultural Critique.

    Thanks to Sam Carter for his incisive reading of a previous draft of this essay.

    [1] Shoring up stock valuations by cutting managerial positions is hardly novel, let alone “radical.” In 2023, for instance, middle managers—i.e., nonexecutives who oversee other employees—accounted for almost a third of global layoffs, with notable cost-saving measures being implemented at Meta, Google, Citi, and FedEx (Olivia 2024).

    [2] See Cahn 2024.

    [3] Relevant examples can be found in Rosenblat 2018, Aloisi and de Stefano 2022, and Ajunwa 2023.

    [4] These include “the exploitation of menu options to bring about breaks; the stealing and sharing of supervisors’ codes or computer log-in details; use of the knowledge of what supervisors can and can’t know, and how busy they will be at a given moment, to amuse oneself and create problems for the stock database; defiance of the narrow forms of communication demanded by interfaces; the shared experience invoked in slowing down to 70% of productivity, reasserting workers’ autonomy over performance; [and] the ingenuity of testing new equipment in order to find new ways to subvert it” (Gent 2024, 207).

    [5] For evidence of this, one might look to Norbert Wiener’s comments on industrial management (briefly discussed by Gent 2024, 12) as well as to Beer 1959.

    [6] For examples, see Hancock 2024, Oliva 2015, or Halpern 2022.

    [7] Frequently, these algorithmic management tools claimed that “AI [had been] detected” in an entry, an issue that I could “Agree and Fix” or “Decline”—there is no third option for informing the LLM that it was, in fact, mistaken.

    [8] See also Altenried 2020, Crawford 2021, 53-87.

    [9] At the time of this writing, these efforts—and any resulting profits—still seem out of reach. See Sam Blum’s recent reporting on the subject for Inc.

    [10] Still, the automation of management strikes me as somewhat more feasible than the “proletarianization” of artificial general intelligence (AGI) (Dyer-Witheford et al. 2019, 135-138). For my thinking on the means by which digital interfaces might facilitate the former process, see Kohlbry 2024.

    References

    Ajunwa, Ifeoma. The Quantified Worker: Law and Technology in the Modern Workplace. Cambridge, UK: Cambridge University Press, 2023.

    Aloisi, Antonio and Valerio de Stefano. Your Boss Is an Algorithm: Artificial Intelligence, Platform Work and Labour. London: Bloomsbury Publishing, 2022.

    Altenried, Moritz. “The platform as factory: Crowdwork and the hidden labour behind artificial intelligence.” Capital & Class, 44:2, 2020: 145-158.

    Anderson, Bill. “Bayer CEO: Corporate Bureaucracy Belongs in the 19th Century. Here’s How We’re Fighting It.” Fortune, 21 Mar. 2024, https://fortune.com/2024/03/21/bayer-ceo-bill-anderson-corporate-bureaucracy-19th-century-leadership/.

    Beer, Stafford. Cybernetics and Management. London: English Universities Press, 1959.

    Blum, Sam. “‘It’s a Scam.’ Accusations of Mass Non-Payment Grow Against Scale AI’s Subsidiary, Outlier AI.” Inc., 14 June 2024, http://inc.com/sam-blum/its-a-scam-accusations-of-mass-non-payment-grow-against-scale-ais-subsidiary-outlier-ai.html.

    —. “Scale AI Lays Off Workers Via Email With No Warning.” Inc., 27 Aug. 2024, https://www.inc.com/sam-blum/scale-ai-lays-off-workers-via-email-with-no-warning.html.

    —. “A Scale AI Subsidiary Targeted Small Businesses for Data to Train an AI. Entrepreneurs Threatened Legal Action to Get Paid.” Inc., 28 Aug., 2024, https://www.inc.com/sam-blum/a-scale-ai-subsidiary-targeted-small-businesses-for-data-to-train-an-ai-entrepreneurs-threatened-legal-action-to-get-paid.html.

    Cahn, David. “AI’s $600B Question.” Sequoia Capital, 20 June 2024, https://www.sequoiacap.com/article/ais-600b-question/.

    Crawford, Kate. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press, 2021.

    de Vaujany, François-Xavier et al. Organization Studies and Posthumanism: Towards a More-than-Human World. London: Routledge, 2024.

    Dyer-Witheford, Nick, et al. Inhuman Power: Artificial Intelligence and the Future of Capitalism. London: Pluto Press, 2019.

    Gent, Craig. Cyberboss: The Rise of Algorithmic Management and the New Struggle for Control at Work. New York: Verso Books, 2024.

    Halpern, Orit. “The Future Will Not Be Calculated: Neural Nets, Neoliberalism, and Reactionary Politics.” Critical Inquiry, 48:2, 2022: 334-359.

    Hancock, Max. “Spontaneity and Control: Friedrich Hayek, Stafford Beer, and the Principles of Self-Organization.” Modern Intellectual History, 2024: 1–20.

    Kohlbry, Marc. “Technologies of Hope (Fiction, Platforms, Management).” Social Text, 42:4, 2024: 51-79.

    Linstead, Stephen and Torkild Thanem. “Multiplicity, Virtuality and Organization: The Contribution of Gilles Deleuze.” Organization Studies, 28:10, 2007: 1483-1501.

    Mau, Søren. Mute Compulsion: A Marxist Theory of the Economic Power of Capital. New York: Verso Books, 2023.

    McKinlay, Alan and Ken Starkey. Foucault, Management and Organization Theory: From Panopticon to Technologies of Self. London: SAGE, 1998.

    Oliva, Gabriel. “The Road to Servomechanisms: The Influence of Cybernetics on Hayek from the Sensory Order to the Social Order.” The Center for the History of Political Economy Working Paper Series, 2015, https://ssrn.com/abstract=2670064.

    Pickering, Andrew. The Cybernetic Brain: Sketches of Another Future. Chicago: University of Chicago Press, 2010.

    Rosenblat, Alex. Uberland: How Algorithms Are Rewriting the Rules of Work. Oakland: University of California Press, 2018.

    Royle, Orianna Rosa. “Pharmaceutical Giant Bayer Is Getting Rid of Bosses and Asking Nearly 100,000 Workers to ‘Self-Organize’ to Save $2.15 Billion.” Fortune, 11 Apr. 2024, https://fortune.com/europe/2024/04/11/pharmaceutical-giant-bayer-ceo-bill-anderson-rid-bosses-staff-self-organize-save-2-billion/.

    Streitfeld, David. “If A.I. Can Do Your Job, Maybe It Can Also Replace Your C.E.O.” The New York Times, 28 May 2024, https://www.nytimes.com/2024/05/28/technology/ai-chief-executives.html.

    Tyler, Melissa. Judith Butler and Organization Theory. London: Routledge, 2019.

    Vonnegut, Kurt. Player Piano. New York: The Dial Press, 1952.

  • Tim Christiaens–“Why Do People Fight For Their Exploitation As If It Was Liberation?” (Review of Jason Read’s The Double Shift)

    Tim Christiaens–“Why Do People Fight For Their Exploitation As If It Was Liberation?” (Review of Jason Read’s The Double Shift)

    “Why Do People Fight For Their Exploitation As If It Was Liberation?”

    Tim Christiaens

    Popular culture is one of the sites where this struggle for and against a culture of the powerful is engaged: it is also the stake to be won or lost in that struggle. It is the arena of consent and resistance.

    – Stuart Hall (2019, 360)

    Please Sign Here to Work Yourself to Death

    A sense of dread befalls us when the topic of work enters the conversation. From nurses in overrun hospitals to adjunct academics between teaching gigs, many workers are seeing their workload grow while paychecks and job security shrink. Yet surprisingly, people still cling to their jobs and the capitalist work ethic in general as if it were their salvation. Hustle culture and structural overwork are rampant across the labor market. From gig workers operating on multiple platforms to Wall Street interns working around the clock, the capitalist work ethic has become what Karl Marx called a “religion of everyday life” (Marx 1992, 969). The rituals of time management and incessant self-branding dominate not only people’s labor-time but also their free time. Away from the office, many still zealously perform the liturgies of self-optimization and social networking to maximize their productivity.

    When people genuinely need the income from their wages to survive, a harsh or opportunistic work ethic is understandable. Without a job, people are oftentimes reduced to the status of “homeless and empty-handed labor-powers” (Marx 2005, 509). With nothing but their labor-power to sell, they have little choice but to drag their bodies to work every morning. Yet historically, many institutions have fostered a work ethic built on higher motivations than mere fear. In early capitalism, the Protestant ethic of thrifty labor in the service of God justified newly emerging capitalist labor relations, and even without the ominous gnawing of hunger, large sections of the twentieth-century male working class in the Global North supported traditional employment relations. In the days of Fordism, even monotonous hard labor often gave access to a chance at a middle-class lifestyle and social recognition as a contributing member of society. ‘Having a job’ was a gateway to recognized public status and comfortable living standards. Thanks to a social compromise between capital and the white male working class, the latter received high wages and political representation in exchange for social peace and labor productivity. If people agreed to work hard in the factory and not vote communist, the company and the welfare state would guarantee a secure consumer lifestyle.

    For a long time, this seemed like a good deal for all parties involved. Yet circumstances changed in the 1970s and ‘80s, when economic crises, popular protests, and capitalist investment strikes blew up the social compromise. Workers wanted more opportunities for authentic self-expression in their labor, populations previously excluded from the benefits of Fordism demanded emancipation (women, immigrants, the colonized), and capital experienced a profitability crisis for industrial activity in the Global North. Without cheap resources from colonial territories and free reproductive labor from working-class women, male workers’ high wages and political influence became too costly. Large corporations moved their factories to low-cost countries, while demanding that states cut their social security systems. The fear of gnawing hunger returned to put workers in their place. However, the decline in job security and income protection still constitutes an insufficient explanation for the persistence of the capitalist work ethic. The compulsion to work is not just an externally imposed burden today, but also an ethos and intimate desire. Hustle culture permeates everyday life across social classes and lifestyles.

    When the first waves of the COVID-19 pandemic subsided, public discourse reveled in the rise of ‘quiet quitting’. Suddenly workers restricted their efforts to the bare minimum job requirements. Employees would work just enough to avoid alarming the boss but refused to put in any extra effort. A few years later, however, enthusiasm for always-on work culture has returned. In a cruel twist on Marx’s prophecies about life under communism, late capitalism has made it possible for me to do one thing today and another tomorrow, to hunt for bartender gigs in the morning, fish for online clickwork in the afternoon, curate a mass following on LinkedIn in the evening, and monetize my car on Uber after dinner, depending on whatever suits me at the time, without ever becoming hunter, fisherman, herdsman or taxi driver.

    Hustle culture promises to render each moment of the day productive. People believe themselves to be entrepreneurs of their own lives and combine multiple jobs to chase after idealized representations of financial success and social prestige. While this attitude might have been born under the sign of economic necessity, what keeps such micro-entrepreneurs going demands further explanation. Constant busy-ness is not just begrudgingly tolerated but actively desired. And this time, there is no Protestant ethic or Fordist social compromise to explain it. Even when people can count on social safety nets, like unemployment benefits, many still strive for overachievement. A deeply rooted affective attachment to work animates late-capitalist culture. This might have made sense when jobs gave access to middle-class consumerist lifestyles, but today this promise often rings hollow. University students take out loans, swallow Ritalin and cram for exams to get a diploma that no longer guarantees steady employment. Journalists spend hours building an online reputation on social media in a desperate effort to avoid being replaced by AI. Aspiring academics publish numerous papers in the hopes that, against dwindling odds, they finally receive a chance at tenure.

    After decades of dismantling worker protections and welfare institutions, this attitude seems painfully masochistic. An epidemic of burnout has consequently accompanied the rise of hustle culture. At some point, the incessant pressure to work must run out of steam. One cannot continuously push one’s body and mind beyond their limits without suffering the consequences sooner or later. Yet the work ethic is so ingrained in everyday life that many prefer to persist in overwork rather than allowing themselves some well-earned rest. Amazon installs vending machines dispensing painkillers to its warehouse workers, while life-coaches build careers out of helping people squeeze money out of every minute of their day. The Japanese even have a word for people dying from overwork: karoshi.

    What explains this insistent attachment to the capitalist work ethic even at our own peril? This question forms the kernel of Jason Read’s reflections in his recent book The Double Shift: Spinoza and Marx on the Politics of Work (Verso, 2024). Read links it to the age-old question of voluntary servitude posed by the Dutch philosopher Benedictus Spinoza: “why do people fight for their servitude as if it were their salvation?” Why, in the age of the perpetual hustle, do so many workers voluntarily subject themselves to the exploitation of the neoliberal workplace and refuse to organize?

    Beyond Traditional Ideology Theory

    According to Read, the explanation for our passionate attachment to work lies in the inner workings of ideology. He eloquently defends an affective approach, claiming that the narratives and representations that justify contemporary work culture integrate workers’ affects into an ideological rationalization of the capitalist work ethos. He opens the book with a reference to Marx’s observation that “life is not determined by consciousness, but consciousness by life” (Read 2024, 2). Through our material interactions with our surroundings, we develop ideas and representations of how the world operates and what our place in this environment is. Through the lens of Spinoza, Read interprets these interactions as always already animated by affect. The world impinges on human sensibility in ways that either increase or decrease our capacity to act. The world of work can make us angry, joyful, desperate, hopeful, and these tonalities structure our capacities to act in the world. Affects form the basic timbre of our everyday worldly conduct.

    However, ideology is not merely the immediate affective reflection of our material interaction with nature. An institutionally constructed regime of signs and representations filters these impressions through the dominant social culture. ‘Culture’ in a general sense is mediated by social institutions, and the latter influence how we perceive the world and affect our surroundings. These institutions are chiefly in the hands of the ruling classes, so the interpretive lenses we acquire to put our lived experiences into words predominantly reflect the assumptions and aspirations of the ruling elites. Our felt sensibility of the world is our own, but the vocabulary with which we express and navigate these affects comes from elsewhere. Ideology theory demonstrates how the ruling ideas of the ruling class sink so deeply into human subjectivity that they become the spontaneous ideology with which we articulate the world and our place in it. Ideology establishes what Stuart Hall has called “the regime of the taken for granted” (Hall 2016, 138), the narratives and the refrains that pre-consciously determine how our lived experiences are translated into the common vernacular of everyday life.

    Read subsequently identifies a tension at the heart of ideology and our understanding of work. On the one hand, our conscious reflections on work derive from our lived experiences, our material interactions on the job. On the other, the meaning of these interactions is filtered through a system of representations and practices alien to work itself. The ruling ideological apparatuses influence how we affectively undergo our labor conditions. People often experience their job as grueling exploitation yet live it as their liberation. According to Read, the ideology of work has curiously succeeded in coopting and subsuming people’s everyday experiences of work in an ethos that promotes structural overwork as the key to happiness.

    Work has become the answer to every problem, the solution to everything – not only for ensuring one’s economic status but also for defining relationships, one’s sense of self, and other fundamental elements of everyday life experience. In a society that increasingly eschews politics, or collective action, as a way of remedying or transforming life, work is that last remaining activity of transformation left to us. (Read 2024, 4)

    Work is indeed experienced as exhausting toil and sorrowful exploitation, yet within the ideologically constructed social imaginary of contemporary capitalism, there is no way out of this conundrum but through overwork. The ideology of work gives meaning to our efforts and articulates our hopes and joys in such a way that we see our liberation in more work rather than less. Desire itself is articulated through an ideology that attributes inherent value to the occupations of always-on work culture. Political solutions are blotted out of the regime of the taken for granted, and the only viable option left is to hustle through life in pursuit of getting rich enough to leave the rat race behind. Of course, for most of us the rat race never ends.

    In Capital Volume I, Marx predicted that two barriers would stop the infinite growth machine of capital accumulation: “the weak bodies and the strong wills of its human attendants” (Marx 1996, 406). Workers would organize and construct a collective will to counter the exploitative tactics of capital, or their bodies and minds would slowly falter under the pressure of factory-labor. Yet today, the workers’ collective will is rarely strong enough to obtain substantial gains. And their weak bodies burn out at excruciating speed with little effect. Ideology has neutralized the forces of resistance emanating from workplace domination and mobilized workers to act against their own interests.

    The red thread throughout Read’s book is the attempt to update Marxism’s traditional ideology theory to explain why people’s strong wills and weak bodies have been coopted into entrepreneurial ideology and incessant hustling. In classical Marxism, the class struggle between workers and capitalists determines the evolution of the economic base. Classical Marxism presents the labor experience as a pure or originary moment in which two classes struggle for control over the means of production. The cultural and political superstructure responsible for the development and diffusion of ideology is subsequently built on top of this struggle. In the immediate material interaction with the world, workers experience their exploitation, but the ideologies of the superstructure are allegedly designed to obscure this basic fact. All it takes for the revolution is for Marxist critics to pluck the imaginary flowers from workers’ chains with the cold force of reason to reveal the ugly truth underneath. This picture, firstly, ignores how ideologies always already permeate the work experience itself. There is no pure moment of real consciousness of workplace exploitation that is only afterwards corrupted by ideology. The affective experience of work is always already articulated through the lens of social imaginaries.

    Secondly, the classical Marxist approach is excessively rationalistic. It presumes that the mere act of informing workers of their real class interest, as opposed to their false consciousness, will make nefarious ideologies disappear, like spraying pesticides on imaginary flowers. However, as Spinoza commented, people often ‘see the better and do the worse’ (Read 2024, 166). They know the boss is a bully, the pay is unfair, and the system is rigged, yet they choose to adapt and rationalize that choice because they cannot imagine a way out. How individuals perceive and acquire knowledge about the world is immersed in an infrastructure of intimate desires that is impervious to reason. Ideology is not a mere superficial veneer that can be scraped off through the power of reason. It affects human conduct so intimately that it becomes almost indistinguishable from life itself. Our most profound sense of self and our most personal choices are products of ideological work.

    Read illustrates this point with the 2019 film The Assistant, a story about a day in the life of Jane, a junior assistant at a movie production company where sexual harassment is rampant yet kept under wraps. Jane aspires to become a movie producer herself one day, but for now she is stuck in an infinite string of menial secretarial tasks. She gradually learns more about the sexual misconduct of her boss, yet she also picks up on the quietism among her co-workers, who all seem to know yet do nothing. They see the better but do the worse. Jane optimistically goes to human resources to save her colleague from sexual harassment, yet the director makes it clear that nothing will come of her complaint. By the end of her day, Jane has accomplished nothing. She realizes that continuing her investigation equals career suicide. Her hopes and dreams of becoming a movie producer all depend on the monster everyone knows to be a sexual predator, so there is little to be gained from filing a complaint. The boss would probably remain firmly in his seat, while Jane would have to abandon her dreams of a career in film. When the credits roll, Read hypothesizes that Jane will probably return to work the next day and settle for the cynical consensus that animates the office.

    The Assistant shows that ideological acceptance of workplace domination is not just a matter of fanciful illusions that could easily be dispelled by the cold force of reason. It has seeped into the affective lifeworld of the production company. Desperate resignation rules in the office corridors. Those who know better are emotionally blackmailed into doing the worse. Their career hopes and aspirations are coopted as leverage in a cynical play of voluntary servitude. According to Read’s Spinoza, those in power actively integrate parts of the lived experiences of the powerless to let ideological rationalizations of voluntary submission settle in. They must coopt sentiments circulating in the population and give them a specific political orientation that favors the status quo. This clarifies the tension between affective experience and ideological domination in Read’s analysis: our immediate lived experiences of work do, in fact, disclose exploitation, but these experiences have been articulated within social imaginaries that coopt our deepest desires and block off any hopes for collective resistance. The only option left is to stay calm and carry on as if more exploitation were our only liberation.

    This theory of ideology enables Read to offer not only illuminating cultural readings of the despair that animates movies like The Assistant, but also to unmask the false alternatives offered in movies and tv-shows that are ostensibly more critical of late-capitalist work culture but fail to capture just how deep the ideological cooptation goes. In the first chapter, for example, Read impressively takes on two movies from 1999 that seem more critical of contemporary work culture than they actually are: Fight Club and Office Space. Both offer an explicit critique of the capitalist work ethos: Fight Club supposedly dispels the myths of “working jobs we hate so we can buy stuff we don’t need” (Read 2024, 63) and Office Space allegedly demonstrates the inanity of contemporary service and office jobs. But Read shows that these films confuse the critique of capitalist labor in general with the critique of specific forms of concrete labor in favor of other forms of (capitalist) concrete labor. Rather than attacking work culture as such, both films criticize two particular forms of work dominant in the 1990s: white-collar work in office culture and the rising trend of service work with high requirements of emotional labor. Yet neither film truly succeeds at imagining a post-capitalist, post-work future. Through the character of Peter, who ends up as a construction worker, Office Space offers a retreat to manual labor as the solution. Fight Club presents a more existentialist yet equally individualistic response. When Tyler Durden holds a convenience store cashier at gunpoint, he asks the man what job he truly wants to do. The cashier confesses he always dreamed of being a veterinarian but gave up too quickly. After this encounter with mortality, Durden lets the cashier live with the warning that, if he does not pursue his dream, Durden will come back to kill him. Neither film questions labor as such or the culture of overwork. They criticize the labor practices dominant in 1990s capitalism and replace them with manual or existentially authentic labor. They rearticulate the workers’ hatred of all labor into a critique of particular forms of labor and so leave the capitalist work ethic as such intact.

    In sum, an exploration of the ideology of work and people’s steadfast attachment to work as a means of getting ahead must take the affective dimension of ideology into account. Ideological justifications of work must convincingly put into words how people feel about themselves, each other, and their jobs. Marx and his acolytes might have called work under capitalism exploitative and alienating, but if I consider my boss and supervisors a chosen family, then trade-unionists will probably fail to organize us. Marx and his acolytes might preach about class struggle between workers and capitalists, but if I see a foul-mouthed business mogul on tv trolling Democrats who look and talk like the high-school teachers that I used to hate, I will probably cheer along. Getting people to fight for their exploitation as if it were their liberation is often a matter of putting into words the affects others have failed to convincingly articulate into a more emancipatory project.

    The Affective Hellscape of Late Capitalism

    If we study more closely how Read applies his affective theory of ideology to work culture, we find three explanations for why workers actively desire contemporary hustle culture. Despite all reflections on the subtleties of ideology, Read firstly affirms that many workers still hate their job with a passion and would gladly quit … but at the end of the month, rent is due. Not ideological cooptation of their hopes and desires, but fear motivates their attachment to overwork. These individuals do not have any strong affinity with the work ethic, but the mute compulsion of the market forces them to endure. When people fear for their livelihoods or fear losing the social recognition that comes with having a job, they will latch on to the capitalist work ethic despite all the humiliations they suffer at work.

    Read’s second answer delves more deeply into his affective theory of ideology. He explains how mass media and popular myths about work coopt the feelings people have about their job and inflect these into passionate attachments to work for its own sake. The most obvious example is how Hollywood myths about self-made businessmen or ‘chosen ones’ inspire crowds of followers to think of themselves as entrepreneurs of their own lives. If books, movies, and tv-shows repeatedly put relatable underdogs in leading roles as chosen ones that beat all the odds to save the world, then audiences will undoubtedly start to imagine similar futures for themselves as well. A common joke about zombie movies illustrates this problem well. When watching zombie movies, people often imagine themselves in the role of the sole survivor who saves the world through impressive feats of heroism. While other people are mindless automatons, only the individual’s go-getter attitude assures survival. However, in the event of a real-life apocalypse, it is much more likely for any one of us to wander among the zombies. The film preys upon our main character syndrome to encourage narcissistic predispositions toward individualism. But any sober reflection on the apocalyptic scenario should show that egoistic survivalism is a sure route to oblivion. That emancipation more often comes from collective action rather than individual success is maneuvered beyond the frame.

    Yet the power of myth goes deeper in our working lives. Especially in careers that facilitate the joys of creative self-expression or meaningful human contact, these momentary flashes of happiness are often abused as excuses for worsening pay and working conditions. Jane, The Assistant’s protagonist, finds genuine joy in the world of movies she suddenly inhabits. But her happiness is quickly used against her. The same leveraging of joy in exchange for pain is rampant in contemporary work culture. Should unpaid interns at a fashion company really complain about grueling working hours if they get to work with genius designers? Is art not their passion? Should adjunct academics really demand a proper wage if they get to teach the theories they love to eager students? Is academia not a vocation? Should nurses really form a union and go on strike if they can meaningfully contribute to patients’ lives? Is the latter’s gratitude not payment enough?

    Read’s third and most harrowing answer is the phenomenon of negative solidarity. Sometimes workers see the suffering they have endured as a badge of honor. They allegedly deserve social recognition because they have dragged themselves to work every day and now demand compensatory respect for their pain. This is the middle manager who, because they were bullied into submission their first day on the job, must make new colleagues suffer just as much. Or the person who refuses to join a union “because we did not have collective bargaining back when I was just barely getting by”. Or the university professor who ignores rumors about a colleague’s sexual misconduct, “because we all have stories like that from when I was a grad student”. Or the exploited workers who refuse to express solidarity with striking teachers “because they get time off during the summer while we still have to work”. Negative solidarity is the living embodiment of the sunk cost fallacy. By now, people have invested so much in a losing game that they obstinately refuse to withdraw and keep on suffering more losses. Any other people with a chance at improving their lives must be violently struck down again. If I cannot have nice things, no one can. In that way, solidarity in misery is perpetuated.

    Under the aegis of negative solidarity, people give up the struggle for a better future, choose to adapt to desperate circumstances and force others to do the same. The best they can hope for is to become entirely self-reliant and cut off all ties to others. If you do not need help from anyone, then no one can ever disappoint you. In Coming Up Short, Jennifer Silva (2015) documents how millennials struck by the Great Recession rarely reacted with public outrage or political action. Apart from a few highly localized outbursts of collective action around the Occupy Wall Street movement, most millennials did not politically organize at all. They interpreted their fate as the outcome of personal trauma and singular misfortunes in their individual biographies. They deeply mistrusted social movements or collective institutions and hoped to become completely independent to pursue individual success. An excruciatingly demanding work ethic and the endurance of endless hustles were the safest route to such independence.

    Sorry to Bother You With Some Rebellion

    After approximately 200 pages, one closes The Double Shift with a feeling of despair reminiscent of Mark Fisher’s Capitalist Realism. It is easier to imagine the end of the world than the end of the capitalist work ethic. That may come as a surprise since Read himself regularly writes that resistance is possible and that workers can always fight back against their exploitation. He even ends his book’s conclusion with a discussion of Sorry to Bother You, a film that allegedly exemplifies successful resistance against the all-pervasive ideology of work. In this film, Cassius Green, a.k.a. Cash, a black low-level employee at a telemarketing company, finds sudden success when he learns how to use his ‘white voice’ to sell products. While his co-workers are striking for better working conditions, Cash increasingly focuses on his own social climbing. Even when his girlfriend and co-worker Detroit confronts him about selling slave labor over the phone, Cash rationalizes the problem away in an impressive feat of negative solidarity: “What the fuck isn’t slave labor?” (Read 2024, 200). But all ends well, in Read’s interpretation. Ultimately, Cash is confronted with the surreal abuse of his company breeding literal workhorses from human/animal hybrids. He snaps out of the ideological dream of self-centered careerism and joins the protests. According to Read, this shows that the power of imagination and surrealist revelations of exploitation still possess the potential to wake people up from their ideological slumber.

    And yet the dread remains. The last ten pages of the book do not make up for 190 pages of doom and gloom. Read’s approach suffers from a feature common in contemporary critical theory: final-page redemptionism. It also affects the writings of, among others, Theodor Adorno, Giorgio Agamben, or Franco ‘Bifo’ Berardi. They first delineate in excruciating detail how instrumental rationality dominates contemporary culture, the state of exception has become the rule, or neuro-totalitarian fascism has killed off human sensibility. But then, at the very end of the book, a glimmer of hope appears. Where the danger is, also grows the saving power. A message in a bottle, tossed out at sea, can bring back reason from its hibernation. A real state of exception can render us all ungovernable. Chaosmic spasms of exhaustion will give birth to new rhythms of life. These redemption arcs are, however, usually cut short. While the oppressive reality of the status quo is disclosed in its minutest details, the answers presented are mere concluding gestures.

    While despair is tempting and imagining alternative futures is hard, it is a task worth pursuing. Not only for our own sanity, but also out of responsibility for those who suffer from the injustices depicted in critical theory. Structural overwork and the epidemics of burnout, loneliness, and despair claim new victims every day. Meticulously explaining how this disaster has come about and can persist amounts to philosophical defeatism. It surrenders the social imaginary of emancipation to the status quo of capitalist realism. Theory fails to fathom an escape from current practices of domination. However, theory will take you only so far. In opposition to theoretical defeatism, Albert Camus pleads for a praxis of incessant rebellion: “Rebellion is born of the spectacle of irrationality, confronted with an unjust and incomprehensible condition. […] It insists that the outrage be brought to an end” (Camus 1991, 10). The spectacle of irrationality in always-on work culture can trigger two kinds of responses: theoretical melancholy that incisively describes how our suffering was inevitable due to ideological cooptation, or practical outrage that struggles to make it stop. Rather than theorizing about how resistance is futile and only a god can save us, rebellion emanates from a practical attitude that insists that injustice be brought to an end. In The Plague, when the epidemic has overcome the city and the protagonists fail to construct an effective response, Doctor Rieux and the other inhabitants continue to develop new tactics to combat the disease. Through incessant experimenting and strategizing, they hope to find a solution. Camus writes that

    many fledgeling [sic] moralists in those days were going about our town proclaiming there was nothing to be done about it and we should bow to the inevitable. And Tarrou, Rieux, and their friends might give one answer or another, but its conclusion was always the same, their certitude that a fight must be put up, in this way or that, and there must be no bowing down. The essential thing was to save the greatest possible number of persons from dying and being doomed to unending separation. And to do this there was only one resource: to fight the plague. (Camus 2010, 128–29)

    In The Plague, the infection eventually just disappears. The collective action of doctor Rieux and his comrades had little to do with the outcome. But my argument for rebellion against quietism is not just a matter of desperately wishful thinking. Through the practical insistence of rebellion against workplace domination, alternatives can still appear where critical theory might not have expected them.

    Already in the 1960s, the Italian Marxist Romano Alquati stressed how Marxist theory tends to be excessively pessimistic about the capacities of working-class self-organization, because it fails to register the forms of ‘invisible organization’ among factory workers (Alquati 2013). This idea has recently gained new traction to explain the spontaneous protests of, for instance, Deliveroo couriers and Amazon warehouse workers (Cant 2020, 130; Delfanti 2021, 148). The answers that fail to emerge on the pages of philosophical texts often arise unexpectedly in material practices. Resistance to the capitalist work ethic is taking place in the workspace almost all the time, though it often stays just under the radar as a weapon of the weak. Workers go to the bathroom more often than their bladders can justify, they steal office equipment at an alarming pace, they hold secret meetings on the job about the Israel/Palestine conflict, and union activism in the United States is at an all-time high. I am not saying that global emancipation is just around the corner, but a complete theory of ideology should be able to explain the successes of the oppressed just as convincingly as their defeats.

    New emancipatory tactics are constantly being invented. Oftentimes, the protest movements that seem to have failed or that did not deliver the revolutionary salvation hoped for become the incubation hubs for novel experimental initiatives. After the 2016 Nuit Debout protests in Paris, for example, IT specialists wanted to directly combat the exploitative practices of food-delivery platforms like Deliveroo and Uber Eats. They developed software for a cooperative food-delivery platform and have since founded CoopCycle, an international federation of food-delivery couriers across the globe. While Deliveroo and others pressure couriers toward overwork via surreptitious and opaque algorithmic management, CoopCycle’s app opts for human dispatching so that a human manager determines whether or not tasks are doable for couriers. After the Occupy Wall Street protests, Michelle Miller, Jess Kutch, and others developed the online tool Coworker.org, with which workers can propose and discuss issues for collective action at work. Coworker.org has, for instance, been instrumental in mobilizing Starbucks workers, first for relatively small-scale actions, like demanding updates on dress code policy. These small-scale successes have meanwhile paved the way for large-scale collective action at Starbucks. Every struggle, even one that ostensibly fails, builds up a reservoir of invisible organizational power that can be reignited in novel forms at a later date.

    Do You Hear the People Sing?

    A good theory of ideology should not only explain how dominant institutions coopt popular desires and integrate them into the reproduction of the status quo. It should also highlight the invisible organizations that silently subvert or retool ruling discourses to further oppositional interests. Ideology is not just a matter of unilateral cooptation or neutralization, but also of struggle and negotiation. The ideology of work is not just an expression of one-way workplace domination, but is also a stake in a struggle over who controls the labor process. Stuart Hall once called these tactics of ideological resistance ‘cultures of survival’ (Hall 2016, 187). Even in their darkest days, the oppressed are never entirely defeated or coopted into a dominant ideology. They rather continuously negotiate with dominant ideologies to expand their interstitial freedoms. In the cracks of the system, the oppressed struggle for room to breathe.

    Hall’s prime example was enslaved Black Americans’ reinvention of Christianity as a subversive ideology. One could dismiss religion as yet another ideological state apparatus geared toward the cooptation of people’s lived experiences. Slave-owners had originally preached Christianity among the slaves to encourage obedient submission, but the slaves themselves negotiated for another interpretation. They took the Christian religion of their oppressors to give voice to their suffering and hope of liberation. While the slave-owners’ religion attempted to articulate slaves’ affective experiences into an ethic of voluntary obedience, slaves’ counter-articulation of these affects gave birth to a more combative ethos. Instead of Matthew’s “Blessed are the meek” or Saint Paul’s exhortation for slaves to consider themselves free in the Lord, the black slaves turned to Moses and his call for liberation from bondage. This counter-articulation engendered a rich musical tradition of Afro-American gospel and blues, in which the sorrows, anger, and hope of slaves’ lived experience were put into words in opposition to the status quo. As Hall perceived, “suddenly, you could hear this traditional religious music and language – a part of the dominant culture – being subverted rhythmically from underneath” (Hall 2016, 198).

    In The Black Atlantic, Paul Gilroy takes up Hall’s suggestion for a more elaborate study of the history of black music. Popular music has been famously criticized as a culture industry that neutralizes resistant affects among the people with soothing bedtime songs of comfort. But Gilroy notes, among other things, the development of call-and-response motifs in black music as a formal weapon of the weak to democratically take back control over the discourses narrating their lives:

    There is a democratic, communitarian moment enshrined in the practice of antiphony, which symbolizes and anticipates (but does not guarantee) new, non-dominating social relationships. Lines between self and other are blurred and special forms of pleasure are created as a result of the meetings and conversations that are established between one fractured, incomplete, and unfinished racial self and others. Antiphony is the structure that hosts these essential encounters. (Gilroy 2022, 79)

Ideological cooptation emerges from the ruling class’s ideological narratives translating the affective, lived experience of oppression into subjectivities that support the status quo. Yet counter-hegemonic tactics, like the antiphonies of black music, facilitate a democratic remaking of these ideological subjectivities. Through the singing of the black diaspora, the oppressed learn to experience the joy and pleasures of a culture built on communitarian solidarity.

One still recognizes this practice in songs like Kendrick Lamar’s “Alright,” the unofficial anthem of the US Black Lives Matter movement. While Lamar’s verses rap about anti-black discrimination and police brutality, Pharrell Williams’s chorus sounds like a vox populi evangelizing “We gon’ be alright”. The song reached mainstream international acclaim in 2015, when Lamar performed it at the BET Awards on top of a graffitied police car. Protesters in Cleveland had also sung the chorus to celebrate their successful struggle to stop the police from illicitly arresting a 14-year-old black boy. The video of this incident went viral online, further solidifying the song’s status as an articulation of the Afro-American lived experience of police violence. We could dismiss this as just another cooptation of popular desires into the business strategy of a culture industry avid to make money from the lived experience of the poor and oppressed. However, such reductionism does little justice to what the song has actually meant to BLM protestors. In the verses, Lamar reflects on how fame and money are luring him into a deal with the devil that can only end in depression and “going cray”. These temptations are, however, just the new “forty acres and a mule” promised to black people to coopt their desires into complacency with a social system that routinely brutalizes and kills black men and can easily destroy even powerful individuals like Lamar himself. These apocalyptic contemplations are interspersed with an uplifting chorus that asserts confidence in the future (“We gon’ be alright”). While such confidence can easily be mistaken for naïve optimism, a more faithful reading hears it as a battle cry: a politically articulated ‘We’ senses itself strong enough to make sure that everything is going to be alright. This song does not articulate yet another “Blessed are the meek”, but an oppositional “Let my people go”.

If we listen for similar statements among anti-work anthems, we can cross the ocean of Gilroy’s Black Atlantic and focus on Belgian artist Stromae’s breakout song “Alors on danse”. When asked why he wrote this song about nightclubs, he responded: “for me [the most melancholic places] are nightclubs, because arguably they entertain a kind of false euphoria. One says one goes to a nightclub to have a good time. […] Because if you did not have a good time, it was not a successful night out. […] These are people who live otherwise normal lives and I would hence argue that wherever there is extreme joy, there must also be extreme sadness” (Burnett 2017, 81; my translation). The clubbers’ manic oscillation between exhilaration and depression takes shape in the dialogue between the song’s verses and chorus. The verses document the drudgeries of everyday working life:

Qui dit proches te dit deuils

Car les problèmes ne viennent pas seuls

Qui dit crise te dit monde

Dit famine, dit tiers-monde

Et qui dit fatigue dit réveil

Encore sourd de la veille

Alors on sort pour oublier tous les problèmes.

(“Who says loved ones says mourning, for problems never come alone. Who says crisis says world, says famine, says Third World. And who says fatigue says waking, still deaf from the night before. So one goes out to forget all the problems.”)

In answer to this call for escapism in response to mourning, crisis, famine, and exhaustion, the upbeat chorus responds: “Alors on danse” (“So one dances”). It is worth noting Stromae’s choice of the impersonal pronoun ‘on’ (one) instead of the more personal ‘nous’ (we). He does not sing that “we dance” on the weekend to forget our troubles but that “one dances” to deal with one’s issues. Even the joy of dancing is no longer a lived experience we could call our own. In contrast to Lamar’s articulate We, Stromae documents the alienation of a fragmented One. Dancing at the club is just another motion one goes through as a “mere living accessory of [capital’s] machinery” (Marx 2005, 693). The impersonal pronoun articulates the alienation of an existence lived on repeat, a cynical struggle to make it until the weekend so one can binge-drink as self-administered therapy. At least then, one can party one’s misery away until Monday morning, when the ordeal starts all over again and the drunken memory of past joys has receded into oblivion. Stromae articulates the despair of an unsustainable culture built on exploitative work, yet he attacks any notion that work and exploitation constitute our liberation. Rather than articulating and coopting our dreams and desires into an ideological narrative that identifies empowerment with more work, “Alors on danse” explicitly dispels any myths about our dream careers. Work is nothing but five days of drudgery so one can pay rent and drink alcohol to forget. Stromae takes up the reality of always-on work culture and presents it in its grimmest, grayest colors. Rather than massaging our lived experience into compliance with the status quo, the back-and-forth rhythm between verses and chorus highlights the stark contrast between that experience and the meagre coping mechanisms society offers. By laying bare the contradictions of the capitalist work ethic, the song holds open the hope that one day the music might stop.

What distinguishes the approach to ideology inspired by Hall and Gilroy from Read’s is that the latter mostly focuses on how ideologies coopt the perspective of the oppressed to subsequently make it serve the oppressors’ interests. Gilroy and Hall are certainly no strangers to this analysis, but their focus is on culture and ideology as matters of struggle. The oppressed frequently corrupt the ideological refrains of their oppressors and use them against their masters. In music, for example, the oppressed articulate weapons of the weak that expose the drudgeries of work and the injustices of social domination. The music industry, in that regard, is not just an ideological state apparatus that functionally reproduces the status quo. It is an arena of negotiation, where marginalized cultures struggle to articulate their lived experiences. The affective hellscape of contemporary capitalism hits hardest in the contradictory call-and-response dynamics of “Alright” or “Alors on danse”. It is by renegotiating the dominant ideologies’ attempts at cooptation that the oppressed increase their interstitial freedoms. As Hall concludes, “the conditions within which people are able to construct subjective possibilities and new political subjectivities for themselves are not simply given in the dominant system. They are won in the practices of articulation which produce them” (Hall 2016, 206).

Tim Christiaens is assistant professor of economic ethics and philosophy at Tilburg University in the Netherlands. His research focuses on critical theory and the digitalization of work, neoliberalism, and the power of financial markets. He is the author of Digital Working Lives, published by Rowman & Littlefield in 2022, and has published papers in Theory, Culture & Society, European Journal of Social Theory, and Big Data & Society.

    References

    Alquati, Romano. 2013. ‘Struggle at FIAT’. Translated by Evan Calder Williams. Viewpoint Magazine. 26 September 2013. https://viewpointmag.com/2013/09/26/struggle-at-fiat-1964/.

    Burnett, Joanne. 2017. ‘Why Stromae Matters: Dance Music as a Master Class for the Social Issues of Our Time’. The French Review 91 (1): 79–92.

    Camus, Albert. 1991. The Rebel: An Essay on Man in Revolt. New York: Vintage Books.

    ———. 2010. The Plague. London: Penguin Books.

    Cant, Callum. 2020. Riding for Deliveroo: Resistance in the New Economy. Cambridge: Polity Press.

    Delfanti, Alessandro. 2021. The Warehouse: Workers and Robots at Amazon. London: Pluto Press.

    Gilroy, Paul. 2022. The Black Atlantic: Modernity and Double Consciousness. London: Verso.

    Hall, Stuart. 2016. Cultural Studies 1983. Durham: Duke University Press.

    ———. 2019. Essential Essays: Vol. 1. Durham: Duke University Press.

    Marx, Karl. 1992. Capital: Vol. 3. Translated by David Fernbach. London: Penguin.

    ———. 1996. Capital: Vol. 1. Translated by Samuel Moore and Edward Aveling. London: Lawrence & Wishart.

    ———. 2005. Grundrisse: Foundations of the Critique of Political Economy. Translated by Martin Nicolaus. London: Penguin Books.

    Read, Jason. 2024. The Double Shift: Spinoza and Marx on the Politics and Ideology of Work. London: Verso.

    Silva, Jennifer. 2015. Coming up Short: Working-Class Adulthood in an Age of Uncertainty. Oxford: Oxford University Press.

  • Zachary Loeb — Where We’re Going, We’ll Still Probably Need Roads (Review of Paris Marx, Road to Nowhere: What Silicon Valley Gets Wrong about the Future of Transportation)

    Zachary Loeb — Where We’re Going, We’ll Still Probably Need Roads (Review of Paris Marx, Road to Nowhere: What Silicon Valley Gets Wrong about the Future of Transportation)

    a review of Paris Marx, Road to Nowhere: What Silicon Valley Gets Wrong about the Future of Transportation (Verso, 2022)

    by Zachary Loeb

You can learn a lot about your society’s relationship to technology by looking at its streets. Are the roads filled with personal automobiles or trolley cars, bike lanes or occupied parking spaces; are there navigable sidewalks, or is this the sort of place where a car is a requirement; does a subway rumble beneath the street, or is the only sound the honking of cars stuck in traffic; are the people standing on the corner waiting for the bus or for the car they just booked through an app; or is it some kind of strange combination of many of these things simultaneously? The roadways we traverse on a regular basis can come to seem quite banal in their familiarity, yet they capture a complex tale of past decisions, current priorities, as well as a range of competing visions of the future.

    Our streets not only provide us with a literal path by which to get where we are going, they also represent an essential space in which debates about where we are going as a society play out. All of which is to say, as we hurtle down the road towards the future, it is important to pay attention to the fight for control of the steering wheel, and it’s worth paying attention to the sort of vehicle in which we find ourselves.

    In Road to Nowhere: What Silicon Valley Gets Wrong about the Future of Transportation, Paris Marx analyzes the social forces that have been responsible for making our roads (and by extension our cities, towns, and suburbs) function the way they do, while providing particular emphasis on the groups and individuals trying to determine what the roads of the future will look like. It is a cutting assessment that examines the ways in which tech companies are seeking to take over the streets, and sidewalks, as well as the space above and below them: with gig-economy drivers, self-driving cars, new tunnels, delivery robots, and much else. To the extent that technological solutions are frequently touted as the only possible response to complex social/political/economic problems, Marx moves beyond the flashy headlines to consider what those technological solutions actually look like when the proverbial rubber hits the road. In Road to Nowhere the streets and sidewalks appear as sites of political contestation, and Marx delivers an urgent warning against surrendering those spaces to big tech. After all, as Marx documents, the lords of the information superhighway are leaving plenty of flaming debris along the literal highways.

The primary focus of Road to Nowhere is the particular vision of mobility being put forth by contemporary tech companies, but Marx takes care to explore the industries and interests that had been enforcing their view of mobility long before anyone had ever held a smartphone. As Marx explains, the street and the city were not always the possession of the personal automobile; indeed, the automobile was at one time “the dominant technology that ‘disrupted’ our society” (10). The introduction of the automobile saw these vehicles careening down streets that had once been shared by many other groups, and as automobiles left destruction in their wake, the ensuing push for safety ostensibly protected pedestrians by handing the streets over to the automobile. Marx connects the rise of the personal automobile to “a much longer trend of elites remaking the city to serve their interests” (11), and emphasizes how policies favoring automobiles undermined other ways of moving about cities (including walking and streetcars). As the personal automobile grew in popularity, and as mass production made it a product available not only to the wealthy, physical spaces were further transformed such that an automobile became less and less of a luxury and more and more of a need. From the interstate highway system to the growth of suburbs to under-investment in public transit to the development of a popular mythos connecting the car to freedom—Marx argues that the auto-oriented society is not the inevitable result flowing from the introduction of the automobile, but the result of policies and priorities that gradually remade streets and cities in the automobile’s image.

Even as the automobile established its dominance in the mid-twentieth century, a new sort of technology began to appear that promised (and threatened) to further remake society: the computer. Pivoting for a moment away from the automobile, Marx considers the ideological foundations of many tech companies, with their blend of techno-utopian hopefulness and anti-government sentiment wherein “faith was also put in technology itself as the means to address social and economic challenges” (44). While the mythology of Silicon Valley often lauds the rebellious geek, hacking away in a garage, Marx highlights the ways in which Silicon Valley (and the computing industry more generally) owes its early success to a massive influx of government money. Cold War military funding was very good—indeed, essential—for the nascent computing sector. Despite the significance of government backing, Silicon Valley became a hotbed for an ideology that sneered at democratic institutions while elevating the computer (and its advocates) as the bringer(s) of societal change. Thus, the very existence of complex social/political/economic problems became evidence of the failures of democracy and proof of the need for high-tech solutions—this was not only an ahistorical and narrow worldview, but one wherein a group of mostly-wealthy, mostly-white, mostly-cis-male tech lovers saw themselves as the saviors society had been waiting for. And while this worldview was reified in various gadgets, apps, and platforms, “as tech companies seek to extend their footprint into the physical world,” this same ideology—alongside an agenda that places “growth, profits, and power ahead of the common good”—is what undergirds Silicon Valley’s mobility project (62).

One of the challenges in wrestling with tech companies’ visions is not to be swept away by the shiny high-tech vision of the future they disseminate. And one area where this can be particularly difficult is electric cars. After all, amongst the climate conscious, the electric car appears as an essential solution in the fight against climate change. Yet, beyond the fact that “electric vehicles are not a new invention” (64), the electric car appears as an almost perfect example of the ways in which tech companies attempt to advance a seemingly progressive vision of the future while further entrenching the status quo. Much of the green messaging around electric vehicles “narrowly focuses on tailpipe emissions, ignoring the harms that pervades the supply chain and the unsustainable nature of auto-oriented development” (71). Too often the electric car appears as a way for individuals of means to feel that they are doing their part to “personal responsibility” their way out of climate change, even as the continued focus on the personal automobile blocks the transition towards public transit that is needed. Furthermore, the shift towards electric vehicles does not end destructive extraction; it just shifts the extraction from fossil fuels to minerals like copper, nickel, cobalt, lithium, and coltan. The electric car risks being a way of preserving auto-centric society, and this “does not solve how the existing transportation system fuels the climate crisis and the destruction of local environments all around the world” (88).

If personal ownership of a car is such a problem, perhaps the solution is simply to have an app on your phone that lets you summon a vehicle (complete with a driver) when you need one, right? Not so fast. Companies like Uber sold themselves to the public on a promise of making cars available when needed, especially for urban dwellers who did not necessarily have a car of their own. The pitch was one of increased mobility, where those in need of a ride could easily hire one, while cash-strapped car owners could have a new opportunity to earn a few extra bucks driving in the evenings. Far from solving congestion, empowering drivers, and increasing everyone’s mobility, “the Uber model adds vehicles to the road and creates more traffic, especially since the app incentivizes drivers to be active during peak times when traffic is already backed up” (99). Despite claims that their app-based services would solve a host of issues, Uber (and its ilk) have added to urban congestion, failed to provide their drivers with a stable income, and not truly increased the mobility options for underserved communities.

If gig-drivers wind up being such an issue, why not try to construct a world where drivers are not necessary? Thus, perhaps few ideas related to the future of mobility have as firm a grasp on the popular imagination as the self-driving car, a fantasy that seems straight out of science fiction. And with good reason: what a science fiction writer can dream up, and what a special effects team can mock up for a movie, face serious obstacles in the real world. The story of tech companies and autonomous vehicles is one of grandiose hype (that often generates numerous glowing headlines), followed by significantly diminished plans once the challenges of introducing self-driving cars are recognized. While much of the infrastructure we encounter is built with automobiles in mind, autonomous cars require a variety of infrastructure that does not currently exist. Just as “automobiles required a social reconstruction in addition to a physical reconstruction, so too will autonomous vehicles” (125), and this will entail transforming infrastructure and habits that have been built up over decades. Attempts to introduce autonomous vehicles have revealed the clash between the tech company vision of the world and the complexities of the actually existing world—which is a major reason why many tech companies are quietly backing away from the exuberance with which they once hyped autonomous cars.

Well, if the already existing roads are such a challenge, why not think abstractly? Instead of looking at the road, look above the road and below the road! Thus, plans such as the Boring Company’s proposed tunnels, and ideas about “flying cars,” seek to get around many of the challenges the tech industry is encountering in the streets by attempting to capitalize on seemingly unused space. At first glance, such ideas may seem like clear examples of the sort of “out of the box thinking” for which tech companies are famed, yet “the span of time between the initial bold claims of prominent tech figures and the general realization that they are fraudulent appears to be shrinking” (159). And once more, in contrast to the original framing that seeks to treat new tunnels and flying cars as emancipatory routes, what becomes clear is that these are just another area in which wealthy tech elites are fantasizing about ways of avoiding getting stuck in traffic with the hoi polloi.

Much of the history of the automobile that Marx recounts involves pedestrians being deprived of more and more space, and this is a story that continues as new battles for the sidewalk intensify. As with other tech company interventions in mobility, micromobility solutions that cover sidewalks in scooters and bikes rentable via app present themselves with a veneer of green accessibility. Yet littering cities with cheap bikes and scooters that wear out quickly while clogging the sidewalks turns out to be just another service “designed to benefit the company” without genuinely assessing the mobility needs of particular communities (166). Besides, all of those sidewalk scooters must also compete for space with swarms of delivery robots that make sidewalks more difficult to use.

From the electric car to the app-summoned chauffeur to the autonomous car to the flying car, tech companies have no shortage of high-tech ideas for the future of mobility. And yet, “the truth is that when we look at the world that is actually being created by the tech industry’s interventions, we find that the bold promises are in fact a cover for a society that is both more unequal and one where that inequality is even more fundamentally built into the infrastructure and services we interact with every single day” (185). While the built environment is filled with genuine mobility issues, the solutions put forward by tech companies ignore the complexity of how these issues came about in favor of techno-fixes designed to favor tech companies’ bottom lines while simultaneously feeding them new data streams to capitalize on. The gleaming cities envisioned by tech elites and their companies may be broadcast to all, but these cities are playgrounds for the wealthy tech elite, not for the rest of us.

The hope that tech companies will come along and sort everything out with some sort of nifty high-tech program speaks to a lack of faith in societies’ ability to tackle the complex issues they face. Yet, to make mobility work for everyone, what is essential is not to flee from politics, but to truly address politics. The tech companies are working to reshape our streets and cities to better fit their needs, and this demands that people counter by insisting that their streets and cities be made to actually meet people’s needs. Instead of looking to cities with roads clogged with Ubers and sidewalks blocked by broken scooters, we need to be paying attention to the cities that have devoted resources (and space) to pedestrians while improving and expanding public transit. The point is not to reject technology but to reject the tech companies’ narrow definition of what technology is and how it can be used: “we need to utilize technology where it can serve us, while ensuring power remains firmly in the hands of a democratic public” (223).

After all, “better futures are possible, but they will not be delivered through technological advancement alone” (225). We can no longer sit idle in the passenger seat; we need to take the wheel, and the wheels.

    ***

Contrary to its somewhat playful title, Road to Nowhere lays out a very clear case that Silicon Valley’s vision of the future of mobility is in fact a road to somewhere—the problem is that it’s not a good somewhere. While the excited pronouncements of tech CEOs (and the oft-uncritical coverage of those pronouncements) may evoke images of gleaming high-tech utopias, a more critical and grounded assessment reveals these pipe dreams to be unrealistic fantasies mixed with ideas designed primarily to meet the needs of tech CEOs over the genuine mobility needs of most people. As Paris Marx makes clear throughout the chapters of Road to Nowhere, it is essential to stop taking the plans of tech companies at face value and to instead do the discomforting work of facing up to the realities of these plans. The way our streets and cities have been built certainly presents a range of very real problems to solve, but in the choice of which problems to address it makes a difference whether the challenges being considered are those facing a minimum-wage worker or a billionaire mogul furious about sitting in traffic. Or, to put it somewhat differently: there are flying cars in the movie Blade Runner, but that does not mean we should attempt to build that world.

Road to Nowhere: What Silicon Valley Gets Wrong about the Future of Transportation provides a thoughtful analysis and impassioned denunciation of Silicon Valley’s mobility efforts up to this point, and pivots from this consideration of the past and the present to cast doubt on Silicon Valley’s future efforts. Throughout the book, Marx writes with the same punchy eloquence that has made Marx such a lively host of the Tech Won’t Save Us podcast. And while Marx has staked out an important space in the world of contemporary tech critique thanks to that podcast, this book makes it clear that Marx is not only a dynamic interviewer of other critics, but a vital critic in their own right. With its wide-ranging analysis, and clear consideration of the route we find ourselves on unless we change course, Road to Nowhere is an important read for those concerned with where Silicon Valley is driving us.

The structure of the book provides a clear argument that briskly builds momentum, and even as the chapters focus on specific topics, they flow seamlessly from one to the next. Having started by providing a quick history of the auto-centric city and the roots of Silicon Valley’s ideology, Marx’s chapters follow a clear path through mobility issues. If the problem is pollution, why not electric cars? If the problem is individual cars, even electric ones, why not make it easy to summon someone else’s car? If the problem is the treatment of the drivers of those cars, why not cars without drivers? If autonomous vehicles are unrealistic because of already existing infrastructure, why not wholly new infrastructure? If creating wholly new infrastructure (below and above ground) is more difficult than it may seem, what about flooding cities with cheap bikes? Part of what makes Road to Nowhere’s critique of Silicon Valley’s ideas so successful is that Marx does not get bogged down in just one of Silicon Valley’s areas of interest, and instead provides a critique that captures that the issue is not only Silicon Valley’s response to this or that problem, but the way that Silicon Valley frames problems and envisions solutions. To the extent that the auto-centric world reflects a world remade in the shape of the automobile, Silicon Valley is currently hard at work attempting to remake the world in its own shape, and as Marx makes clear, the needs of Silicon Valley companies and the needs of people trying to get around are not the same.

At the core of Marx’s analysis is a sense that the worldview of Silicon Valley is no longer so easily confined to certain geographical boundaries in California. As the tech companies have been permitted to present themselves as the shiny saviors of society, their ideology has often overwhelmed faith in democratic solutions. Marx notes that “as the neoliberal political system gave up on bold policies in favor of managing a worsening status quo, they left the door open to techno-utopians to fill the void” (5). When people no longer believe that a democratic society can even maintain the bridges and roads, it opens up a space in which tech companies can drive into town and announce an ambitious project to remake the roads. Marx further argues that “too often, governments stand back and allow the tech industry to roll out whatever ideas its executives and engineers can dream up,” a belief undergirded by a sense that “whatever tech companies want is inevitable…and that neither governments, traditional companies, nor even the public should stand in their way” (178). Part of the danger of this sense of inevitability is that it cedes the future of mobility to the tech companies, robbing municipalities both of initiative and of the responsibility to meet the mobility needs of the people who live there. Granted, as the many failures Marx documents show, just because a tech company says that it will do something does not necessarily mean that it will be able to do it.

Published by Verso Books and written in a clear, comprehensive voice, Road to Nowhere stands as an intervention into broad discussions about the future of mobility, particularly those currently taking place on the political left. Thus, even as many readers are likely to cheer at Marx’s skewering of Musk, it is likely that many of those same readers will chafe at the book’s refusal to treat electric cars as a solution. Sure, it’s one thing to lambast Elon Musk (and by extension Tesla), but to critique electric cars as such? Here Marx makes it very clear that we cannot be taken in by too-neat techno-fixes, whether they are touted by a specific company (such as Tesla) or claimed for a certain class of technologies (electric cars). As Marx makes clear, all of the minerals in those electric cars come from somewhere, and what’s more, the issues that we face (in mobility and environmental terms alike) are not simply the result of one particular technology (such as the gas-powered car) but of the way in which we have built our societies around certain technologies and the infrastructure those technologies require. The matter of mobility is therefore about which questions we are willing to ask, and about recognizing that we need to be asking a different set of questions.

Road to Nowhere is at its best when Marx does this work by moving past the particular tech companies to consider the deeper matters of the underlying technologies. Certainly, readers of the book will find plenty of consideration of Tesla and Uber (alongside their famous leaders), but the strength of Road to Nowhere is that the book does not act as though the problem is simply Tesla or Uber. Rather, Marx considers the way in which the problem forces us to think about automobiles themselves, about the long history of automobiles, and about the ways in which so much physical infrastructure has been built to prioritize the use of automobiles. This is, obviously, not to give Uber or Tesla a pass—but Marx does the essential work of emphasizing that this isn’t just about a handful of tech companies and their bombastic CEOs; this is a question about the ways in which societies orient themselves around particular sets of technologies. And Marx’s response is not a call for a return to some romanticized pastoral landscape, but an argument in favor of placing the needs of people above the needs of technologies (and the people selling those technologies). Much of our built environment has been constructed around the automobile; what if we started building that environment around the needs of the human being?

The challenge of what it would mean to construct our cities around the needs of people, rather than the needs of profit (or the needs of machines), is not a new question. And while Marx briefly considers some past figures who have wrestled with this matter—such as Jane Jacobs and Murray Bookchin—it might have been worthwhile to spend a little more time engaging with past critics. At the risk of becoming too much of a caricature of myself as a reviewer, it does seem like an unfortunate missed opportunity in a book about technology and cities not to engage with the prominent technological critic Lewis Mumford, whose oeuvre includes numerous books specifically on the topic of technology and cities (he won the National Book Award for The City in History). These matters of cities, speed, and vehicles were topics with which many other critics of technology engaged in the twentieth century. Indeed, the rise of the auto-centric society has had its critics all along the way, and it could have been fascinating to engage with more of those figures. Marx certainly makes a strong case for the ways in which Silicon Valley’s designs on the city are informed by its particular ideology, but engaging more closely with earlier critics of technology could have opened up other spaces for considering broader problems about ideologies surrounding technology that predate Silicon Valley. Of course, it is unfair to criticize an author for the book they did not write, and the intention is not to take away from Marx’s important book—but contemporary criticism of technology has much to gain not just from the history of technology but from the history of technological criticism.

Road to Nowhere is a challenging book in the best sense of that word, for it discomforts the reader and pushes them to see the world around them in a new light. Marx achieves this particularly well by refusing to be taken in by easy solutions, and by recognizing that even as techno-fixes may be the standard offering from Silicon Valley, a belief in such fixes permeates well beyond the pitches of tech firms. Nevertheless, Marx is also clear in recognizing that even as many of our problems flow from and have been exacerbated by technology, technology needs to be seen as part of the solution. And here, Marx is deft at considering the way in which technology represents a much more robust and wide-ranging category than the simplistic version it is often reduced to when conversations turn to “tech.” Thus, the matter is nothing so ridiculous as conversations about being “pro-technology” or “anti-technology” but recognizing “that technology is not the primary driver in creating fairer and more equitable cities and transportation systems”; what is necessary is “deeper and more fundamental change to give people more power over the decision that are made about their communities” (8). The matter is not just about technology (as such), but about the value systems embedded in particular sorts of technologies, and about recognizing that certain sets of technologies are going to be better for achieving particular social goals. After all, “the technologies unleashed by Silicon Valley are not neutral” (179), though the same is also very much true of the technologies that were unleashed before Silicon Valley. Constructing a different world thus requires us to consider not only how we can remake that world, but how we can remake our technologies. As Marx wonderfully puts it, “when we assume that technology can only develop in one way, we accept the power of the people who control that process, but there is no guarantee that their ideal world is one that truly works for everyone” (179).

You can learn a lot about your society’s relationship to technology by looking at its streets. And Road to Nowhere is a powerful reminder that those streets do not have to look the way they do, and that we have a role to play in determining what future those streets are taking us towards.

    _____

Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2o Review Digital Studies section.


  • Zachary Loeb — Is Big Data the Message? (Review of Natasha Lushetich, ed., Big Data—A New Medium?)

    Zachary Loeb — Is Big Data the Message? (Review of Natasha Lushetich, ed., Big Data—A New Medium?)

    a review of Natasha Lushetich, ed. Big Data—A New Medium? (Routledge, 2021)

    by Zachary Loeb

    When discussing the digital, conversations can quickly shift towards talk of quantity. Just how many images are being uploaded every hour, how many meticulously monitored purchases are being made on a particular e-commerce platform every day, how many vehicles are being booked through a ride-sharing app at 3 p.m. on Tuesday afternoon, how many people are streaming how many shows/movies/albums at any given time? The specific answer to the “how much?” and “how many?” will obviously vary depending upon the rest of the question, yet if one wanted to give a general response across these questions it would likely be fair to answer with some version of “a heck of a lot.” Yet from this flows another, perhaps more complicated and significant question, namely: given the massive amount of information being generated by seemingly every online activity, where does all of that information actually go, and how is that information rendered usable and useful? To this the simple answer may be “big data,” but this in turn just serves to raise the question of what we mean by “big data.”

“Big data” denotes the point at which data begins to be talked about in terms of scale, not merely gigabytes but zettabytes. And, to be clear, a zettabyte represents a trillion gigabytes—and big data is dealing with zettabytes, plural. Beyond the sheer scale of the quantity in question, considering big data “as process and product” involves a consideration of “the seven Vs: volume” (the amount of data previously generated and newly generated), “variety” (the various sorts of data being generated), “velocity” (the highly accelerated rate at which data is being generated), “variability” (the range of types of information that make up big data), “visualization” (how this data can be visually represented to a user), “value” (how much all of that data is worth, especially once it can be processed in a useful way), and “veracity” (the reliability, trustworthiness, and authenticity of the data being generated) (3). In addition to these “seven Vs” there are also the “three Hs: high dimension, high complexity, and high uncertainty” (3). Granted, “many of these terms remain debatable” (3). Big data is both “process and product” (3); its applications range from undergirding the sorts of real-time analysis that make it possible to detect viral outbreaks as they are happening, to the directions app that suggests an alternative route before you hit traffic, to the recommendation software (be it banal or nefarious) that forecasts future behavior based on past actions.

To the extent that discussions around the digital generally focus on the end results of big data, the means remain fairly occluded both from public view and from many of the discussants. And while some have accepted big data as an essential aspect of our digital lives, for many others it remains highly fraught.

As Natasha Lushetich notes, “in the arts and (digital) humanities…the use of big data remains a contentious issue not only because data architectures are increasingly determining classificatory systems in the educational, social, and medical realms, but because they reduce political and ethical questions to technical management” (4). And it is this contentiousness that is at the heart of Lushetich’s edited volume Big Data—A New Medium? (Routledge, 2021). Drawing together scholars from a variety of disciplines ranging across “the arts and (digital) humanities,” this book moves beyond an analysis of what big data is to a complex consideration of what big data could be (and may currently be in the process of becoming). In engaging with the perils and potentialities of big data, the book (as its title suggests) wrestles with the question of whether big data can be seen as constituting “a new medium.” Through engaging with big data as a medium, the contributors to the volume grapple not only with how big data “conjugates human existence” but also with how it “(re)articulates time, space, the material and immaterial world, the knowable and the unknowable; how it navigates or alters, hierarchies of importance” and how it “enhances, obsolesces, retrieves and pushes to the limits of potentiality” (8). Across four sections, the contributors grapple with big data in terms of knowledge and time, use and extraction, cultural heritage and memory, as well as people.

“Patterning Knowledge and Time” begins with a chapter by Ingrid M. Hoofd that places big data in the broader trajectory of the university’s attempt to make the whole of the world knowable. Considering how “big data renders its object of analysis simultaneously more unknowable (or superficial) and more knowable (or deep)” (18), Hoofd’s chapter examines how big data replicates and reinforces a circularity in which what becomes legitimated as knowable is precisely what can be known through the university’s (and big data’s) techniques. Following Hoofd, Franco “Bifo” Berardi provocatively engages with the power embedded in big data, treating it as an attempt to assert computerized control over a chaotic future by forcing it into a predictable model. Here big data is treated as a potential constraint wherein “the future is no longer a possibility, but the implementation of a logical necessity inscribed in the present” (43), as participation in society becomes bound up with making oneself and one’s actions legible and analyzable to the very systems that enclose one’s future horizons. Shifting towards the visual and the environmental, Abelardo Gil-Fournier and Jussi Parikka consider the interweaving of images and environments and how data impacts this. As Gil-Fournier and Parikka explore, as a result of developments in machine learning and computer vision, “meteorological changes” are increasingly “not only observable but also predictable as images” (56).

The second part of the book, “Patterning Use and Existence,” starts with Btihaj Ajana reflecting on the ways in which “surveillance technologies are now embedded in our everyday products and services” (64). By juxtaposing the biometric control of refugees with the quantified-self movement, Ajana explores the datafication of society and the differences (as well as similarities) between willing and forced participation in regimes of surveillance of the self. Highlighting a range of well-known gig-economy platforms (such as Uber, Deliveroo, and Amazon Mechanical Turk), Tim Christiaens examines the ways that “the speed of the platform’s algorithms exceeds the capacities of human bodies” (81). While offering a thorough critique of the inhuman speed imposed by gig-economy platforms and their algorithms, Christiaens also offers a hopeful argument for the possibility that, by making their software open source, some of these gig platforms could “become a vehicle for social emancipation instead of machinic subjugation” (90). While aesthetic and artistic considerations appear in earlier chapters, Lonce Wyse’s chapter pushes fully into this area by looking at the ways that deep learning systems create the sorts of works of art “that, when recognized in humans, are thought of as creative” (95). Wyse provides a rich, and yet succinct, examination of how these systems function while highlighting the sorts of patterns that emerge (sometimes accidentally) in the process of training these systems.

At the outset of the book’s third section, “Patterning Cultural Heritage and Memory,” Craig J. Saper approaches the magazine The Smart Set as an object of analysis and proceeds to zoom in and out to show what is revealed and what is obfuscated at different scales. Highlighting that “one cannot arbitrarily discount or dismiss particular types of data, big or intimate, or approaches to reading, distant or close,” Saper’s chapter demonstrates how “all scales carry intellectual weight” (124). Moving away from the academic and the artist, Nicola Horsley’s chapter reckons with the work of archivists and the ways in which their intellectual labor and the tasks of their profession have been challenged by digital shifts. Archival training teaches archivists that “the historical record, on which collective memory is based, is a process not a product” (140), a lesson archivists seek to convey in their interactions with researchers; Horsley considers the ways in which the shift away from the physical archive and towards the digital archive (wherein a researcher may never directly interact with an archivist or librarian) means this “process” risks going unseen. From the archive to the work of art, Natasha Lushetich and Masaki Fujihata’s chapter explores Fujihata’s project BeHere: The Past in the Present and how augmented reality opens up space for new artistic experience and challenges how individual memory is constructed. Through its engagement with “images obtained through data processing and digital frottage,” the BeHere project reveals “new configurations of machinically (rather than humanly) perceived existents” and can thus “shed light on that which eludes the (naked) human eye” (151).

The fourth and final section of the volume begins with Dominic Smith’s exploration of the aesthetics of big data. Referring back to the “Seven Vs” of big data, Smith argues that imagining big data as a “new medium” requires considering “how we make sense of data” in regards to both “how we produce it” and “how we perceive it” (164), a matter Smith explores through an analysis of the “surfaces and depths” of oceanic images. Though big data is closely connected with sheer scale (hence the “big”), Mitra Azar observes that “it is never enough as it is always possible to generate new data and make more comprehensive data sets” (180). Tangling with this in a visual register, Azar contrasts the cinematic point of view with that of the big-data-enabled “data double” of the individual (which is meant to stand in for that user). Considering several of his own artistic installations—Babel, Dark Matter, and Heteropticon—Simon Biggs examines the ways in which big data reveals “the everyday and trivial and how it offers insights into the dense ambient noise that is our daily lives” (192). In contrast to treating big data as a revelator of the sublime, Biggs discusses big data’s capacity to show “the infra-ordinary” and the value of seemingly banal daily details. The book concludes with Warren Neidich’s speculative gaze toward what the future of big data might portend, couched in a belief that “we are at the beginning of a transition from knowledge-based economics to a neural or brain-based economy” (207). Surveying current big data technologies and the trajectories they may suggest, Neidich forecasts “a gradual accumulation of telepathic technocueticals” such that “at some moment a critical point might be reached when telepathy could become a necessary skill for successful adaptation…similar to being able to read in today’s society” (218).

    In the introduction to the book, Natasha Lushetich grounds the discussion in a recognition that “it is also important to ask how big data (re)articulates time, space, the material and immaterial world, the knowable and the unknowable; how it navigates or alters, hierarchies of importance” (8), and over the course of this fascinating and challenging volume, the many contributors do just that.

    ***

The term big data captures the way in which massive troves of digitally sourced information are made legible and understandable. Yet one of the challenges of discussing big data is trying to figure out a way to make big data itself legible and understandable. In discussions around the digital, big data is often gestured at rather obliquely as the way to explain a lot of mysterious technological activity in the background. We may not find ourselves capable, for a variety of reasons, of prying open the various black boxes of a host of different digital systems, but stamped in large letters on the outside of each box are the words “big data.” When shopping online or using a particular app, a user may be aware that the information being gathered from their activities is feeding into big data and that the recommendations being promoted to them come courtesy of the same. Or they may be obliquely aware that there is some sort of connection between the mystery-shrouded algorithms and big data. Or the very evocation of “big,” when twinned with a recognition of surveillance technologies, may serve as a discomforting reminder of “big brother.” Or “big data” might simply sound like a non-existent episode of Star Trek: The Next Generation in which Lieutenant Commander Data is somehow turned into a giant. All of which is to say that though big data is not a new matter, the question of how to think about it (which is not the same as how to use and be used by it) remains a challenging issue.

With Big Data—A New Medium?, Natasha Lushetich has assembled an impressive group of thinkers to engage with big data in a novel way. By raising the question of big data as “a new medium,” the contributors shift the discussion away from considerations focused on surveillance and algorithms to wrestle with the ways that big data might be similar to, and distinct from, other mediums. While this shift does not represent a rejection of, or a move to ignore, the important matters related to issues like surveillance, the focus on big data as a medium raises a different set of questions. What are the aesthetics of big data? As a medium, what are the affordances of big data? And what does it mean for other mediums that, in the digital era, so many of them are themselves being subsumed by big data? After all, many of the older mediums that theorists have grown accustomed to discussing have undergone some not insignificant changes as a result of big data. And yet to engage with big data as a medium also opens up a potential space for engaging with big data that does not treat it as being wholly captured and controlled by large tech firms.

The contributors to the volume do not seem to be fully in agreement with one another about whether big data represents poison or panacea, but the chapters are clearly speaking to one another instead of shouting over each other. There are certainly some contributions to the book, notably Berardi’s, with its evocation of a “new century suspended between two opposite polarities: chaos and automaton” (44), that seem a bit more pessimistic. Other contributors, such as Christiaens, engage with the unsavory realities of contemporary data-gathering regimes but envision the ways that these can be repurposed to serve users instead of large companies. And such optimistic and pessimistic assessments come up against multiple contributions that eschew positive/negative framings in favor of an artistically minded aesthetic engagement with what it means to treat big data as a medium for the creation of works of art. Taken together, the chapters in the book provide a wide-ranging assessment of big data, one which is grounded in larger discussions around matters such as surveillance and algorithmic bias, but which pushes readers to think of big data beyond those established frameworks.

As an edited volume, one of the major strengths of Big Data—A New Medium? is the way it brings together perspectives from such a variety of fields and specialties. As part of Routledge’s “studies in science, technology, and society” series, the volume demonstrates the sort of interdisciplinary mixing that makes STS such a vital space for discussions of the digital. Granted, this very interdisciplinary richness can be as much burden as benefit, as some readers will wish there had been slightly more representation of their particular subfield, or that the scholarly techniques of a particular discipline had seen greater use. Case in point: Horsley’s contribution will be of great interest to those approaching this book from the world of libraries and archives (and information schools more generally), and some of those same readers will wish that other chapters in the book had been equally attentive to the work done by archive professionals. Similarly, those who approach the book from fields more grounded in historical techniques may wish that more of the authors had spent more time engaging with “how we got here” instead of focusing so heavily on the exploration of the present and the possible future. Of course, these are always the challenges with edited interdisciplinary volumes, and it is a major credit to Lushetich as an editor that this volume provides readers from so many different backgrounds with so much to mull over. Beyond presenting numerous perspectives on the titular question, the book is also an invitation to artists and academics to join in discussion about that titular question.

Those who are broadly interested in discussions around big data will find much in this volume of significance, and will likely find their own thinking pushed in novel directions. That being said, the book will likely be most productively read by those who are already somewhat conversant in debates around big data, the digital humanities, the arts, and STS more generally. While contributors are consistently careful in clearly defining their terms and referencing the theorists from whom they are drawing, from Benjamin to Foucault to Baudrillard to Marx to Deleuze and Guattari (to name but a few), the contributors to this book couch much of their commentary in theory, and a reader of this volume will be best able to engage with these chapters if they have at least some passing familiarity with those theorists. Many of the contributors are also clearly engaging with arguments made by Shoshana Zuboff in The Age of Surveillance Capitalism, and this book can be very productively read as critique of and complement to Zuboff’s tome. Academics in and around STS, and artists who incorporate the digital into their practice, will find that this book makes a worthwhile intervention into current discourse around big data. And though the book seems to assume a fairly academically engaged readership, it will certainly work well in graduate seminars (or advanced undergraduate classrooms)—many of the chapters stand quite well on their own, though much of the book’s strength is in the way the chapters work in tandem.

    One of the claims that is frequently made about big data is that—for better or worse—it will allow us to see the world from a fresh perspective. And what Big Data—A New Medium? does is allow us to see big data itself from a fresh perspective.

    _____

Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2o Review Digital Studies section.


  • David Gerard — Creationism on the Blockchain (review of George Gilder, Life After Google)

    David Gerard — Creationism on the Blockchain (review of George Gilder, Life After Google)

    a review of George Gilder, Life After Google: The Fall of Big Data and the Rise of the Blockchain Economy (Regnery, 2018)

    by David Gerard

    George Gilder is most famous as a conservative author and speechwriter. He also knows his stuff about technology, and has a few things to say.

    But what he has to say about blockchain in his book Life After Google is rambling, ill-connected and unconvincing — and falls prey to the fixed points in his thinking.

    Gilder predicts that the Google and Silicon Valley approach — big data, machine learning, artificial intelligence, not charging users per transaction — is failing to scale, and will collapse under its own contradictions.

The Silicon Valley giants will be replaced by a world built around cryptocurrency, blockchains, sound money … and the obsolescence of philosophical materialism — the theory that thought and consciousness need only physical reality. That last one turns out to be Gilder’s main point.

    At his best, as in his 1990 book Life After Television, Gilder explains consequences following from historical materialism — Marx and Engels’ theory that historical events emerge from economic developments and changes to the mode of production — to a conservative readership enamoured with the obsolete Great Man theory of history.

    (That said, Gilder sure does love his Great Men. Men specifically.)

    Life After Google purports to be about material forces that follow directly from technology. Gilder then mixes in his religious beliefs as, literally, claims about mathematics.

    Gilder has a vastly better understanding of technology than most pop science writers. If Gilder talks tech, you should listen. He did a heck of a lot of work on getting out there and talking to experts for this book.

    But Gilder never quite makes his case that blockchains are the solutions to the problems he presents — he just presents the existence of blockchains, then talks as if they’ll obviously solve everything.

    Blockchains promise Gilder comfort in certainty: “The new era will move beyond Markov chains of disconnected probabilistic states to blockchain hashes of history and futurity, trust and truth,” apparently.

    The book was recommended to me by a conservative friend, who sent me a link to an interview with Gilder on the Hoover Institution’s Uncommon Knowledge podcast. My first thought was “another sad victim of blockchain white papers.” You see this a lot — people tremendously excited by blockchain’s fabulous promises, with no idea that none of this stuff works or can work.

    Gilder’s particular errors are more interesting. And — given his real technical expertise — less forgivable.

    Despite its many structural issues — the book seems to have been left in dire need of proper editing — Life After Google was a hit with conservatives. Peter Thiel is a noteworthy fan. So we may need to pay attention. Fortunately, I’ve read it so you don’t have to.

    About the Author

    Gilder is fêted in conservative circles. His 1981 book Wealth and Poverty was a favourite of supply-side economics proponents in the Reagan era. He owned conservative magazine The American Spectator from 2000 to 2002.

    Gilder is frequently claimed to have been Ronald Reagan’s favourite living author — mainly in his own publicity: “According to a study of presidential speeches, Mr. Gilder was President Reagan’s most frequently quoted living author.”

    I tried tracking down this claim — and all citations I could find trace back to just one article: “The Gilder Effect” by Larissa MacFarquhar, in The New Yorker, 29 May 2000.

    The claim is one sentence in passing: “It is no accident that Gilder — scourge of feminists, unrepentant supply-sider, and now, at sixty, a technology prophet — was the living author Reagan most often quoted.” The claim isn’t substantiated further in the New Yorker article — it reads like the journalist was told this and just put it in for colour.

    Gilder despises feminism, and has described himself as “America’s number-one antifeminist.” He has written two books — Sexual Suicide, updated as Men and Marriage, and Naked Nomads — on this topic alone.

    Also, per Gilder, Native American culture collapsed because it’s “a corrupt and unsuccessful culture,” as is Black culture — and not because of, e.g., massive systemic racism.

    Gilder believes the biological theory of evolution is wrong. He co-founded the Discovery Institute in 1990, as an offshoot of the Hudson Institute. The Discovery Institute started out with papers on economic issues, but rapidly pivoted to promoting “intelligent design” — the claim that all living creatures were designed by “a rational agent,” and not evolved through natural processes. It’s a fancy term for creationism.

    Gilder insisted for years that the Discovery Institute’s promotion of intelligent design totally wasn’t religious — even as judges ruled that intelligent design in schools was promotion of religion. Unfortunately for Gilder, we have the smoking gun documents showing that the Discovery Institute was explicitly trying to push religion into schools — the leaked Wedge Strategy document literally says: “Design theory promises to reverse the stifling dominance of the materialist worldview, and to replace it with a science consonant with Christian and theistic convictions.”

    Gilder’s politics are approximately the polar opposite of mine. But the problems I had with Life After Google are problems his fans have also had. Real Clear Markets’ review is a typical example — it’s from the conservative media sphere and written by a huge Gilder fan, and he’s very disappointed at how badly the book makes its case for blockchain.

    Gilder’s still worth taking seriously on tech, because he’s got a past record of insight — particularly his 1990s books Life After Television and Telecosm.

    Life After Television

    Life After Television: The Coming Transformation of Media and American Life is why people take Gilder seriously as a technology pundit. First published in 1990, it was expanded in 1992 and again in 1994.

    The book predicts television’s replacement with computers on networks — the downfall of the top-down system of television broadcasting and the cultural hegemony it implies. “A new age of individualism is coming, and it will bring an eruption of culture unprecedented in human history.” Gilder does pretty well — his 1990 vision of working from home is a snapshot of 2020, complete with your boss on Zoom.

    You could say this was obvious to anyone paying attention — Gilder’s thesis rests on technology that had already shown itself capable of supporting the future he spelt out — but not a lot of people in the mainstream were paying attention, and the industry was in blank denial. Even Wired, a few years later, was mostly still just terribly excited that the Internet was coming at all.

    Life After Television talks way more about the fall of the television industry than the coming future network. In the present decade, it’s best read as a historical record of past visions of the astounding future.

    If you remember the first two or three years of Wired magazine, that’s the world Gilder’s writing from. Gilder mentored Wired and executive editor Kevin Kelly in its first few years, and appeared on the cover of the March 1996 edition. Journalist and author Paulina Borsook detailed Gilder’s involvement in Wired in her classic 2000 book Cyberselfish: A Critical Romp through the Terribly Libertarian Culture of High Tech (also see an earlier article of the same name in Mother Jones), which critiques his politics, including his gender politics, at length, noting that “Gilder worshipped entrepreneurs and inventors and appeared to have found God in a microchip” (132-3) and describing “a phallus worship he has in common with Ayn Rand” (143).

    The only issue I have with Gilder’s cultural predictions in Life After Television is that he doesn’t mention the future network’s negative side-effects — which is a glaring miss in a world where E. M. Forster predicted social media and some of its effects in The Machine Stops in 1909.

    The 1994 edition of Life After Television goes in quite a bit harder than the 1990 edition. The book doesn’t say “Internet,” doesn’t mention the Linux computer operating system — which was already starting to be a game-changer — and only says “worldwide web” in the sense of “the global ganglion of computers and cables, the new worldwide web of glass and light.” (p23) But then there’s the occasional blinder of a paragraph, such as his famous prediction of the iPhone and its descendants:

    Indeed, the most common personal computer of the next decade will be a digital cellular phone. Called personal digital assistants, among many other coinages, they will be as portable as a watch and as personal as a wallet; they will recognise speech and navigate streets, open the door and start the car, collect the mail and the news and the paycheck, connecting to thousands of databases of all kinds. (p20)

    Gilder’s 1996 followup Telecosm is about what unlimited bandwidth would mean. It came just in time for a minor bubble in telecom stocks, because the Internet was just getting popular. Gilder made quite a bit of money in stock-picking, and so did subscribers to his newsletter — everyone’s a financial genius in a bubble. Then that bubble popped, and Gilder and his subscribers lost their shirts. But his main error was just being years early.

    So if Gilder talks tech, he’s worth paying attention to. Is he right, wrong, or just early?

    Gilder, Bitcoin and Gold

    Gilder used to publish through larger generalist publishers. But since around 2000, he’s published through small conservative presses such as Regnery, small conservative think tanks, or his own Discovery Institute. Regnery, the publisher of Life After Google, is functionally a vanity press for the US far right, famous for, among other things, promising to publish a book by US Senator Josh Hawley after Simon & Schuster dropped it due to Hawley’s involvement with the January 6th Capitol insurrection.

    Gilder caught on to Bitcoin around 2014. He told Reason that Bitcoin was “the perfect libertarian solution to the money enigma.”

    In 2015, his monograph The 21st Century Case for Gold: A New Information Theory of Money was published by the American Principles Project — a pro-religious conservative think tank that advocates a gold standard and “hard money.”

    This earlier book uses Bitcoin to argue that an economy based on gold could work in the 21st century:

    Researches in Bitcoin and other digital currencies have shown that the real source of the value of any money is its authenticity and reliability as a measuring stick of economic activity. A measuring stick cannot be part of what it measures. The theorists of Bitcoin explicitly tied its value to the passage of time, which proceeds relentlessly beyond the reach of central banks.

    Gilder drops ideas and catch-phrases from The 21st Century Case for Gold all through Life After Google without explaining himself — he just seems to assume you’re fully up on the Gilder Cinematic Universe. An editor should have caught this — a book needs to work as a stand-alone.

    Life After Google’s Theses

    The theses of Life After Google are:

    • Google and Silicon Valley’s hegemony is bad.
    • Google and Silicon Valley do capitalism wrong, and this is why they will collapse from their internal contradictions.
    • Blockchain will solve the problems with Silicon Valley.
    • Artificial intelligence is impossible, because Gödel, Turing and Shannon proved mathematically that creativity cannot result without human consciousness that comes from God.

    This last claim is the real point of the book. Gilder affirmed that this was the book’s point in an interview with WND.

    I should note, by the way, that Gödel, Turing and Shannon proved nothing of the sort. Gilder claims repeatedly that they and other mathematicians did, however.

    Marxism for Billionaires

    Gilder’s objections to Silicon Valley were reasonably mainstream and obvious by 2018. They don’t go much beyond what Clifford Stoll said in Silicon Snake Oil in 1995. And Stoll was speaking to his fellow insiders. (Gilder cites Stoll, though he calls him “Ira Stoll.”) But Gilder finds the points still worth making to his conservative audience, as in this early 2018 Forbes interview:

    A lot of people have an incredible longing to reduce human intelligence to some measurable crystallization that can be grasped, calculated, projected and mechanized. I think this is a different dimension of the kind of Silicon Valley delusion that I describe in my upcoming book.

    Gilder’s scepticism of Silicon Valley is quite reasonable … though he describes Silicon Valley as having adopted “what can best be described as a neo-Marxist political ideology and technological vision.”

    There is no thing, no school of thought, that is properly denoted “neo-Marxism.” In the wild, it’s usually a catch-all for everything the speaker doesn’t like. It’s a boo-word.

    Gilder probably realises that it comes across as inane to label the ridiculously successful billionaire and near-trillionaire capitalists of the present day as any form of “Marxist.” He attempts to justify his usage:

    Marx’s essential tenet was that in the future, the key problem of economics would become not production amid scarcity but redistribution of abundance.

    That’s not really regarded as the key defining point of Marxism by anyone else anywhere. (Maybe Elon Musk, when he’s tweeting words he hasn’t looked up.) I expect the libertarian post-scarcity transhumanists of the Bay Area, heavily funded by Gilder’s friend Peter Thiel, would be disconcerted too.

    “Neo-Marxism” doesn’t rate further mention in the book — though Gilder does use the term in the Uncommon Knowledge podcast interview. Y’know, there’s red-baiting to get in.

    So — Silicon Valley’s “neo-marxism” sucks. “It is time for a new information architecture for a globally distributed economy. Fortunately, it is on its way.” Can you guess what it is?

    You’re Doing Capitalism Wrong

    Did you know that Isaac Newton was the first Austrian economist? I didn’t. (I still don’t.)

    Gilder doesn’t say this outright. He does speak of Newton’s work in physics, as a “system of the world,” a phrase he confesses to having lifted from Neal Stephenson.

    But Gilder is most interested in Newton’s work as Master of the Mint — “Newton’s biographers typically underestimate his achievement in establishing the information theory of money on a firm foundation.”

    There is no such thing as “the information theory of money” — this is a Gilder coinage from his 2015 book The 21st Century Case for Gold.

    Gilder’s economic ideas aren’t quite Austrian economics, but he’s fond of their jargon, and remains a huge fan of gold:

    The failure of his alchemy gave him — and the world — precious knowledge that no rival state or private bank, wielding whatever philosopher’s stone, would succeed in making a better money. For two hundred years, beginning with Newton’s appointment to the Royal Mint in 1696, the pound, based on the chemical irreversibility of gold, was a stable and reliable monetary Polaris.

    I’m pretty sure this is not how it happened, and that the ascendancy of Great Britain’s pound sterling had everything to do with it being backed by a world-spanning empire, and not any other factor. But Gilder goes one better:

    Fortunately the lineaments of a new system of the world have emerged. It could be said to have been born in early September 1930, when a gold-based Reichsmark was beginning to subdue the gales of hyperinflation that had ravaged Germany since the mid-1920s.

    I am unconvinced that this quite explains Germany in the 1930s. The name of an obvious and well-known political figure, who pretty much everyone else considers quite important in discussing Germany in the 1930s, is not mentioned in this book.

    The rest of the chapter is a puréed slurry of physics, some actual information theory, a lot of alleged information theory, and Austrian economics jargon, giving the impression that these are all the same thing as far as Gilder is concerned.

    Gilder describes what he thinks is Google’s “System of the World” — “The Google theory of knowledge, nicknamed ‘big data,’ is as radical as Newton’s and as intimidating as Newton’s was liberating.” There’s an “AI priesthood” too.

    A lot of people were concerned early on about Google-like data sponges. Here’s Gilder on the forces at play:

    Google’s idea of progress stems from its technological vision. Newton and his fellows, inspired by their Judeo-Christian world view, unleashed a theory of progress with human creativity and free will at its core. Google must demur.

    … Finally, Google proposes, and must propose, an economic standard, a theory of money and value, of transactions and the information they convey, radically opposed to what Newton wrought by giving the world a reliable gold standard.

    So Google’s failures include not proposing a gold standard, or perhaps the opposite.

    Open source software is also part of this evil Silicon Valley plot — the very concept of open source. Because you don’t pay for each copy. Google is evil for participating in “a cult of the commons (rooted in ‘open source’ software)”.

    I can’t find anywhere that Gilder has commented on Richard M. Stallman’s promotion of Free Software, of which “open source” was a business-friendly politics-washed rebranding — but I expect that if he had, the explosion would have been visible from space.

    Gilder’s real problem with Google is how the company conducts its capitalism — how it applies creativity to the goal of actually making money. He seems to consider the successful billionaires of our age “neo-Marxist” because they don’t do capitalism the way he thinks they should.

    I’m reminded of Bitcoin Austrians — Saifedean Ammous in The Bitcoin Standard is a good example — who argue with the behaviour of the real-life markets, when said markets are so rude as not to follow the script in their heads. Bitcoin maximalists regard Bitcoin as qualitatively unique, unable to be treated in any way like the hodgepodge of other things called “cryptos,” and a separate market of its own.

    But the real-life crypto markets treat this as all one big pile of stuff, and trade it all on much the same basis. The market does not care about your ideology, only its own.

    Gilder mixes up his issues with the Silicon Valley ideology — the Californian Ideology, or cyberlibertarianism, as it’s variously termed in academia — with a visceral hatred of capitalists who don’t do capitalism his way. He seems to despise the capitalists who don’t do it his way more than he despises people who don’t do capitalism at all.

    (Gilder was co-author of the 1994 document “Magna Carta for the Knowledge Age” that spurred Langdon Winner to come up with the term “cyberlibertarianism” in the first place.)

    Burning Man is bad because it’s a “commons cult” too. Gilder seems to be partially mapping out the Californian Ideology from the other side.

    Gilder is outraged by Google’s lack of attention to security, in multiple senses of the word — customer security, software security, military security. Blockchain will fix all of this — somehow. It just does, okay?

    Ads are apparently dying. Google runs on ads — but they’re on their way out. People looking to buy things search on Amazon itself first, then purchase things for money — in the proper businesslike manner.

    Gilder doesn’t mention the sizable share of Amazon’s 2018 income that came from sales of advertising on its own platform. Nor does Gilder mention that Amazon’s entire general store business, which he approves of, posted huge losses in 2018, and was subsidised by Amazon’s cash-positive business line, the Amazon Web Services computing cloud.

    Gilder visits Google’s data centre in The Dalles, Oregon. He notes that Google embodies Sun Microsystems’ old slogan “The Network is the Computer,” coined by John Gage of Sun in 1984 — though Gilder attributes this insight to Eric Schmidt, later of Google, based on an email that Schmidt sent Gilder when he was at Sun in 1993.

    All successful technologies develop on an S-curve, a sigmoid function. They take off, rise in what looks like exponential growth … and then they level off. This is normal and expected. Gilder knows this. Correctly calling the levelling-off stage is good and useful tech punditry.
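
    For reference, the standard logistic sigmoid is one minimal way to write the S-curve, with L the eventual ceiling, k the growth rate, and t_0 the midpoint of takeoff:

    ```latex
    f(t) = \frac{L}{1 + e^{-k(t - t_0)}}
    ```

    For t well below t_0 the curve looks exponential; past t_0 it flattens toward L. The punditry game is calling where t_0 sits before the flattening is obvious.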

    Gilder notes the siren call temptations of having vastly more computing power than anyone else — then claims that Google will therefore surely fail. Nothing lasts forever; but Gilder doesn’t make the case for his claimed reasons.

    Gilder details Google’s scaling problems at length — but at no point addresses blockchains’ scaling problems: a blockchain open to all participants can’t scale and stay fast and secure (the “blockchain trilemma”). I have no idea how he missed this one. If he could see that Google has scaling problems, how could he not even mention that public blockchains have scaling problems?

    Gilder has the technical knowledge to be able to understand this is a key question, ask it and answer it. But he just doesn’t.

    How would a blockchain system do the jobs presently done by the large companies he’s talking about? What makes Amazon good when Google is bad? The mere act of selling goods? Gilder resorts entirely to extrapolation from axioms, and never bothers with the step where you’d expect him to compare his results to the real world. Why would any of this work?

    Gilder is fascinated by the use of Markov chains to statistically predict the next element of a series: “By every measure, the most widespread, immense, and influential of Markov chains today is Google’s foundational algorithm, PageRank, which encompasses the petabyte reaches of the entire World Wide Web.”
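
    Since PageRank-as-Markov-chain is a straightforwardly true technical claim, here’s a minimal sketch of the mechanism: pages become states, outlinks become transition probabilities, and the ranking is the chain’s stationary distribution, found by power iteration. The four-page link graph is invented for illustration.

    ```python
    # Minimal sketch: PageRank as a Markov chain, via power iteration.
    # The link graph below is invented for illustration.
    import numpy as np

    links = {          # page -> pages it links to
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],
    }
    pages = sorted(links)
    n = len(pages)
    idx = {p: i for i, p in enumerate(pages)}

    # Column-stochastic transition matrix: M[j, i] is the probability of
    # hopping from page i to page j by following a random outlink.
    M = np.zeros((n, n))
    for p, outs in links.items():
        for q in outs:
            M[idx[q], idx[p]] = 1.0 / len(outs)

    d = 0.85                     # damping: chance of following a link
    rank = np.full(n, 1.0 / n)   # start from the uniform distribution
    for _ in range(100):         # power iteration to the stationary state
        rank = (1 - d) / n + d * M @ rank

    print(dict(zip(pages, rank.round(3))))
    ```

    “Memoryless” is the operative property: the random surfer’s next hop depends only on the page it’s on now, not on the path that got it there. Hold that thought for the epilogue’s “The opposite of memoryless Markov chains is blockchains.”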

    Gilder interviews Robert Mercer — the billionaire whose Mercer Family Foundation helped bankroll Trump, Bannon, Brexit, and those parts of the alt-right that Peter Thiel didn’t fund.

    Mercer started as a computer scientist. He made his money on Markov-related algorithms for financial trading — automating tiny trades that made no human sense, only statistical sense.

    This offends Gilder’s sensibilities:

    This is the financial counterpart of Markov models at Google translating languages with no knowledge of them. Believing as I do in the centrality of knowledge and learning in capitalism, I found this fact of life and leverage absurd. If no new knowledge was generated, no real wealth was created. As Peter Drucker said, ‘It is less important to do things right than to do the right things.’

    Gilder is faced with a stupendously successful man, whose ideologies he largely concurs with, and who’s won hugely at capitalism — “Mercer and his consort of superstar scholars have, mutatis mutandis, excelled everyone else in the history of finance” — but in a way that is jarringly at odds with his own deeply-held beliefs.

    Gilder believes Mercer’s system, like Google’s, “is based on big data that will face diminishing returns. It is founded on frequencies of trading that fail to correspond to any real economic activity.”

    Gilder holds that it’s significant that Mercer’s model can’t last forever. But this is hardly a revelation — nothing lasts forever, and especially not an edge in the market. It’s the curse of hedge funds that any process that exploits inefficiencies will run out of other people’s inefficiencies in a few years, as the rest of the market catches on. Gilder doesn’t make the case that Mercer’s trick will fail any faster than it would be expected to just by being an edge in a market.

    Ten Laws of the Cryptocosm

    Chapter 5 is “Ten Laws of the Cryptocosm”. These aren’t from anywhere else — Gilder just made them up for this book.

    “Cryptocosm” is a variant on Gilder’s earlier coinage “Telecosm,” the title of his 1996 book.

    Blockchain spectators should be able to spot the magical foreshadowing term in rule four:

    The fourth rule is “Nothing is free.” This rule is fundamental to human dignity and worth. Capitalism requires companies to serve their customers and to accept their proof of work, which is money. Banishing money, companies devalue their customers.

    Rules six and nine are straight out of The Bitcoin Standard:

    The sixth rule: ‘Stable money endows humans with dignity and control.’ Stable money reflects the scarcity of time. Without stable money, an economy is governed only by time and power.

    The ninth rule is ‘Private keys are held by individual human beings, not by governments or Google.’ … Ownership of private keys distributes power.

    In a later chapter, Gilder critiques The Bitcoin Standard, which he broadly approves of.

    Gödel’s Incompetence Theorem

    Purveyors of pseudoscience frequently drop the word “quantum” or “chaos theory” to back their woo-mongering in areas that aren’t physics or mathematics. There’s a strain of doing the same thing with Gödel’s incompleteness theorems to make remarkable claims in areas that aren’t maths.

    What Kurt Gödel actually said was that if you use logic to build your mathematical theorems, you have a simple choice: either your system is incomplete, meaning you can’t prove every statement that is true, and you can’t know which of the unproven statements are true — or you introduce internal contradictions. So you can have holes in your maths, or you can be wrong.

    Gödel’s incompleteness theorems had a huge impact on the philosophy of mathematics. They seriously affected Bertrand Russell’s work on the logicism programme, to model all of mathematics as formal logic, and caused issues for Hilbert’s second problem, which sought a proof that arithmetic is consistent — that is, free of any internal contradictions.

    It’s important to note that Gödel’s theorems only apply in a particular technical sense, to particular very specific mathematical constructs. All the words are mathematical jargon, and not English.
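
    For the record, the technical statements run roughly as follows; note how many load-bearing qualifiers they carry:

    ```latex
    \textbf{First incompleteness theorem.} For any consistent, effectively
    axiomatized formal system $F$ strong enough to express basic arithmetic,
    there is a sentence $G_F$ such that
    \[ F \nvdash G_F \quad\text{and}\quad F \nvdash \lnot G_F. \]

    \textbf{Second incompleteness theorem.} No such $F$ can prove its own
    consistency:
    \[ F \nvdash \mathrm{Con}(F). \]
    ```

    Each qualifier (“consistent,” “effectively axiomatized,” “expresses basic arithmetic”) is a precise technical condition. Drop any one of them and the theorems simply don’t apply.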

    But humans have never been able to resist a good metaphor — so, as with quantum physics, chaos theory and Turing completeness, people seized upon “Gödel” and ran off in all directions.

    One particular fascination was what the theorems meant for the idea of philosophical materialism — whether interesting creatures like humans could really be completely explained by ordinary mathematics-based physics, or if there was something more in there. Gödel himself essayed haltingly in the direction of saying he thought there might be more than physics there — though he was slightly constrained by knowing what the mathematics actually said.

    Compare the metaphor abuse surrounding blockchains. Deploy a mundane data structure and a proof-of-work system to determine who adds the next bit of data, and thus provide technically-defined, constrained and limited versions of “trustlessness,” “irreversibility” and “decentralisation.” People saw these words, and attributed their favoured shade of meaning of the plain-language words to anything even roughly descended from the mundane data structure — or that claimed it would be descended from it some time in the future.
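
    To make “mundane data structure” concrete, here’s a toy sketch of the mechanism: hash-linked blocks, gated by a proof-of-work puzzle. This is an illustration of the general technique, not Bitcoin’s actual formats or parameters.

    ```python
    # Toy sketch of the "mundane data structure": each block commits to
    # its predecessor's hash, and proof-of-work rations who appends next.
    # Illustrative only; not Bitcoin's actual data formats.
    import hashlib
    import json

    DIFFICULTY = 4  # require this many leading zero hex digits

    def block_hash(block: dict) -> str:
        payload = json.dumps(block, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def mine(prev_hash: str, data: str) -> dict:
        nonce = 0
        while True:
            block = {"prev": prev_hash, "data": data, "nonce": nonce}
            if block_hash(block).startswith("0" * DIFFICULTY):
                return block  # this nonce satisfies the puzzle
            nonce += 1

    chain = [mine("0" * 64, "genesis")]
    chain.append(mine(block_hash(chain[-1]), "second block"))

    # "Irreversibility," technically defined: editing an old block changes
    # its hash, which breaks every later block's back-link.
    chain[0]["data"] = "tampered"
    print(block_hash(chain[0]) == chain[1]["prev"])  # False
    ```

    The technically-defined “irreversibility” is just that last check: rewrite an old block and the chain visibly breaks. Everything grander attributed to the structure is the metaphor talking.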

    Gilder takes Gödel’s incompleteness theorems, adds Claude Shannon on information theory, and mixes in his own religious views. He asserts that the mathematics of Shannon’s information theory and Gödel’s incompleteness theorems prove that creativity can only come from a human consciousness, created by God. Therefore, artificial intelligence is impossible.

    This startling conclusion isn’t generally accepted. Torkel Franzén’s excellent Gödel’s Theorem: An Incomplete Guide to Its Use and Abuse, chapter 4, spends several pages bludgeoning variations on this dumb and bad idea to death:

    there is no such thing as the formally defined language, the axioms, and the rules of inference of “human thought,” and so it makes no sense to speak of applying the incompleteness theorem to human thought.

    If something is not literally a mathematical “formal system,” Gödel doesn’t apply to it.

    The free Google searches and the fiat currencies are side issues — what Gilder really loathes is the very concept of artificial intelligence. It offends him.

    Gilder leans heavily on the ideas of Gregory Chaitin — one of the few mathematicians with a track record of achievement in information theory who also buys into the idea that Gödel’s incompleteness theorems may disprove philosophical materialism. Of the few people convinced by Chaitin’s arguments, most happen to have matching religious beliefs.

    It’s one thing to evaluate technologies according to an ethical framework informed by your religion. It’s quite another to make technological pronouncements directly from your religious views, and to claim mathematical backing for your religious views.

    Your Plastic Pal Who’s Fun to Be With

    Chapter 7 talks about artificial intelligence, and throwing hardware at the problem of machine learning. But it’s really about Gilder’s loathing of the notion of a general artificial intelligence that would be meaningfully comparable to a human being.

    The term “artificial intelligence” has never denoted any particular technology — it’s the compelling science-fictional vision of your plastic pal who’s fun to be with, especially when he’s your unpaid employee. This image has been used through the past few decades to market a wide range of systems that do a small amount of the work a human might otherwise do.

    But throughout Life After Google, Gilder conflates the hypothetical concept of human-equivalent general artificial intelligence with the statistical machine learning products that are presently marketed as “artificial intelligence.”

    Gilder’s next book, Gaming AI: Why AI Can’t Think but Can Transform Jobs (Discovery Institute, 2020), confuses the two somewhat less — but still hammers on his completely wrong ideas about Gödel.

    Gilder ends the chapter with three paragraphs setting out the book’s core thesis:

    The current generation in Silicon Valley has yet to come to terms with the findings of von Neumann and Gödel early in the last century or with the breakthroughs in information theory of Claude Shannon, Gregory Chaitin, Anton Kolmogorov, and John R. Pierce. In a series of powerful arguments, Chaitin, the inventor of algorithmic information theory, has translated Gödel into modern terms. When Silicon Valley’s AI theorists push the logic of their case to explosive extremes, they defy the most crucial findings of twentieth-century mathematics and computer science. All logical schemes are incomplete and depend on propositions that they cannot prove. Pushing any logical or mathematical argument to extremes — whether ‘renormalized’ infinities or parallel universe multiplicities — scientists impel it off the cliffs of Gödelian incompleteness.

    Chaitin’s ‘mathematics of creativity’ suggests that in order to push the technology forward it will be necessary to transcend the deterministic mathematical logic that pervades existing computers. Anything deterministic prohibits the very surprises that define information and reflect real creation. Gödel dictates a mathematics of creativity.

    This mathematics will first encounter a major obstacle in the stunning successes of the prevailing system of the world not only in Silicon Valley but also in finance.

    There’s a lot to unpack here. (That’s an academic jargon phrase meaning “yikes!”) But fundamentally, Gilder believes that Gödel’s incompleteness theorems mean that artificial intelligence can’t come up with true creativity. Because Gilder is a creationist.

    The only place I can find Chaitin using a phrase akin to “mathematics of creativity” is in his 2012 book of intelligent design advocacy, Proving Darwin: Making Biology Mathematical, which Gilder cites. Chaitin writes:

    To repeat: Life is plastic, creative! How can we build this out of static, perfect mathematics? We shall use postmodern math, the mathematics that comes after Gödel, 1931, and Turing, 1936, open not closed math, the math of creativity, in fact.

    Whenever you see Gilder talk about “information theory,” remember that he’s using the special creationist sense of the term — a claim that biological complexity without God pushing it along would require new information being added, and that this is impossible.

    Real information theory doesn’t say anything of the sort — the creationist version is a made-up pseudotheory, developed at the Discovery Institute. It’s the abuse of a scientific metaphor to claim that a loose analogy from an unrelated field is a solid scientific claim.
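
    Real information theory, for the record, is a theory of quantified uncertainty. Shannon’s entropy measures the average surprise of a message source, in bits:

    ```latex
    H(X) = -\sum_{x} p(x) \log_2 p(x)
    ```

    It’s a statement about probability distributions over symbols in a channel. It says nothing about biology, minds, or where “new information” is or isn’t permitted to come from.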

    Gilder’s doing the thing that bitcoiners, anarchocapitalists and neoreactionaries do — where they ask a lot of the right questions, but come up with answers that are completely on crack, based on abuse of theories that they didn’t bother understanding.

    Chapter 9 is about libertarian transhumanists of the LessWrong tendency, at the 2017 Future of Life conference on hypothetical future artificial intelligences, hosted by physicist Max Tegmark.

    Eliezer Yudkowsky, the founder of LessWrong, isn’t named or quoted, but the concerns are all reheated Yudkowsky: that a human-equivalent general artificial intelligence will have intelligence but not human values, will rapidly increase its intelligence, and thus its power, vastly beyond human levels, and so will doom us all. Therefore, we must program artificial intelligence to have human values — whatever those are.

    Yudkowsky is not a programmer, but an amateur philosopher. His charity, the Machine Intelligence Research Institute (MIRI), does no programming, and its research outputs are occasional papers in mathematics. Until recently, MIRI was funded by Peter Thiel, but it’s now substantially funded by large Ethereum holders.

    Gilder doesn’t buy Yudkowsky’s AI doomsday theory at all — he firmly believes that artificial intelligence cannot form a mind because, uh, Gödel: “The blind spot of AI is that consciousness does not emerge from thought; it is the source of it.”

    Gilder doesn’t mention that this is because, as a creationist, he believes that true intelligence lies in souls. But he does say “The materialist superstition is a strange growth in an age of information.” So this chapter turns into an exposition of creationist “information theory”:

    This materialist superstition keeps the entire Google generation from understanding mind and creation. Consciousness depends on faith—the ability to act without full knowledge and thus the ability to be surprised and to surprise. A machine by definition lacks consciousness. A machine is part of a determinist order. Lacking surprise or the ability to be surprised, it is self-contained and determined.

    That is: Gilder defines consciousness as whatever it is a machine cannot have, therefore a machine cannot achieve consciousness.

    Real science shows that the universe is a singularity and thus a creation. Creation is an entropic product of a higher consciousness echoed by human consciousness. This higher consciousness, which throughout human history we have found it convenient to call God, endows human creators with the space to originate surprising things.

    You will be unsurprised to hear that “real science” does not say anything like this. But that paragraph is the closest Gilder comes in this book to naming the creationism that drives his outlook.

    The roots of nearly a half-century of frustration reach back to the meeting in Königsberg in 1930, where von Neumann met Gödel and launched the computer age by showing that determinist mathematics could not produce creative consciousness.

    You will be further unsurprised to hear that von Neumann and Gödel never produced a work saying any such thing.

    We’re nine chapters in, a third of the way through the book, and someone from the blockchain world finally shows up — and, indeed, the first appearance of the word “blockchain” in the book at all. Vitalik Buterin, founder of Ethereum and MIRI’s largest individual donor, attends Tegmark’s AI conference: “Buterin succinctly described his company, Ethereum, launched in July 2015, as a ‘blockchain app platform.’”

    The blockchain is “an open, distributed, unhackable ledger devised in 2008 by the unknown person (or perhaps group) known as ‘Satoshi Nakamoto’ to support his cryptocurrency, bitcoin.” This is the closest Gilder comes at any point in the book to saying what a blockchain in fact is.

    Gilder says the AI guys are ignoring the power of blockchain — but they’ll get theirs, oh yes they will:

    Google and its world are looking in the wrong direction. They are actually in jeopardy, not from an all-powerful artificial intelligence, but from a distributed, peer-to-peer revolution supporting human intelligence — the blockchain and new crypto-efflorescence … Google’s security foibles and AI fantasies are unlikely to survive the onslaught of this new generation of cryptocosmic technology.

    Gilder asserts later in the book:

    They see the advance of automation, machine learning, and artificial intelligence as occupying a limited landscape of human dominance and control that ultimately will be exhausted in a robotic universe — Life 3.0. But Charles Sanders Peirce, Kurt Gödel, Alonzo Church, Alan Turing, Emil Post, and Gregory Chaitin disproved this assumption on the most fundamental level of mathematical logic itself.

    These mathematicians still didn’t do any such thing.

    Gilder’s forthcoming book Life after Capitalism (Regnery, 2022), with a 2021 National Review essay as a taster, asserts that his favoured mode of capitalism will reassert itself. Its thesis invokes Gilder’s notions of what he thinks information theory says.

    How Does Blockchain Do All This?

    Gilder has explained the present-day world, and his problems with it. The middle section of the book then goes through several blockchain-related companies and people who catch Gilder’s attention.

    It’s around here that we’d expect Gilder to start explaining what the blockchain is, how it works, and precisely how it will break the Google paradigm of big data, machine learning and artificial intelligence — the way he did when talking about the downfall of television.

    Gilder doesn’t even bother — he just starts talking about bitcoin and blockchains as Google-beaters, and carries through on the assumption that this is understood.

    But he can’t get away with this — he claims to be making a case for the successor to the Google paradigm, a technological case … and he just doesn’t ever do so.

    By the end of this section, Gilder seems to think he’s made his point clear that Google is having trouble scaling up — because they don’t charge a micro-payment for each interaction, or something — therefore various blockchain promises will win.

    The trouble with this syllogism is that the second part doesn’t follow. Gilder presents blockchain projects he thinks have potential — but that’s all. He makes the first case, and just doesn’t make the second.

    Peter Thiel Hates Universities Very Much

    Instead, let’s go to the 1517 Fund — “led by venture capitalist-hackers Danielle Strachman and Mike Gibson and partly financed by Peter Thiel.” Gilder is also a founding partner.

    Gilder is a massive Thiel fan, calling him “the master investor-philosopher Peter Thiel”:

    Thiel is the leading critic of Silicon Valley’s prevailing philosophy of ‘inevitable’ innovation. [Larry] Page, on the other hand, is a machine-learning maximalist who believes that silicon will soon outperform human beings, however you want to define the difference.

    Thiel is a fan of Gilder, and Life After Google, in turn.

    The 1517 Fund’s name comes from “another historic decentralization” — 31 October 1517 was the day that Martin Luther put up his ninety-five theses on a church door in Wittenberg.

    The 1517 team want to take down the government conspiracy of paperwork university credentials, which ties into the fiat-currency-based system of the world. Peter Thiel offers Thiel Fellowships, where he pays young geniuses not to go to college. Vitalik Buterin, founder of Ethereum, got a Thiel Fellowship.

    1517 also invests in the artificial intelligence stuff that Gilder derided in the previous section, but let’s never mind that.

    The Universidad Francisco Marroquín in Guatemala is a university for Austrian and Chicago School economics. Gilder uses UFM as a launch pad for a rant about US academia, and the 1517 Fund’s “New 95” theses about how much Thiel hates the US university system. Again: they ask some good questions, but their premises are bizarre, and their answers are on crack.

    Fictional Evidence

    Gilder rambles about author Neal Stephenson, who he’s a massive fan of. The MacGuffin of Stephenson’s 1999 novel Cryptonomicon is a cryptographic currency backed by gold. Stephenson’s REAMDE (2011) is set in a Second Life-style virtual world whose currency is based on gold, and which includes something very like Bitcoin mining:

    Like gold standards through most of human history — look it up — T’Rain’s virtual gold standard is an engine of wealth. T’Rain prospers mightily. Even though its money is metafictional, it is in fact more stable than currencies in the real world of floating exchange rates and fiat money.

    Thus, fiction proves Austrian economics correct! Because reality certainly doesn’t — which is why Ludwig von Mises repudiated empirical testing of his monetary theories early on.

    Is There Anything Bitcoin Can’t Do?

    Gilder asserts that “Bitcoin has already fostered thousands of new apps and firms and jobs.” His example is cryptocurrency mining, which is notoriously light on labour requirements. Even as of 2022, the blockchain sector employed 18,000 software developers — or 0.07% of all developers.

    “Perhaps someone should be building an ark. Or perhaps bitcoin is our ark — a new monetary covenant containing the seeds of a new system of the world.” I wonder why the story of the ark sprang to his mind.

    One chapter is a dialogue, in which Gilder speaks to an imaginary Satoshi Nakamoto, Bitcoin’s pseudonymous creator, about how makework — Bitcoin mining — can possibly create value. “Think of this as a proposed screenplay for a historic docudrama on Satoshi. It is based entirely on recorded posts by Satoshi, interlarded with pleasantries and other expedients characteristic of historical fictions.”

    Gilder fingers cryptographer Nick Szabo as the most likely candidate for Bitcoin’s pseudonymous creator, Satoshi Nakamoto — “the answer to three sophisticated textual searches that found Szabo’s prose statistically more akin to Nakomoto’s than that of any other suspected Satoshista.”

    In the blockchain world, any amazing headline that would turn the world upside-down were it true is unlikely to be true. Gilder has referenced a CoinDesk article, which references research from Aston University’s Centre for Forensic Linguistics.

    I tracked this down to an Aston University press release. The press release does not link to any research outputs — the “study” was an exercise that Jack Grieve at Aston gave his final-year students, then wrote up as a splashy bit of university press-release-ware.

    The press release doesn’t make its case either: “Furthermore, the researchers found that the bitcoin whitepaper was drafted using Latex, an open-source document preparation system. Latex is also used by Szabo for all his publications.” LaTeX is used by most computer scientists anywhere for their publications — but the Bitcoin white paper was written in OpenOffice 2.4, not LaTeX.

    This press release is still routinely used by lazy writers to claim that Szabo is Satoshi, ’cos they heard that linguistic analysis says so. Gilder could have dived an inch below the surface on this remarkable claim, and just didn’t.

    Gilder then spends a chapter on Craig Wright, who — unlike Szabo — claims to be Satoshi. This is based on Andrew O’Hagan’s lengthy biographical piece on Wright, “The Satoshi Affair” for the London Review of Books, reprinted in his book The Secret Life: Three True Stories. This is largely a launch pad for how much better Vitalik Buterin’s ideas are than Wright’s.

    Blockstack

    We’re now into a list of blockchainy companies that Gilder is impressed with. This chapter introduces Muneeb Ali and his blockchain startup, Blockstack, whose pitch is a parallel internet where you own all your data, in some unspecified sense. Sounds great!

    Ali wants a two-layer network: “monolith, the predictable carriers of the blockchain underneath, and metaverse, the inventive and surprising operations of its users above.” So, Ethereum then — a blockchain platform, with applications running on top.

    Gilder recites the press release description of Blockstack and what it can do — i.e., might hypothetically do in the astounding future.

    Under its new name, Stacks, the system is being used as a platform for CityCoins — local currencies on a blockchain — which launched in the 2021 crypto bubble. MiamiCoin notably collapsed in price a few months after its 2021 launch, and the city only avoided showing a massive loss on the cryptocurrency because Stacks bailed it out on its losses.

    Brendan Eich and Brave

    Brendan Eich is famous in the technical world as one of the key visionaries behind the Netscape web browser, the Mozilla Foundation, and the Firefox web browser, and as the inventor of the JavaScript programming language.

    Eich is most famous in the non-technical world for his 2008 donation in support of Proposition 8, which wrote a ban on gay marriage into the California constitution. This donation came to light in 2012, and made international press at the time.

    Techies can get away with believing the most awful things, as long as they stay locked away in their basement — but Eich was made CEO of Mozilla in 2014, and somehow the board thought the donation against gay marriage wouldn’t immediately become 100% of the story.

    One programmer, whose own marriage had been directly messed up by Proposition 8, said he couldn’t in good conscience keep working on Firefox-related projects — and this started a worldwide boycott of Mozilla and Firefox. Eich refused to walk back his donation in any manner — though he did promise not to actively seek to violate California discrimination law in the course of his work at Mozilla, so that’s nice — and quit a few weeks later.

    Eich went off to found Brave, a new web browser that promises to solve the Internet advertising problem using Basic Attention Tokens, a token that promises a decentralised future for paying publishers that is only slightly 100% centralised in all functional respects.

    Gilder uses Eich mostly to launch into a paean to Initial Coin Offerings — specifically, in their rôle as unregistered penny stock offerings. Gilder approves of ICOs bypassing regulation, and doesn’t even mention how the area was suffused with fraud, nor the scarcity of ICOs that delivered on any of their promises. The ICO market collapsed after multiple SEC actions against these blatant securities frauds.

    Gilder also approves of Brave’s promise to combat Google’s advertising monopoly, by, er, replacing Google’s ads with Brave’s own ads.

    Goodbye Digital

    Dan Berninger’s internet phone startup Hello Digital is, or was, an enterprise so insignificant it isn’t in the first twenty companies returned by a Google search on “hello digital”. Gilder loves it.

    Berninger’s startup idea involved end-to-end non-neutral precedence for Hello Digital’s data. And the US’s net neutrality rules apparently preclude this. Berninger sued the FCC to make it possible to set up high-precedence private clearways for Hello Digital’s data on the public Internet.

    This turns out to be Berninger’s suit against the FCC to protest “net neutrality” — on which the Supreme Court denied certiorari in December 2018.

    Somehow, Skype and many other applications managed enormously successful voice-over-internet a decade previously on a data-neutral Internet. But these other systems “fail to take advantage of the spontaneous convergence of interests on particular websites. They provide no additional sources of revenue for Web pages with independent content. And they fail to add the magic of high-definition voice.” Apparently, all of this requires proprietary clearways for such data on the public network? Huge if true.

    Gilder brings up 5G mobile Internet. I think it’s supposed to be in Google’s interests? Therefore it must be bad. Nothing blockchainy here, this chapter’s just “Google bad, regulation bad”.

    The Empire Strikes Back

    Old world big money guys — Jamie Dimon, Warren Buffett, Charlie Munger, Paul Krugman — say Bitcoin is trash. Gilder maintains that this is good news for Bitcoin.

    Blockchain fans and critics — and nobody else — will have seen Kai Stinchcombe’s blog post of December 2017, “Ten years in, nobody has come up with a use for blockchain.” Stinchcombe points out that “after years of tireless effort and billions of dollars invested, nobody has actually come up with a use for the blockchain — besides currency speculation and illegal transactions.” It’s a good post, and you should read it.

    Gilder spends an entire chapter on this blog post. Some guy who wrote a blog post is a mid-level boss in this book.

    Gilder concedes that Stinchcombe’s points are hard to argue with. But Stinchcombe merely being, you know, right, is irrelevant — because, astounding future!

    Stinchcombe writes from the womb of the incumbent financial establishment, which has recently crippled world capitalism with a ten-year global recession.

    One day a bitcoiner will come up with an argument that isn’t “but what about those other guys” — but today is not that day.

    At Last, We Escape

    We’ve made it to the last chapter. Gilder summarises how great the blockchain future will be:

    The revolution in cryptography has caused a great unbundling of the roles of money, promising to reverse the doldrums of the Google Age, which has been an epoch of bundling together, aggregating, all the digital assets of the world.

    Gilder confidently asserts ongoing present-day processes that are not, here in tawdry reality, happening:

    Companies are abandoning hierarchy and pursuing heterarchy because, as the Tapscotts put it, ‘blockchain technology offers a credible and effective means not only of cutting out intermediaries, but also of radically lowering transaction costs, turning firms into networks, distributing economic power, and enabling both wealth creation and a more prosperous future.’

    If you read Don and Alex Tapscott’s Blockchain Revolution (Random House, 2016), you’ll see that they too fail to demonstrate any of these claims in the existing present rather than the astounding future. Instead, the Tapscotts spend several hundred pages talking about how great it’s all going to be potentially, and only note blockchain’s severe technical limitations in passing at the very end of the book.

    We finish with some stirring blockchain triumphalism:

    Most important, the crypto movement led by bitcoin has reasserted the principle of scarcity, unveiling the fallacy of the prodigal free goods and free money of the Google era. Made obsolete will be all the lavish Google prodigies given away and Google mines and minuses promoted as ads, as well as Google Minds fantasizing superminds in conscious machines.

    Bitcoin promoters routinely tout “scarcity” as a key advantage of their Internet magic beans — ignoring, as Gilder consistently does, that anyone can create a whole new magical Internet money by cut’n’paste, and they do. Austrian economics advocates had noted that issue ever since it started happening with altcoins in the early 2010s.

    The Google era is coming to an end because Google tries to cheat the constraints of economic scarcity and security by making its goods and services free. Google’s Free World is a way of brazenly defying the centrality of time in economics and reaching beyond the wallets of its customers directly to seize their time.

    The only way in which the Google era has been shown to be “coming to an end” is that its technologies are reaching the tops of their S-curves. This absolutely counts as an end point as Gilder describes technological innovation, and he might even be right that Google’s era is ending — but his claimed reasons have just been asserted, and not at all shown.

    By reestablishing the connections between computation, finance, and AI on the inexorable metrics of time and space, the great unbundling of the blockchain movement can restore economic reality.

    The word “can” is doing all the work there. The blockchain was nine years old at this book’s publication, and is thirteen years old now — and there’s a visible lack of progress on this front.

    Everything will apparently decentralise naturally, because at last it can:

    Disaggregated will be all the GAFAM (Google, Apple, Facebook, Amazon, Microsoft conglomerates) — the clouds of concentrated computing and commerce.

    The trouble with this claim is that the whole crypto and blockchain middleman infrastructure is full of monopolies, rentiers and central points of failure — because centralisation is always more economically efficient than decentralisation.

    We see recentralisation over and over. Bitcoin mining recentralised by 2014. Ethereum mining was always even more centralised than Bitcoin mining, and almost all practical use of Ethereum has long been dependent on ConsenSys’ proprietary Infura network. “Decentralisation” has always been a legal excuse to say “can’t sue me, bro,” and not any sort of operational reality.

    Gilder concludes:

    The final test is whether the new regime serves the human mind and consciousness. The measure of all artificial intelligence is the human mind. It is low-power, distributed globally, low-latency in proximity to its environment, inexorably bounded in time and space, and creative in the image of its creator.

    Gilder wants you to know that he really, really hates the idea of artificial intelligence, for religious reasons.

    Epilogue: The New System of the World

    Gilder tries virtual reality goggles and likes them: “Virtual reality is the opposite of artificial intelligence, which tries to enhance learning by machines. Virtual reality asserts the primacy of mind over matter. It is founded on the singularity of human minds rather than a spurious singularity of machines.”

    There’s a bit of murky restating of his theses: “The opposite of memoryless Markov chains is blockchains.” I’m unconvinced this sentence is any less meaningless with the entire book as context.

    And Another Thing!

    “Some Terms of Art and Information for Life after Google” at the end of the book isn’t a glossary — it’s a section for idiosyncratic assertions without justification that Gilder couldn’t fit in elsewhere, e.g.:

    Chaitin’s Law: Gregory Chaitin, inventor of algorithmic information theory, ordains that you cannot use static, eternal, perfect mathematics to model dynamic creative life. Determinist math traps the mathematician in a mechanical process that cannot yield innovation or surprise, learning or life. You need to transcend the Newtonian mathematics of physics and adopt post-modern mathematics — the mathematics that follows Gödel (1931) and Turing (1936), the mathematics of creativity.

    There doesn’t appear to be such a thing as “Chaitin’s Law” — all Google hits on the term are quotes of Gilder’s book.

    Gilder also uses this section for claims that only make sense if you already buy into the jargon of goldbug economics that failed in the real world:

    Economic growth: Learning tested by falsifiability or possible bankruptcy. This understanding of economic growth follows from Karl Popper’s insight that a scientific proposition must be framed in terms that are falsifiable or refutable. Government guarantees prevent learning and thus thwart economic growth.

    Summary

    Gilder is sharp as a tack in interviews. I can only hope to be that sharp when I’m seventy-nine. But Life After Google fails in important ways — ways that Regnery bothering to bless the book with an editorial axe might have remedied. Gilder should have known better, in so many directions, and so should Regnery.

    Gilder keeps making technological and mathematical claims based directly on his religious beliefs. This does none of his other ideas any favours.

    Gilder is sincere. (Apart from that time he was busted lying about intelligent design not being intended to promote religion.) I think Gilder really does believe that Gödel’s incompleteness theorems and Shannon’s information theory, as further developed by Chaitin, mathematically prove that intelligence requires the hand of God. He just doesn’t show it, and nor has anyone else — particularly not any of the names he drops.

    This book will not inform you as to the future of the blockchain. It’s worse than the typical ill-informed blockchain advocacy text, because Gilder’s track record means we expect more of him. Gilder misses key points he has no excuse for missing.

    The book may be of use in its rôle as part of what’s informing the technically incoherent blockchain dreams of billionaires. But it’s a slog.

    Those interested in blockchain — for or against — aren’t going to get anything useful from this book. Bitcoin advocates may see new avenues and memes for evangelism. Gilder fans appear disappointed so far.

    _____

    David Gerard is a writer, technologist, and leading critic of bitcoin and blockchain. He is the author of Attack of the 50-Foot Blockchain: Bitcoin, Blockchain, Ethereum and Smart Contracts (2017) and Libra Shrugged: How Facebook Tried to Take Over the Money (2020), and blogs at https://davidgerard.co.uk/blockchain/.


  • Alexander R. Galloway — Big Bro (Review of Wendy Hui Kyong Chun, Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition)

    Alexander R. Galloway — Big Bro (Review of Wendy Hui Kyong Chun, Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition)

    a review of Wendy Hui Kyong Chun, Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition (MIT Press, 2021)

    by Alexander R. Galloway

    I remember snickering when Chris Anderson announced “The End of Theory” in 2008. Writing in Wired magazine, Anderson claimed that the structure of knowledge had inverted. It wasn’t that models and principles revealed the facts of the world, but the reverse, that the data of the world spoke their truth unassisted. Given that data were already correlated, Anderson argued, what mattered was to extract existing structures of meaning, not to pursue some deeper cause. Anderson’s simple conclusion was that “correlation supersedes causation…correlation is enough.”

    This hypothesis — that correlation is enough — is the thorny little nexus at the heart of Wendy Chun’s new book, Discriminating Data. Chun’s topic is data analytics, a hard target that she tackles with technical sophistication and rhetorical flair. Focusing on data-driven tech like social media, search, consumer tracking, AI, and many other things, her task is to exhume the prehistory of correlation, and to show that the new epistemology of correlation is not liberating at all, but instead a kind of curse recalling the worst ghosts of the modern age. As Chun concludes, even amid the precarious fluidity of hyper-capitalism, power operates through likeness, similarity, and correlated identity.

    While interleaved with a number of divergent polemics throughout, the book focuses on four main themes: correlation, discrimination, authentication, and recognition. Chun deals with these four as general problems in society and culture, but also, interestingly, as specific scientific techniques. For instance, correlation has a particular mathematical meaning, as well as a philosophical one. Discrimination is a social pathology but it’s also integral to discrete rationality. I appreciated Chun’s attention to details large and small; she’s writing about big ideas — essence, identity, love and hate, what does it mean to live together? — but she’s also engaging directly with statistics, probability, clustering algorithms, and all the minutiae of data science.

    In crude terms, Chun rejects the — how best to call it? — “anarcho-materialist” turn in theory, typified by someone like Gilles Deleuze, where disciplinary power gave way to distributed rhizomes, schizophrenic subjects, and irrepressible lines of flight. Chun’s theory of power isn’t so much about tessellated tapestries of desiring machines as it is about the more strictly structuralist concerns of norm and discipline, sovereign and subject, dominant and subdominant. Big tech is the mechanism through which power operates today, Chun argues. And today’s power is racist, misogynist, repressive, and exclusionary. Power doesn’t incite desire so much as stifle and discipline it. In other words, George Orwell’s old grey-state villain, Big Brother, never vanished. He just migrated into a new villain, Big Bro, embodied by tech billionaires like Mark Zuckerberg or Larry Page.

    But what are the origins of this new kind of data-driven power? The reader learns that correlation and homophily, or “the notion that birds of a feather naturally flock together” (23), not only subtend contemporary social media platforms like Facebook, but were in fact originally developed by eugenicists like Francis Galton and Karl Pearson. “British eugenicists developed correlation and linear regression” (59), Chun notes dryly, before reminding us that these two techniques are at the core of today’s data science. “When correlation works, it does so by making the present and future coincide with a highly curated past” (52). Or as she puts it insightfully elsewhere, data science doesn’t so much anticipate the future as predict the past.
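
    To see what is being inherited here, it helps to have the statistic itself on the page. The formula below is the textbook one, not anything specific to Chun’s book: Pearson’s product-moment correlation coefficient, the measure that Galton and Pearson’s biometrics bequeathed to data science, scores how tightly two series of observations co-vary:

    $$ r = \frac{\sum_{i}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i}(x_i - \bar{x})^{2}}\,\sqrt{\sum_{i}(y_i - \bar{y})^{2}}} $$

    Every term is computed from data already collected, so the coefficient can only restate relations sedimented in its inputs, which is one way to gloss Chun’s point that correlation makes the present and future coincide with a curated past.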

    If correlation (pairing two or more pieces of data) is the first step of this new epistemological regime, it is quickly followed by some additional steps. After correlation comes discrimination, where correlated data are separated from other data (and indeed internally separated from themselves). This entails the introduction of a norm. Discriminated data are not simply data that have been paired, but measurements plotted along an axis of comparison. One data point may fall within a normal distribution, while another strays outside the norm within a zone of anomaly. Here Chun focuses on “homophily” (love of the same), writing that homophily “introduces normativity within a supposedly nonnormative system” (96).
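
    Chun’s description maps onto what a data scientist would recognize as normalization and outlier detection. As a minimal sketch, my own illustration rather than anything drawn from the book, the operation might look like this in Python, with the norm as the mean of the dataset and the “zone of anomaly” as anything more than a conventional two standard deviations away:

        # A toy version of the "discrimination" step described above:
        # measurements are plotted against a norm derived from the data itself,
        # and points that stray too far from it are flagged as anomalous.
        # The 2.0-standard-deviation threshold is an assumed convention.
        from statistics import mean, stdev

        def discriminate(data, threshold=2.0):
            """Split measurements into 'normal' and 'anomalous' groups."""
            mu, sigma = mean(data), stdev(data)
            normal, anomalous = [], []
            for x in data:
                z = (x - mu) / sigma  # distance from the norm, in standard deviations
                (anomalous if abs(z) > threshold else normal).append(x)
            return normal, anomalous

        normal, anomalous = discriminate([9.8, 10.1, 10.0, 9.9, 14.2, 10.2])
        # 14.2 lands in the zone of anomaly; the rest count as normal

    The point of the sketch is how unremarkable the operation is: the norm is not given in advance but manufactured from the data themselves, which is precisely the introduction of normativity into a supposedly nonnormative system that Chun flags.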

    The third and fourth moments in Chun’s structural condition, tagged as “authenticity” and “recognition,” complete the narrative. Once groups are defined via discrimination, they are authenticated as a positive group identity, then ultimately recognized, or we could say self-recognized, by reversing the outward-facing discriminatory force into an inward-facing act of identification. It’s a complex libidinal economy that Chun patiently elaborates over four long chapters, linking these structural moments to specific technologies and techniques such as Bayes’ theorem, clustering algorithms, and facial recognition technology.
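
    Since Bayes’ theorem does real work in the systems Chun examines, it is worth stating in its standard form (the generic formulation, not hers):

    $$ P(\text{category} \mid \text{data}) = \frac{P(\text{data} \mid \text{category})\,P(\text{category})}{P(\text{data})} $$

    Read this way, recognition is posterior inference: a system assigns a person to a category in proportion to how well their data resemble the data of those already so categorized, with the prior P(category) carrying the curated past into every new judgment.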

    A number of potential paths emerge in the wake of Chun’s work on correlation, which we can only mention in passing. One path would be toward Shane Denson’s recent volume, Discorrelated Images, on the loss of correlated experience in media aesthetics. Another would be to collide Chun’s critique of correlation in data science with Quentin Meillassoux’s critique of correlationism in philosophy, notwithstanding the significant differences between the two projects.

    Correlation, discrimination, authentication, and recognition are the manifest contents of the book as it unfolds page by page. At the same time Chun puts forward a few meta arguments that span the text as a whole. The first is about difference and the second is about history. In both, Chun reveals herself as a metaphysician and moralist of the highest order.

    First, Chun picks up a refrain familiar to feminism and anti-racist theory, that of erasure, forgetting, and ignorance. Marginalized people are erased from the archive; women are silenced; a subject’s embodiment is ignored. Chun offers an appealing catchphrase for this operation, “hopeful ignorance.” Many people in power hope that by ignoring difference they can overcome it. Or as Chun puts it, they “assume that the best way to fight abuse and oppression is by ignoring difference and discrimination” (2). Indeed this posture has long been central to political liberalism, as in John Rawls’s derivation of justice via a “veil of ignorance.” For Chun the attempt to find an unmarked category of subjectivity — through that frequently contested pronoun “we” — will perforce erase and exclude those structurally denied access to the universal. “[John Perry] Barlow’s ‘we’ erased so many people,” Chun notes in dismay. “McLuhan’s ‘we’ excludes most of humanity” (9, 15). This is the primary crime for Chun, forgetting or ignoring the racialized and gendered body. (In her last book, Updating to Remain the Same, Chun reprinted a parody of a well-known New Yorker cartoon bearing the caption “On the Internet, nobody knows you’re a dog.” The posture of ignorance, of “nobody knowing,” was thoroughly critiqued by Chun in that book, even as it continues to be defended by liberals.)

    Yet if the first crime against difference is to forget the mark, the second crime is to enforce it, to mince and chop people into segregated groups. After all, data is designed to discriminate, as Chun takes the better part of her book to elaborate. These are engines of difference and it’s no coincidence that Charles Babbage called his early calculating machine a “Difference Engine.” Data is designed to segregate, to cluster, to group, to split and mark people into micro identities. We might label this “bad” difference. Bad difference is when the naturally occurring multiplicity of the world is canalized into clans and cliques, leveraged for the machinations of power rather than the real experience of people.

    To complete the triad, Chun has proposed a kind of “good” difference. For Chun authentic life is rooted in difference, often found through marginalized experience. Her muse is “a world that resonates with and in difference” (3). She writes about “the needs and concerns of black women” (49). She attends to “those whom the archive seeks to forget” (237). Good difference is intersectional. Good difference attends to identity politics and the complexities of collective experience.

    Bad, bad, good — this is a triad, but not a dialectical one. Begin with 1) the bad tech posture of ignoring difference; followed by 2) the worse tech posture of specifying difference in granular detail; contrasted with 3) a good life that “resonates with and in difference.” I say “not dialectical” because the triad documents difference changing position rather than the position of difference changing (to paraphrase Catherine Malabou from her book on Changing Difference). Is bad difference resolved by good difference? How to tell the difference? For this reason I suggest we consider Discriminating Data as a moral tale — although I suspect Chun would balk at that adjective — because everything hinges on a difference between the good and the bad.

    Chun’s argument about good and bad difference is related to an argument about history, revealed through what she terms the “Transgressive Hypothesis.” I was captivated by this section of the book. It connects to a number of debates happening today in both theory and culture at large. Her argument about history has two distinct waves, and, following the contradictory convolutions of history, the second wave reverses and inverts the first.

    Loosely inspired by Michel Foucault’s Repressive Hypothesis, Chun’s Transgressive Hypothesis initially describes a shift in society and culture roughly coinciding with the Baby Boom generation in the late twentieth century. Let’s call it the 1968 mindset. Reacting to the oppressions of patriarchy, the grey-state threats of centralized bureaucracy, and the totalitarian menace of “Nazi eugenics and Stalinism,” liberation was found through “‘authentic transgression’” via “individualism and rebellion” (76). This was the time of the alternative, of the outsider, of the nonconformist, of the anti-authoritarian, the time of “thinking different.” Here being “alt” meant being left, albeit a new kind of left.

    Chun summons a familiar reference to make her point: the Apple Macintosh advertisement from 1984 directed by Ridley Scott, in which a scary Big Brother is dethroned by a colorful lady jogger brandishing a sledgehammer. “Resist, resist, resist,” is how Chun puts the mantra. “To transgress…was to be free” (76). Join the resistance, unplug, blow your mind on red pills. Indeed the existential choice from The Matrix — blue pill for a life of slavery mollified by ignorance, red pill for enlightenment and militancy tempered by mortal danger — acts as a refrain throughout Chun’s book. In sum the Transgressive Hypothesis “equated democracy with nonnormative structures and behaviors” (76). To live a good life was to transgress.

    But this all changed in 1984, or thereabouts. Chun describes a “reverse hegemony” — a lovely phrase that she uses only twice — where “complaints against the ‘mainstream’ have become ‘mainstreamed’” (242). Power operates through reverse hegemony, she claims: “The point is never to be a ‘normie’ even as you form a norm” (34). These are the consequences of the rise of neoliberalism, fake corporate multiculturalism, Ronald Reagan and Margaret Thatcher, but even more so Bill Clinton and Tony Blair. Think postfordism and postmodernism. Think long tails and the multiplicity of the digital economy. Think woke-washing at the CIA and Spike Lee shilling cryptocurrency. Think Hypernormalization, New Spirit of Capitalism, Theory of the Young Girl, To Live and Think Like Pigs. Complaints against the mainstream have become mainstreamed. And if power today has shifted “left,” then — Reverse Hegemony Brain go brrr — resistance to power shifts “right.” A generation ago the Q Shaman would have been a left-wing nut nattering about the Kennedy assassination. But today he’s a right-wing nut (alas still nattering about the Kennedy assassination).

    “Red pill toxicity” (29) is how Chun characterizes the responses to this new topsy-turvy world of reverse hegemony. (To be sure, she’s only the latest critic weighing in on the history of the present; other well-known accounts include Angela Nagle’s 2017 book Kill All Normies and Mark Fisher’s notorious 2013 essay “Exiting the Vampire Castle.”) And if libs, hippies, and anarchists had become the new dominant, the election of Donald Trump showed that “populism, paranoia, polarization” (77) could also reemerge as a kind of throwback to the worst political ideologies of the twentieth century. With Trump the revolutions of history — ironically, unstoppably — return to where they began, in “the totalitarian world view” (77).

    In other words, these self-styled rebels never actually disrupted anything, according to Chun. At best they used disruption as a kind of ideological distraction for the same kinds of disciplinary management structures that have existed since time immemorial. And if Foucault showed that nineteenth-century repression also entailed an incitement to discourse, Chun describes how twentieth-century transgression also entailed a novel form of management. Before it was “you thought you were repressed but in fact you’re endlessly sublating and expressing.” Now it’s “you thought you were a rebel but disruption is a standard tactic of the Professional Managerial Class.” Or as Jacques Lacan said in response to some young agitators in his seminar, vous voulez un maître, vous l’aurez. Slavoj Žižek’s rendering, slightly embellished, best captures the gist: “As hysterics, you demand a new master. You will get it!”

    I doubt Chun would embrace the word “hysteric,” a term indelibly marked by misogyny, but I wish she would, since hysteria is crucial to her Transgressive Hypothesis. In psychoanalysis, the hysteric is the one who refuses authority, endlessly and irrationally. And bless them for that; we need more hysterics in these dark times. Yet the lesson from Lacan and Žižek is not so much that the hysteric will conjure up a new master out of thin air. In a certain sense, the lesson is the reverse, that the Big Other doesn’t exist, that Big Brother himself is a kind of hysteric, that power is the very power that refuses power.

    This position makes sense, but not completely. As a recovering Deleuzian, I am indelibly marked by a kind of antinomian political theory that defines power as already heterogeneous, unlawful, multiple, anarchic, and material. However I am also persuaded by Chun’s more classical posture, where power is a question of sovereign fiat, homogeneity, the central and the singular, the violence of the arche, which works through enclosure, normalization, and discipline. Faced with this type of power, Chun’s conclusion is, if I can compress a hefty book into a single writ, that difference will save us from normalization. In other words, while Chun is critical of the Transgressive Hypothesis, she ends up favoring the Big-Brother theory of power, where authentic alternatives escape repressive norms.

    I’ll admit it’s a seductive story. Who doesn’t want to believe in outsiders and heroes winning against oppressive villains? And the story is especially appropriate for the themes of Discriminating Data: data science of course entails norms and deviations; but also, in a less obvious way, data science inherits the old anxieties of skeptical empiricism, where the desire to make a general claim is always undercut by an inability to ground generality.

    Yet I suspect her political posture relies a bit too heavily on the first half of the Transgressive Hypothesis, the 1984 narrative of difference contra norm, even as she acknowledges the second half of the narrative where difference became a revanchist weapon for big tech (to say nothing of difference as a bona fide management style). This leads to some interesting inconsistencies. For instance, Chun notes that Apple’s 1984 hammer thrower is a white woman disrupting an audience of white men. But she doesn’t say much else about the jogger being a woman, or about the rainbow flag that ends the commercial. The Transgressive Hypothesis might be the quintessential tech bro narrative but it’s also the narrative of feminism, queerness, and the new left more generally. Chun avoids claiming that feminism failed, but she’s also savvy enough to avoid saying that it succeeded. And if Sadie Plant once wrote that “cybernetics is feminization,” for Chun it’s not so clear. According to Chun the cybernetic age of computers, data, and ubiquitous networks still orients around structures of normalization: masculine, white, straight, affluent and able-bodied. Resistant to such regimes of normativity, Chun must nevertheless invent a way to resist those who were resisting normativity.

    Regardless, for Chun the conclusion is clear: these hysterics got their new master. If not immediately they got it eventually, via the advent of Web 2.0 and the new kind of data-centric capitalism invented in the early 2000s. Correlation isn’t enough — and that’s the reason why. Correlation means the forming of a general relation, if only the most minimal generality of two paired data points. And, worse, correlation’s generality will always derive from past power and organization rather than from a reimagining of the present. Hence correlation for Chun is a type of structural pessimism, in that it will necessarily erase and exclude those denied access to the general relation.

    With narrative poignancy and an attention to the ideological conditions of everyday life, Chun highlights alternative relations that might replace the pessimism of correlation. Such alternatives might take the form of a “potential history” or a “critical fabulation,” phrases borrowed from Ariella Azoulay and Saidiya Hartman, respectively. For Azoulay potential history means to “‘give an account of diverse worlds that persist’”; for Hartman, critical fabulation means “to see beyond numbers and sources” (79). Though a slim offering covering only a few pages, these references to Azoulay and Hartman indicate an appealing alternative for Chun, and she ends her book where it began, with an eloquent call to acknowledge “a world that resonates with and in difference.”

    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2006), Gaming: Essays in Algorithmic Culture (University of Minnesota, 2006); The Interface Effect (Polity, 2012), Laruelle: Against the Digital (University of Minnesota, 2014), and most recently, Uncomputable: Play and Politics in the Long Digital Age (Verso, 2021).


  • Hannah Zeavin — Glasses for the Voice (Review of Jonathan Sterne, Diminished Faculties: A Political Phenomenology of Impairment)

    Hannah Zeavin — Glasses for the Voice (Review of Jonathan Sterne, Diminished Faculties: A Political Phenomenology of Impairment)

    a review of Jonathan Sterne, Diminished Faculties: A Political Phenomenology of Impairment (Duke UP, 2022)

    by Hannah Zeavin

    Somewhere between 500,000 and over 1 million Americans, and many more people worldwide, are now living with some form of post-viral symptomatology from COVID-19—or “Long COVID.” In a pandemic first and pervasively represented by elderly death or “mild” cases no worse than the flu, there are, in reality, three true outcomes after contracting the virus: death, recovery, or long-term illness, impairment, and disability. These “long haulers” are discovering what disability activists have long known and fought against: accommodation and access are not readily forthcoming, insurance is a nightmare, and people of color and women are much less likely to have their symptoms taken seriously enough to lead to a medical diagnosis. And medical diagnosis, if received, is fraught, too. If 1 in 4 Americans is already disabled, we have been and continue to be living through what some are calling a mass disabling event, akin to a war. This situation is not limited to the circulation of a virus and its aftermath in individual persons and bodies; it extends to the conditions past and present that have produced its lethality: capitalism and its attendants, including medical redlining, environmental racism, and settler-colonialism.

    Jonathan Sterne’s Diminished Faculties: A Political Phenomenology of Impairment arrives then just in time to complicate that history via the experience of impairment (as well as its kin experiences and identities, illness and disability). As Sterne writes, “The semantic ambiguity among impairment, disability, and illness remains a constitutive feature of all three categories. They move through the same space and bump into one another, sometimes overlapping, sometimes repelling. All three are conditioned by a divergence from medical or social norms. All three are conditioned by an ideology of ability and a preference for ability and health.” Sterne’s book doesn’t just map the experiences of impairment; he also troubles the binary of disabled and able body/mind. By thinking about impairment and faculties, Sterne upends our received notion that we, somehow, are in control of our senses (or our minds, our limbs). Instead, some forms of impairment are accepted, even become norms, while others present as problems. Sterne’s book is about many kinds of impairment, and their intersections in subjects who are understood to be normative nonetheless, or even because, they’re impaired: what we think of as normal (gradual hearing loss as we work, listen to music, age) versus what is marked off as different and constitutes an unquestioned disability (e.g., childhood deafness following viral illness).

    Early in the book, Sterne quotes the disability studies adage, “you will someday join us.” This definitive book is also Sterne’s personal story of living in the matrices of illness, impairment, and disability, in the materiality of their experience as well as the cultures that contain and produce those experiences. Rather than presenting a work at the end of learning, deleting all the traces of theorization up until the point of arrival, Sterne fully tells the story of how he “joined”: from study groups to blog posts, across changes in understanding and bodily experience. Diminished Faculties therefore provides a rigorous, moving account of the experience of the normal and the pathological, the accounted-for body both disabled and abled, and the one shoved to the margins. Sterne also offers his reader the account of impairment via a political phenomenology grounded in his own story while moving slowly and responsibly beyond it to reconceive impairment theory as a theory of labor, of media, and fundamentally, of political experience.

    Sterne is a preeminent voice in media studies and the author of The Audible Past (Duke UP, 2003) and MP3: The Meaning of a Format (Duke UP, 2012). Diminished Faculties is his first book in nearly a decade, the third in a series of works that have shaped and reshaped sound studies, and the first to center his own history.

    While Diminished Faculties in this way moves beyond his previous books toward auto-theory, it also extends them: if The Audible Past begins with the “Hello” of the telephone, Diminished Faculties takes on another, amplified greeting. In 2009, Sterne was diagnosed with an aggressive case of thyroid cancer; the surgery to remove his tumor (the size of a pomegranate, as demonstrated in a drawing from S. Lochlann Jain) paralyzed one of his two vocal cords. Normal vocal cord functioning looks like, as Sterne puts it elsewhere, “a monkey crashing cymbals”; a normative voice depends on that coordinated cooperation between halves. And as he tells us, his voice may sound better, whatever that really means, to his listener (smoky and rich) on one of his worst days. But Sterne also talks for a living—teaching and delivering research—and his voice blows out, he gets exhausted. As Sterne began vocal therapy, he started to use a personal amplification device that hangs from his neck, which he has termed his “dork-o-phone.” Staying with the example of what gets made visible as impairment, Sterne tells the story of someone coming to a house party, pointing to his chest and saying, “What the fuck is that?” Sterne replies: “Glasses for my voice.” This book tries, in part, to account for this importunate reaction, reconciling a moment of surprise or frustration or intolerance with the fact that impairment is everywhere, and tracking what that reaction does to the subject who is marked as other. As Sterne writes, “Think of all the moving parts in that scenario: a subject whose body cannot match its will; but also auditors struggling to align themselves with whatever techniques the speaker is using. Everyone is trying; nobody is quite succeeding.”

    This is one way of naming the book’s method: “think of all the moving parts.” Each of its chapters weaves disability studies, auto-theory, history of science, and media history, turning the levels up or down on any particular input and frame. Diminished Faculties ushers the reader through these interlinked hermeneutics toward a redescription of impairment in the long 20th century.

    The first chapter, “Degrees of Muteness,” offers a deep consideration of the uses of phenomenology, and its methods for describing experience, centered on Sterne’s diagnosis, surgery, and its aftermath. As Sterne writes, “this book begins with consciousness of unconsciousness (or is it unconsciousness of consciousness?)” Here he also introduces a media theory of acquired impairment, arguing that “the concept of impairment is itself also a media concept. The contemporary concept of normal hearing emerged out of the idea of communication impairments and from a very specific time and place.” He moves from this study of a phenomenology of impairment into its deployment, to consider his own voice, or voices (spoken, amplified, written, authorial). Sterne then takes the dork-o-phone as an object to think with, giving us a history and experience of assistive technology and design as it interacts with other infrastructures.

    Sterne then moves from political phenomenology to breaking the normative form of a book by inserting the written guide for an imaginary exhibition, “In Search of New Vocalities.” The exhibition is accessible, designed for bodies coming from places imaginary and real, an act of care in the scene of art-going, if only in the mind. The tone of the book shifts once more for the concluding two chapters towards something more familiar from Sterne’s earlier books, here centered more squarely in STS and disability studies.

    Chapter four theorizes what Sterne identifies as “aural scarification” and what he calls normal impairments. In this chapter, Sterne joins recent accounts of the built environment—and here he focuses on our sonic environment—that argue that disability itself reveals aspects of society that hurt everyone, however unevenly. Sara Hendren’s What Can a Body Do? (Riverhead, 2020) shows how the curb on the sidewalk, for example, makes city infrastructures impassable for wheelchair users—but also, say, mothers pushing strollers, travelers with suitcases, skateboarders, and so on. Add a curb cut and suddenly movement is much more possible in urban spaces for many—not just the conventionally disabled. On the other hand, sometimes access for disabled users is granted almost by accident. Sterne provides another example: closed captioning. Initially, closed captioning was resisted by major broadcast networks precisely because it was expensive and obtrusive—and would only help a small minority. Then other spaces changed and hearing users needed to be able to see what they would otherwise listen to, in airport bars, in hospital waiting rooms, at the gym. Suddenly, D/deaf users got the captions they needed—but only because abled users wanted the same technology. Sterne calls this “crip washing”; the scholar and critic Mara Mills calls this an “assistive pretext.”

    Sterne adds to this account that we live in a physical world that is in fact designed for people who are a little bit hearing impaired. Our entire infrastructure is loud: airplanes, bathroom hand dryers, music, whether live or in earbuds. Sterne shows that in such a world it is better not to hear perfectly, and that we hear less well because we interact with this environment; being alive leads to impairment even if we start without it (“you will someday join us”). Throughout Diminished Faculties, Sterne troubles the binary of disabled and abled body/mind by putting disability into a constellation with impairment and illness. By thinking about impairment and faculties, Sterne argues that some forms of impairment are accepted, even become norms, while others are marked as problems, which separates impairment as a term even as it overlaps with disability. What then is an impairment if we expect it, if it is normal, and it can be disappeared through design? Why are other impairments made visible through these same processes? Considering impairment and disability as a norm is a revision that Sterne requires of his reader, broadening our working understanding of the built environment.

    The concluding chapter of the book offers a deft theory and history of fatigue and rest. Opening with theorizations of how we manage fatigue in relation to labor, from Taylorism to energy quantified by “spoons” as theorized by Christine Miserandino, Sterne moves his account of fatigue through and beyond a depletion model. He asks whether we can think of fatigue as something other than a loss, a depletion of energy. He argues that rather than a lack of energy, fatigue is a presence. Sterne reminds his reader throughout that fatigue is difficult to capture phenomenologically precisely because, if it is too overtly present, he cannot write it down; if it is not present enough, he cannot articulate the experience of fatigue from within. In this moment, Sterne returns to political phenomenology—including its limits. There are certain experiences—extreme fatigue being one of them—that are sometimes simply not accessible in the moment of writing.

    Impairment and fatigue are both concepts from media and the mediation of the body in society, and here are richly positioned within a history of technology and from disability studies. The two commingle, as Sterne deftly shows, to produce our lived experience of body in situ. Along the way, Sterne gives us additional experiences: an account of himself, an exhibition, and a theory to use (and a manual for how we might do it), turn to account, and even dispose of. Diminished Faculties is a lyric, genre-bending book that is forcefully argued, beautifully rendered, and will open paths for further research. It is deeply generous both to reader and future scholar, as Sterne’s work always is. But additionally, this is a book that so many have needed, and need now: a way of situating the present emergency in a much longer political history.

    _____

    Hannah Zeavin teaches in the History and English Departments at UC Berkeley. She is the author of The Distance Cure: A History of Teletherapy (2021, MIT Press). Other work is forthcoming or out from differences: A Journal of Feminist Cultural Studies, Dissent, The Guardian, n+1, Technology & Culture, and elsewhere.


  • Sue Curry Jansen and Jeff Pooley — Neither Artificial nor Intelligent (review of Crawford, Atlas of AI, and Pasquale, New Laws of Robotics)

    Sue Curry Jansen and Jeff Pooley — Neither Artificial nor Intelligent (review of Crawford, Atlas of AI, and Pasquale, New Laws of Robotics)

    a review of Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale UP, 2021) and Frank Pasquale, New Laws of Robotics: Defending Human Expertise in the Age of AI (Harvard UP, 2021)

    by Sue Curry Jansen and Jeff Pooley

    Artificial intelligence (AI) is a Faustian dream conceived in the future tense: its most ardent visionaries seek to create an enhanced form of intelligence that far surpasses the capacities of human brains. AI promises to transcend the messiness of embodiment, the biases of human cognition, and the limitations of mortality. Entering its eighth decade, AI is largely a science fiction, despite recent advances in machine learning. Yet it has captured the public imagination since its inception and acquired potent ideological cachet. Robots have become AI’s humanoid faces, as well as icons of popular culture: cast as helpful companions or agents of the apocalypse.

    The transcendent vision of artificial intelligence has educated, informed, and inspired generations of scientists, military strategists, policy makers, entrepreneurs, writers, artists, filmmakers, and marketers. However, apologists have also frequently invoked AI’s authority to mystify, intimidate, and silence resistance to its vision, teleology, and deployments. Where, for example, the threat of automation once triggered labor activism, rallying opposition to an esoteric branch of computer science research that few non-specialists understand is a rhetorical non-starter. So is campaigning for alternatives to smart apps, homes, cars, cities, borders, and bombs.

    Two remarkable new books, Kate Crawford’s Atlas of AI and Frank Pasquale’s New Laws of Robotics: Defending Human Expertise in the Age of AI, provide provocative critical assessments of artificial intelligence in clear, accessible, and engaging prose. Both books have titles that could discourage novices, but they are, in fact, excellent primers for non-specialists on what is at stake in the current ascendancy of AI science and ideology—especially if read in tandem.

    Crawford’s thesis—“AI is neither artificial nor intelligent”—cuts through the sci-fi hype to radically reground AI power-knowledge in material reality. Beginning with its environmental impact on planet Earth, her narrative proceeds vertically to demystify AI’s ways of seeing—its epistemology, methodology, and applications—and then to examine the roles of labor, ideology, the state, and power in the AI enterprise. She concludes with a coda on space and the astronautical illusions of digital billionaires. Pasquale takes a more horizontal approach, surveying AI in health care, education, media, law, policy, economics, war, and other domains. His attention is on the practical present—on the ethical dilemmas posed by current and near-future deployments of AI. His through line is that human judgment, backed by policy, should steer AI toward human ends.

    Despite these differences, Crawford and Pasquale converge on several critical points. First, they agree that AI models are skewed by economic and engineering values to the exclusion of other forms of knowledge and wisdom. Second, both endorse greater transparency and accountability in artificial intelligence design and practices. Third, they agree that AI datasets are skewed: Crawford focuses on how the use of natural language datasets, no matter how large, reproduces the biases of the populations they are drawn from, while Pasquale attends to designs that promote addictive engagement to optimize ad revenue. Fourth, both cite the residual effects of AI’s military origins on its logic, values, and rhetoric. Fifth, Crawford and Pasquale both recognize that AI’s futurist hype tends to obscure the real-world political and economic interests behind the screens—the market fundamentalism that models the world as an assembly line. Sixth, both emphasize the embodiment of intelligence, which encompasses tacit and muscle knowledge that cannot be fully extracted and abstracted by artificial intelligence modelers. Seventh, they both view artificial intelligence as a form of data-driven behaviorism, in the stimulus-response sense. Eighth, they acknowledge that AI and economic experts claim priority for their own views—a position they both reject.

    Crawford literally travels the world to map the topologies of computation, beginning in the lithium mines of Nevada, on to Silicon Valley, Indonesia, Malaysia, China, and Mongolia, and ending under personal surveillance outside of Jeff Bezos’ Blue Origin suborbital launch facility in West Texas. Demonstrating that AI is anything but artificial, she documents the physical toll it extracts from the environment. Contra the industry’s earth-friendly PR and marketing, the myth of clean tech and metaphors like ‘the Cloud,’ Crawford points out that AI systems are built upon consuming finite resources that required billions of years to take form: “we are extracting Earth’s geological history to serve a split second of contemporary technological time, building devices like the Amazon Echo and iPhone that are often designed to last only a few years.” And the Cloud itself leaves behind a gigantic carbon footprint. AI data mining is not only dependent on human miners of rare minerals, but also on human labor functioning within a “registry of power” that is unequal and exploitative—where “many valuable automated systems feature a combination of underpaid digital piece workers and customers taking on unpaid tasks to make systems function,” all the while under constant surveillance.

    While there is a deskilling of human labor, there are also what Crawford calls Potemkin AI systems, which only work because of hidden human labor—Bezos himself calls such systems “artificial artificial intelligence.” AI often doesn’t work as well as the humans it replaces, as, for example, in automated telephone consumer service lines. But Crawford reminds us that AI systems scale up: customers ‘on hold’ replace legions of customer service workers in large organizations. Profits trump service. Her chapters on data and classification strip away the scientistic mystification of AI and Big Data. AI’s methodology is simply data at scale, and it is data that is biased at inception because it is collected indiscriminately, as size, not substance, counts. A dataset extracted and abstracted from a society secured in systemic racism will, for example, produce racist results. The increasing convergence of state and corporate surveillance not only undermines individual privacy, but also makes state actors reliant on technologies that they cannot fully understand as machine learning transforms them. In effect, Crawford argues, states have made a “devil’s bargain” with tech companies that they cannot control. These technologies, developed for command-and-control military and policing functions, increasingly erode the dialogic and dialectic nature of democratic commons.

    AI began as a highly subsidized public project in the early days of the Cold War. Crawford demonstrates, however, that it has been “relentlessly privatized to produce enormous financial gains for the tiny minority at the top of the extraction pyramid.” In collaboration with Alex Campolo, Crawford has described AI’s epistemological flattening of complexity as “enchanted determinism,” whereby “AI systems are seen as enchanted, beyond the known world, yet deterministic in that they discover patterns that can be applied with predictive certainty to everyday life.”[1] In some deep learning systems, even the engineers who create them cannot interpret them. Yet, they cannot dismiss them either. In such cases, “enchanted determinism acquires an almost theological quality,” which tends to place it beyond the critique of technological utopians and dystopians alike.

    Pasquale, for his part, examines the ethics of AI as currently deployed and often circumvented in several contexts: medicine, education, media, law, military, and the political economy of automation, in each case in relation to human wisdom. His basic premise is that “we now have the means to channel technologies of automation, rather than being captured or transformed by them.” Like Crawford, then, he recommends exercising a resistant form of agency. Pasquale’s focus is on robots as automated systems. His rhetorical point of departure is a critique and revision of Isaac Asimov’s highly influential “laws of robotics,” developed in a 1942 short story—more than a decade before AI was officially launched in 1956. Because the world and law-making are far more complex than a short story, Pasquale finds Asimov’s laws ambiguous and difficult to apply, and proposes four new ones, which become the basis of his arguments throughout the book. They are:

    1. Robotic systems and AI should complement professionals, not replace them.
    2. Robotic systems and AI should not counterfeit humanity.
    3. Robotic systems and AI should not intensify zero-sum arms races.
    4. Robotic systems and AI must always indicate the identity of their creator(s), controller(s), and owner(s).

    ‘Laws’ entail regulation, which Pasquale endorses to promote four corresponding values: complementarity, authenticity, cooperation, and attribution. The four laws’ deployment depends on a critical distinction that Pasquale draws between technologies that replace people and those that assist us in doing our jobs better. AI, classically defined, seeks to create computers that “can sense, think, and act like humans.” Pasquale endorses an “Intelligence Augmentation” (IA) alternative. This is a crucial shift in emphasis; it is Pasquale’s own version of AI refusal.

    He acknowledges that, in the current economy, “there are economic laws that tilt the scale toward AI and against IA.” In his view, deployment of robots may, however, offer an opportunity for humanistic intervention in AI’s hegemony, because the presence of robots, unlike phones, tablets, or sensors, is physically intrusive. They are there for a purpose, which we may accept or reject at our peril, but find hard to ignore. Robots are being developed to enter fields that are already highly regulated, which offers an opportunity to shape their use in ways that conform to established legal standards of privacy and consumer protection. Pasquale is an advocate for building humane (IA) values within the technology, before robots are released into the wild.

    In each of his topical chapters, he explains how robots and other AI systems designed to advance the values of complementarity, authenticity, cooperation, and attribution might enhance human existence and community. Some chapters stand out as particularly insightful, including those on “automated media,” human judgment, and the political economy of automation. One of Pasquale’s chapters addresses important terrain that Crawford does not consider: medicine. Given past abuses by medical researchers in exploiting and/or ignoring race and gender, the field may be especially sensitive and receptive to an IA intervention, despite the formidable economic forces stacked against it. Pasquale shows, for example, how IA has amplified diagnostics in dermatology through pattern recognition, providing insight into what distinguishes malignant from benign moles.

    In our view, Pasquale’s closing chapter endorsing human wisdom, as opposed to AI, itself displays multiple examples of such wisdom. But some of their impact is blunted by more diffuse discussions of literature and art, valuable though those practices may be in counter-balancing the instrumental values of economics and engineering. Nonetheless, Pasquale’s argument is an eloquent tribute to a “human form of life that is fragile, embodied in mortal flesh, time-delimited, and irreproducible in silico.”

    The two books, read together, amount to a critique of AI ideology. Pasquale and Crawford write about the stuff that phrases like “artificial intelligence” and “machine learning” refer to, but their main concern is the mystique surrounding the words themselves. Crawford is especially articulate on this theme. She shows that, as an idea, AI is self-warranting. Floating above the undersea cables and rare-earth mines—ethereal and cloud-like—the discourse makes its compelling case for the future. Her work is to cut through the cloud cover, to reveal the mines and cables.

    So the idea of AI justifies even as it obscures. What Crawford and Pasquale draw out is that AI is a way of seeing the world—a lay epistemology. When we see the world through the lens of AI, we see extraction-ready data. We see countable aggregates everywhere we look. We’re always peering ahead, predicting the future with machinist probabilism. It’s the view from Palo Alto that feels like a god’s-eye view. From up there, the continents look patterned and classification-ready. Earth-bound disorder is flattened into clear signal. What AI sees, in Crawford’s phrase, is a “Linnaean order of machine-readable tables.” It is, in Pasquale’s view, an engineering mindset that prizes efficiency over human judgment.

    At the same time, as both authors show, the AI lens refracts the Cold War national security state that underwrote the technology for decades. Seeing like an AI means locating targets, assets, and anomalies. Crawford calls it a “covert philosophy of en masse infrastructural command and control,” a martial worldview etched in code.

    As Kenneth Burke observed, every way of seeing is also a way of not seeing. What AI can’t see is also its raw material: human complexity and difference. There is, in AI, a logic of commensurability—a reduction of messy and power-laden social life into “computable sameness.” So there is a connection, as both Crawford and Pasquale observe, between extraction and abstraction. The activity of everyday life is extracted into datasets that, in their bloodless tabulation, abstract away their origins. Like Marx’s workers, we are then confronted by the alienated product of our “labor”—interviewed or consoled or policed by AIs that we helped build.

    Crawford and Pasquale’s excellent books offer sharp and complementary critiques of the AI fog. Where they differ is in their calls to action. Pasquale, in line with his mezzo-level focus on specific domains like education, is the reformist. His aim is to persuade a policy community that he’s part of—to clear space between do-nothing optimists and fatalist doom-sayers. At core he hopes to use law and expertise to rein in AI and robotics—and to deploy AI much more conscientiously, under human control and for human ends.

    Crawford is more radical. She sees AI as a machine for boosting the power of the already powerful. She is skeptical of the movement for AI “ethics,” which she finds insufficient at best and veering toward exculpatory window-dressing. Atlas of AI ends with a call for a “renewed politics of refusal,” predicated on a just and solidaristic vision of the future.

    It would be easy to exaggerate Crawford and Pasquale’s differences, which reflect their projects’ scope and intended audience more than any disagreement of substance. Their shared call is to see AI for what it is. Left to follow its current course, the ideology of AI will reinforce the bars on the “iron cage” that sociologist Max Weber foresaw a century ago: incarcerating us in systems of power dedicated to efficiency, calculation, and control.

    _____

    Sue Curry Jansen is Professor of Media & Communication at Muhlenberg College, in Allentown, PA. Jeff Pooley is Professor of Media & Communication at Muhlenberg, and director of mediastudies.press, a scholar-led publisher. Their co-authored essay on Shoshana Zuboff’s The Age of Surveillance Capitalism—a review of the book’s reviews—recently appeared in New Media & Society.


    _____

    Notes

    [1] Crawford acknowledges the collaboration with Campolo, her research assistant, in developing this concept and the chapter on affect, generally.