b2o

  • Zachary Loeb — Hashtags Lean to the Right (Review of Schradie, The Revolution that Wasn’t: How Digital Activism Favors Conservatives)

    a review of Jen Schradie, The Revolution that Wasn’t: How Digital Activism Favors Conservatives (Harvard University Press, 2019)

    by Zachary Loeb

    ~

    Despite the oft-repeated, and rather questionable, trope that social media is biased against conservatives, and despite the attention that has been lavished on tech-savvy left-aligned movements (such as Occupy!) in recent years, social media is not necessarily of greater use to the left. It may be quite the opposite. This is a topic that documentary filmmaker, activist, and sociologist Jen Schradie explores in depth in her excellent and important book The Revolution That Wasn’t: How Digital Activism Favors Conservatives. Engaging with the political objectives of activists on the left and the right, Schradie’s book considers the political values that are reified in the technical systems themselves and the ways in which those values more closely align with the aims of conservative groups. Furthermore, Schradie emphasizes the socio-economic factors that allow particular groups to successfully harness high-tech tools, thereby demonstrating how digital activism reinforces the power of those who already enjoy a fair amount of it. Rather than suggesting that high-tech tools have somehow been stolen from the left by the right, The Revolution That Wasn’t argues that these were not the left’s tools in the first place.

    The background against which Schradie’s analysis unfolds is the state of North Carolina in the years after 2011. Generally seen as a “red state,” North Carolina had flipped blue for Barack Obama in 2008, leading to the state being increasingly seen as a battleground. Even though the state was starting to take on a purplish color, North Carolina was still home to a deeply entrenched conservatism that was reflected (and still is reflected) in many aspects of the state’s laws, and in the legacy of racist segregation that is still felt in the state. Though the Occupy! movement lingers in the background of Schradie’s account, her focus is on struggles in North Carolina around unionization, the rapid growth of the Tea Party, and the emergence of the “Moral Monday” movement which inspired protests across the state (starting in 2013). While many considerations of digital activism have focused on hip young activists festooned with piercings, hacker skills, and copies of The Coming Insurrection, the central characters of Schradie’s book are members of the labor movement, campus activists, Tea Party members, Preppers, people associated with “Patriot” groups, as well as a smattering of paid organizers working for large organizations. And though Schradie is closely attuned to the impact that financial resources have within activist movements, she pushes back against the “astroturf” accusation that is sometimes aimed at right-wing activists, arguing that the groups she observed on both the right and the left reflected genuine populist movements.

    There is a great deal of specificity to Schradie’s study, and many of the things that Schradie observes are particular to the context of North Carolina, but the broader lessons regarding political ideology and activism are widely applicable. In looking at the political landscape in North Carolina, Schradie carefully observes the various groups that were active around the unionization issue, and pays close attention to the ways in which digital tools were used in these groups’ activism. The levels of digital savviness vary across the political groups, and most of the groups demonstrate at least some engagement with digital tools; however, some groups embraced the affordances of digital tools to a much greater extent than others. And where Schradie’s book makes its essential intervention is not simply in showing these differing levels of digital use, but in explaining why. One of the core observations of Schradie’s account of North Carolina is that it was not the left-leaning groups, but the right-leaning groups, who were able to make the most of digital tools. It’s a point which, to a large degree, runs counter to general narratives on the left (and possibly also the right) about digital activism.

    In considering digital activism in North Carolina, Schradie highlights the “uneven digital terrain that largely abandoned left working-class groups while placing right-wing reformist groups at the forefront of digital activism” (Schradie, 7). In mapping out this terrain, Schradie emphasizes three factors that were pivotal in tilting this ground, namely class, organization, and ideology. Taken independently of one another, each of these three factors provides valuable insight into the challenges posed by digital activism, but taken together they allow for a clear assessment of the ways that digital activism (and digital tools themselves) favor conservatives. It is an analysis that requires some careful wading into definitions (the different ways that right and left groups define things like “freedom” really matters), but these three factors make it clear that “rather than offering a quick technological fix to repair our broken democracy, the advent of digital activism has simply ended up reproducing, and in some cases, intensifying, preexisting power imbalances” (Schradie, 7).

    Considering that the core campaign revolves around unionization, it should come as no surprise that class is a major issue in Schradie’s analysis. Digital evangelists have frequently suggested that high-tech tools allow for the swift breaking down of class barriers by providing powerful tools (and informational access) to more and more people—but the North Carolinian case demonstrates the ways in which class endures. Much of this has to do with the persistence of the digital divide, something which can easily be overlooked by onlookers (and academics) who have grown accustomed to digital tools. Schradie points to the presence of “four constraints” that have a pivotal impact on the class aspect of digital activism: “Access, Skills, Empowerment, and Time” (or ASETs for short; Schradie, 61). “Access” points to the most widely understood part of the digital divide, the way in which some people simply do not have a reliable and routine way of getting ahold of and/or using digital tools—it’s hard to build a strong movement online when many of your members have trouble getting online. This in turn reverberates with “Skills,” as those who have less access to digital tools often lack the know-how that develops from using those tools—not everyone knows how to craft a Facebook post, or how best to make use of hashtags on Twitter. While digital tools have often been praised precisely for the ways in which they empower users, this empowerment is often not felt by those lacking access and skills, leading many individuals from working-class groups to see “digital activism as something ‘other people’ do” (Schradie, 64). And though it may be the easiest factor to overlook, engaging in digital activism requires Time, something which is harder to come by for individuals working multiple jobs (especially the sort with bosses who do not want to see any workers using phones at work).

    When placed against the class backgrounds of the various activist groups considered in the book, the ASETs framework clearly sets up a situation in which conservative activists had the advantage. What Schradie found was “not just a question of the old catching up with the young, but of the poor never being able to catch up with the rich” (Schradie, 79), as the more financially secure conservative activists simply had more ASETs than the working-class activists on the left. And though the right-wing activists skewed older than the left-wing activists, they proved quite capable of learning to use new high-tech tools. Furthermore, an extremely important aspect here is that the working-class activists (given their economic precariousness) had more to lose from engaging in digital activism—the conservative retiree will be much less worried about losing their job than the garbage truck driver interested in unionizing.

    Though the ASETs echo throughout the entirety of Schradie’s account, “Time” plays an essential connective role in the shift from matters of class to matters of organization. Contrary to the way in which the Internet has often been praised for invigorating horizontal movements (such as Occupy!), the activist groups in North Carolina attest to the ways in which old bureaucratic and infrastructural tools are still essential. Or, to put it another way, if the various ASETs are viewed as resources, then having a sufficient quantity of all four is key to maintaining an organization. This meant that groups with hierarchical structures, clear divisions of labor, and more staff (be these committed volunteers or paid workers) were better equipped to exploit the affordances of digital tools.

    Importantly, this was not entirely one-sided. Tea Party groups were able to tap into funding and training from larger networks of right-wing organizations, but national unions and civil rights organizations were also able to support left-wing groups. In terms of organization, the overwhelming bias is less pronounced in terms of a right/left dichotomy and more a reflection of a clash between reformist and radical groups. When it came to organization, the bias was towards “reformist” groups (right and left) that replicated present power structures and worked within already existing social systems; the groups that lose out here tend to be the ones that more fully eschew hierarchy (an example of this being student activists). Though digital democracy can still be “participatory, pluralist, and personalized,” Schradie’s analysis demonstrates how “the internet over the long-term favored centralized activism over connective action; hierarchy over horizontalism; bureaucratic positions over networked persons” (Schradie, 134). Thus the importance of organization demonstrates not how digital tools allowed for a new “participatory democracy” but rather how standard hierarchical techniques continue to be key for groups wanting to participate in democracy.

    Beyond class and organization (insofar as it is truly possible to get past either), the ideology of activists on the left and activists on the right has a profound influence on how these groups use digital tools. For it isn’t the case that the left and the right try to use the Internet for the exact same purpose. Schradie captures this as a difference between pursuing fairness (the left), and freedom (the right)—this largely consisted of left-wing groups seeking a “fairer” allocation of societal power, while those on the right defined “freedom” largely in terms of protecting the allocation of power already enjoyed by these conservative activists. Believing that they had been shut out by the “liberal media,” many conservatives flocked to and celebrated digital tools as a way of getting out “the Truth,” their “digital practices were unequivocally focused on information” (Schradie, 167). As a way of disseminating information, to other people already in possession of ASETs, digital means provided right-wing activists with powerful tools for getting around traditional media gatekeepers. While activists on the left certainly used digital tools for spreading information, their use of the internet tended to be focused more heavily on organizing: on bringing people together in order to advocate for change. Further complicating things for the left is that Schradie found there to be less unity amongst leftist groups in contrast to the relative hegemony found on the right. Comparing the intersection of ideological agendas with digital tools, Schradie is forthright in stating, “the internet was simply more useful to conservatives who could broadcast propaganda and less effective for progressives who wanted to organize people” (Schradie, 223).

    Much of the way that digital activism has been discussed by the press, and by academics, has advanced a narrative that frames digital activism as enhancing participatory democracy. In these standard tales (which often ground themselves in accounts of the origins of the internet that place heavy emphasis on the counterculture), the heroes of digital activism are usually young leftists. Yet, as Schradie argues, “to fully explain digital activism in this era, we need to take off our digital-tinted glasses” (Schradie, 259). Removing such glasses reveals the way in which they have too often focused attention on the spectacular efforts of some movements, while overlooking the steady work of others—thus driving more attention to groups like Occupy! than to the buildup of right-wing groups. And looking at the state of digital activism through clearer eyes reveals many aspects of digital life that are obvious, yet which are continually forgotten, such as the fact that “the internet is a tool that favors people with more money and power, often leaving those without resources in the dust” (Schradie, 269). The example of North Carolina shows that groups on the left and the right are all making use of the Internet, but it is not just a matter of some groups having more ASETs; it is also the fact that the high-tech tools of digital activism serve certain types of values and aims better than others. And, as Schradie argues throughout her book, those tend to be the causes and aims of conservative activists.

    Despite the revolutionary veneer with which the Internet has frequently been painted, “the reality is that throughout history, communications tools that seemed to offer new voices are eventually owned or controlled by those with more resources. They eventually are used to consolidate power, rather than to smash it into pieces and redistribute it” (Schradie, 25). The question with which activists, particularly those on the left, need to wrestle is not just whether or not the Internet is living up to its emancipatory potential—but whether or not it ever really had that potential in the first place.

    * * *

    In an iconic photograph from 1948, a jubilant Harry S. Truman holds aloft a copy of the Chicago Daily Tribune emblazoned with the headline “Dewey Defeats Truman.” Despite the polls having predicted that Dewey would be victorious, when the votes were counted Truman had been sent back to the White House and the Democrats took control of the House and the Senate. An echo of this moment occurred some sixty-eight years later, though there was no comparable photo of Donald Trump smirking while holding up a newspaper carrying the headline “Clinton Defeats Trump.” In the aftermath of Trump’s victory pundits ate crow in a daze, pollsters sought to defend their own credibility by emphasizing that their models had never actually said that there was no chance of a Trump victory, and even some in Trump’s circle seemed stunned by his victory.

    As shock turned to resignation, the search for explanations and scapegoats began in earnest. Democrats blamed Russian hackers, voter suppression, the media’s obsession with Trump, left-wing voters who didn’t fall in line, and James Comey; while Republicans claimed that the shock was simply proof that the media was out of touch with the voters. Yet Republicans and Democrats seemed to at least agree on one thing: to understand Trump’s victory, it was necessary to think about social media. Granted, Republicans and Democrats were divided on whether this was a matter of giving credit or assigning blame. On the one hand, Trump had been able to effectively use Twitter to directly engage with his fan base; on the other hand, platforms like Facebook had been flooded with disinformation that spread rapidly through the online ecosystem. It did not take long for representatives, including executives, from the various social media companies to find themselves called before Congress, where these figures were alternately grilled about supposed bias against conservatives on their platforms, and taken to task for how their platforms had been so easily manipulated into helping Trump win the election.

    If the tech companies were only finding themselves summoned before Congress it would have been bad enough, but they were also facing frustrated employees, as well as disgruntled users, and the word “techlash” was being used to describe the wave of mounting frustration with these companies. Certainly, unease with the power and influence of the tech titans had been growing for years. Cambridge Analytica was hardly the first tech scandal. Yet much of that earlier displeasure was tempered by an overwhelmingly optimistic attitude towards the tech giants, as though the industry’s problematic excesses were indicative of growing pains as opposed to being signs of intrinsic anti-democratic (small d) biases. There were many critics of the tech industry before the arrival of the “techlash,” but they were liable to find themselves denounced as Luddites if they failed to show sufficient fealty to the tech companies. From company CEOs to an adoring tech press to numerous technophilic academics, in the years prior to the 2016 election smart phones and social media were hailed for their liberating and democratizing potential. Videos shot on smart phone cameras and uploaded to YouTube, political gatherings organized on Facebook, activist campaigns turning into mass movements thanks to hashtags—all had been treated as proof positive that high-tech tools were breaking apart the old hierarchies and ushering in a new era of high-tech horizontal politics.

    Alas, the 2016 election was the rock against which many of these high-tech hopes crashed.

    And though there are many strands contributing to the “techlash,” it is hard to make sense of this reaction without seeing it in relation to Trump’s victory. Users of Facebook and Twitter had been frustrated with those platforms before, but at the core of the “techlash” has been a certain sense of betrayal. How could Facebook have done this? Why was Twitter allowing Trump to break its own terms of service on a daily basis? Why was Microsoft partnering with ICE? How come YouTube’s recommendation algorithms always seemed to suggest far-right content?

    To state it plainly: it wasn’t supposed to be this way.

    But what if it was? And what if it had always been?

    In a 1985 interview with MIT’s newspaper The Tech, the computer scientist and social critic Joseph Weizenbaum had some blunt words about the ways in which computers had impacted society, telling his interviewer: “I think the computer has from the beginning been a fundamentally conservative force. It has made possible the saving of institutions pretty much as they were, which otherwise might have had to be changed” (ben-Aaron, 1985). This was not a new position for Weizenbaum; he had largely articulated the same idea in his 1976 book Computer Power and Human Reason, wherein he had pushed back at those he termed the “artificial intelligentsia” and the other digital evangelists of his day. Articulating his thoughts to the interviewer from The Tech, Weizenbaum raised further concerns about the close links between the military and computer work at MIT, and cast doubt on the real usefulness of computers for society—couching his dire fears in the social critic’s common defense “I hope I’m wrong” (ben-Aaron, 1985). Alas, as the decades passed, Weizenbaum came to feel that he had been right. When he turned his critical gaze to the internet in a 2006 interview, he decried the “flood of disinformation,” while noting “it just isn’t true that everyone has access to the so-called Information age” (Weizenbaum and Wendt 2015, 44-45).

    Weizenbaum was hardly the only critic to have looked askance at the growing importance that was placed on computers during the 20th century. Indeed, Weizenbaum’s work was heavily influenced by that of his friend and fellow social critic Lewis Mumford who had gone so far as to identify the computer as the prototypical example of “authoritarian” technology (even suggesting that it was the rebirth of the “sun god” in technical form). Yet, societies that are in love with their high-tech gadgets, and which often consider technological progress and societal progress to be synonymous, generally have rather little time for such critics. When times are good, such social critics are safely quarantined to the fringes of academic discourse (and completely ignored within broader society), but when things get rocky they have their woebegone revenge by being proven right.

    All of which is to say that thinkers like Weizenbaum and Mumford would almost certainly agree with The Revolution That Wasn’t, though they would probably not be surprised by it. After all, The Revolution That Wasn’t is a confirmation that we are today living in the world about which previous generations of critics warned. Indeed, if there is one criticism to be made of Schradie’s work, it is that the book could have benefited from more deeply grounding its analysis in the longstanding critiques of technology that have been made by the likes of Weizenbaum, Mumford, and quite a few other scholars and critics. Jo Freeman and Langdon Winner are both mentioned, but it’s important to emphasize that many social critics warned about the conservative biases of computers long before Trump got a Twitter account, and long before Mark Zuckerberg was born. Our widespread refusal to heed these warnings, and the tendency to mock those issuing these warnings as Luddites, technophobes, and prophets of doom, is arguably a fundamental cause of the present state of affairs which Schradie so aptly describes.

    With The Revolution That Wasn’t, Jen Schradie has made a vital intervention in current discussions (inside the academy and amongst activists) regarding the politics of social media. Eschewing a polemical tone, refusing either to sing the praises of social media or to condemn it outright, Schradie provides a measured assessment that addresses the way in which social media is actually being used by activists of varying political stripes—with a careful emphasis on the successes these groups have enjoyed. There is a certain extent to which Schradie’s argument, and some of her conclusions, represent a jarring contrast to much of the literature that has framed social media as being a particular boon to left-wing activists. Yet Schradie’s book highlights with disarming detail the ways in which a desire (on the part of left-leaning individuals) to believe that the Internet favors people on the left has been a sort of ideological blinder that has prevented them from fully coming to terms with how the Internet has re-entrenched the dominant powers in society.

    What Schradie’s book reveals is that “the internet did not wipe out barriers to activism; it just reflected them, and even at times exacerbated existing power differences” (Schradie, 245). Schradie allows the activists on both sides to speak in their own words, taking seriously their claims about what they were doing. And while the book is closely anchored in the context of a particular struggle in North Carolina, the analytical tools that Schradie develops (such as the ASET framework, and the tripartite emphasis on class/organization/ideology) allow Schradie’s conclusions to be mapped onto other social movements and struggles.

    While the research that went into The Revolution That Wasn’t clearly predates the election of Donald Trump, and though he is not a main character in the book, the 45th president lurks in the background of the book (or perhaps just in the reader’s mind). Had Trump lost the election, every part of Schradie’s analysis would be just as accurate and biting; however, those seeking to defend social media tools as inherently liberating would probably not be finding themselves on the defensive today (a position that most of them were never expecting to be in). Yet what makes Schradie’s account so important is that the book is not simply concerned with whether or not particular movements used digital tools; rather, Schradie is able to step back to consider the degree to which the use of social media tools has been effective in fulfilling the political aims of the various groups. Yes, Occupy! might have made canny use of hashtags (and, if one wants to be generous, one can say that it helped inject the discussion of inequality back into American politics), but nearly ten years later the wealth gap is continuing to grow. For all of the hopeful luster that has often surrounded digital tools, Schradie’s book shows the way in which these tools have just placed a fresh coat of paint on the same old status quo—even if this coat of paint is shiny and silvery.

    As the technophiles scramble to rescue the belief that the Internet is inherently democratizing, The Revolution That Wasn’t takes its place amongst a growing body of critical works that are willing to challenge the utopian aura that has been built up around the Internet. While it must be emphasized, as the earlier allusion to Weizenbaum shows, that there have been thinkers criticizing computers and the Internet for as long as there have been computers and the Internet—of late there has been an important expansion of such critical works. There is not the space here to offer an exhaustive account of all of the critical scholarship being conducted, but it is worthwhile to mention some exemplary recent works. Safiya Umoja Noble’s Algorithms of Oppression provides an essential examination of the ways in which societal biases, particularly about race and gender, are reinforced by search engines. Ruha Benjamin’s recent work on the “New Jim Code,” as seen in Race After Technology and the Captivating Technology volume she edited, foregrounds the ways in which technological systems reinforce white supremacy. The work of Virginia Eubanks, both Digital Dead End (whose concerns make it likely the most important precursor to Schradie’s book) and her more recent Automating Inequality, discusses the ways in which high-tech systems are used to police and control the impoverished. Examinations of e-waste (such as Jennifer Gabrys’s Digital Rubbish) and infrastructure (such as Nicole Starosielski’s The Undersea Network, and Tung-Hui Hu’s A Prehistory of the Cloud) point to the ways in which colonial legacies are still very much alive in today’s high-tech systems. The internationalist sheen that is often ascribed to digital media is carefully deconstructed in works like Ramesh Srinivasan’s Whose Global Village?
Works like Meredith Broussard’s Artificial Unintelligence and Shoshana Zuboff’s Age of Surveillance Capitalism raise deep questions about the overall politics of digital technology. And, with its deep analysis of the way that race and class are intertwined with digital access and digital activism, The Revolution That Wasn’t deserves a place amongst such works.

    What much of this recent scholarship has emphasized is that technology is never neutral. And while this may be a point which is accepted wisdom amongst scholars in these relevant fields, these works (and scholars) have taken great care to make this point to the broader public. It is not just that tools can be used for good, or for bad—but that tools have particular biases built into them. Pretending those biases aren’t there doesn’t make them go away. Kranzberg’s first law asserts that technology is neither good nor bad, nor is it neutral—but when one moves away from talking about technology to particular technologies, it is quite important to be able to say that certain technologies may actually be bad. This is a particular problem when one wants to consider things like activism. There has always been something asinine to the tactic of mocking activists pushing for social change while using devices created by massive multinational corporations (as the well-known comic by Matt Bors notes); however, the reason that this mockery is so often repeated is that it has a kernel of troubling truth to it. After all, there is something a little discomforting about using a device running on minerals mined in horrendous conditions, which was assembled in a sweatshop, and which will one day go on to be poisonous e-waste—for organizing a union drive.

    Matt Bors, detail from "Mister Gotcha" (2016)

    Or, to put it slightly differently, when we think about the democratizing potential of technology, to what extent are we privileging those who get to use (and discard) these devices, over those whose labor goes into producing them? That activists may believe that they are using a given device or platform for “good” purposes, does not mean that the device itself is actually good. And this is a tension Schradie gets at when she observes that “instead of a revolutionary participatory tool, the internet just happened to be the dominant communication tool at the time of my research and simply became normalized into the groups’ organizing repertoire” (Schradie, 133). Of course, activists (of varying political stripes) are making use of the communication tools that are available to them and widely used in society. But just because activists use a particular communication tool, doesn’t mean that they should fall in love with it.

    This is not in any way to call activists using these tools hypocritical, but it is a further reminder of the ways in which high-tech tools inscribe their users within the very systems they may be seeking to change. And this is certainly a problem that Schradie’s book raises, as she notes that one of the reasons conservative values get a bump from digital tools is that these conservatives are generally already the happy beneficiaries of the systems that created these tools. Scholarship on digital activism has considered the ideologies of various technologically engaged groups before, and there have been many strong works produced on hackers and open source activists, but often the emphasis has been placed on the ideologies of the activists without enough consideration being given to the ways in which the technical tools themselves embody certain political values (an excellent example of a work that truly considers activists picking their tools based on the values of those tools is Christina Dunbar-Hester’s Low Power to the People). Schradie’s focus on ideology is particularly useful here, as it helps to draw attention to the way in which various groups’ ideologies map onto or come into conflict with the ideologies that these technical systems already embody. What makes Schradie’s book so important is not just its account of how activists use technologies, but its recognition that these technologies are also inherently political.

    Yet the thorny question that undergirds much of the present discourse around computers and digital tools remains “what do we do if, instead of democratizing society, these tools are doing just the opposite?” And this question just becomes tougher the further down you go: if the problem is just Facebook, you can pose solutions such as regulation and breaking it up; however, if the problem is that digital society rests on a foundation of violent extraction, insatiable lust for energy, and rampant surveillance, solutions are less easily available. People have become so accustomed to thinking that these technologies are fundamentally democratic that they are loath to believe analyses, such as Mumford’s, that they are instead authoritarian by nature.

    While reports of a “techlash” may be overstated, it is clear that at the present moment it is permissible to be a bit more critical of particular technologies and the tech giants. However, there is still a fair amount of hesitance about going so far as to suggest that maybe there’s just something inherently problematic about computers and the Internet. After decades of being told that the Internet is emancipatory, many people remain committed to this belief, even in the face of mounting evidence to the contrary. Trump’s election may have placed some significant cracks in the dominant faith in these digital devices, but suggesting that the problem goes deeper than Facebook or Amazon is still treated as heretical. Nevertheless, it is a matter that is becoming harder and harder to avoid. For it is increasingly clear that it is not a matter of whether or not these devices can be used for this or that political cause, but of the overarching politics of these devices themselves. It is not just that digital activism favors conservatism, but as Weizenbaum observed decades ago, that “the computer has from the beginning been a fundamentally conservative force.”

    With The Revolution That Wasn’t, Jen Schradie has written an essential contribution to current conversations not only about the use of technology for political purposes, but also about the politics of technology. As an account of left-wing and right-wing activists, Schradie’s book is a worthwhile consideration of the ways that various activists use these tools. Yet where this altogether excellent work really stands out is in the way it highlights the politics that are embedded in and reified by high-tech tools. Schradie is certainly not suggesting that activists abandon their devices—insofar as these are the dominant communication tools at present, activists have little choice but to use them—but this book puts forth a nuanced argument about the need for activists to think critically about whether they’re using digital tools, or whether the digital tools are using them.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2 Review Digital Studies section.


    _____

    Works Cited

    • ben-Aaron, Diana. 1985. “Weizenbaum Examines Computers and Society.” The Tech (Apr 9).
    • Weizenbaum, Joseph, and Gunna Wendt. 2015. Islands in the Cyberstream: Seeking Havens of Reason in a Programmed Society. Duluth, MN: Litwin Books.
  • “Dennis Erasmus” — Containment Breach: 4chan’s /pol/ and the Failed Logic of “Safe Spaces” for Far-Right Ideology


    “Dennis Erasmus”

    This essay has been peer-reviewed by “The New Extremism” special issue editors (Adrienne Massanari and David Golumbia), and the b2o: An Online Journal editorial board.

    Author’s Note: This article was written prior to the deadly far-right riot in Charlottesville, Virginia, on August 11-12, 2017. Footnotes have been added with updated information where possible or necessary, but the article has otherwise been left largely unchanged.

    Introduction

    This piece is a discussion of one place on the internet where the far right meets, formulates its propaganda and campaigns, and ultimately reproduces and refines its ideology.

    4chan’s Politically Incorrect image board (like other 4chan boards, regularly referred to by the last portion of its URL, “/pol/”) is one of the most popular boards on the highly active and lightly moderated website, as well as a major online hub for far-right politics, memes, and coordinated harassment campaigns. Unlike most of the hobby-oriented boards on 4chan, /pol/ came into its current form through a series of board deletions and restorations intended to improve the discourse of the hobby boards by restricting unrelated political discussion, often of a bigoted nature, to a single location on the website. /pol/ is thus often referred to as a “containment board,” with the understanding that far-right content is meant to be kept in that single forum.

    The /new/ – News board was deleted on January 17, 2011, and /pol/ – Politically Incorrect was added to the website on November 10, 2011. 4chan’s original owner (and current Google employee) Christopher Poole (alias “moot”) deleted /new/ for its disproportionately high concentration of racist discussion. In Poole’s words:

    As for /new/, anybody who used it knows exactly why it was removed. When I re-added the board last year, I made a note that if it devolved into /stormfront/, I’d remove it. It did — ages ago. Now it’s gone, as promised.[1]

    “/stormfront/” is a reference to Stormfront.org, one of the oldest and largest white supremacist forums on the internet. Stormfront was founded by a former KKK leader and is listed as an extremist group by the Southern Poverty Law Center (Southern Poverty Law Center 2017c).

    Despite once showing this commitment to maintaining a news board that was not dominated by far-right content, /pol/ nevertheless followed suit and gained a reputation as a haven for white supremacist politics (Dewey 2014).

    While there was the intention to keep political discussion contained in /pol/, far-right politics is a frequent theme on the other major discussion boards on the website and has come to be strongly associated with 4chan in general.

    The Logic of Containment

    The nature of 4chan means that for every new thread created, an old thread “falls off” of the website and is deleted or archived. Because of its high worldwide popularity and the fast pace of discussion, it has sometimes been viewed as necessary to split up boards into specific topics so that the rate of thread creation does not prematurely end productive, on-topic, ongoing conversations.

    The most significant example of a topic requiring “containment” is perhaps My Little Pony. The premiere of the 2010 animated series My Little Pony: Friendship is Magic led to a surge of interest in the franchise and a major fan following composed largely of young adult males (covered extensively in the media as “bronies”), 4chan’s key demographic (Whatisabrony.com 2017).

    Posters who wished to discuss other cartoons on the /co/ – Comics and Cartoons board were often left feeling crowded out by the intense and rapid pace of the large and excited fanbase that was only interested in discussing ponies. After months of complaints, a new board, /mlp/ – My Little Pony, was opened to accommodate both fans and detractors by giving the franchise a dedicated platform for discussion. For the most part, fans have been happy to stay and discuss the series among one another. There is also a site-wide rule that pony-related discussion must be confined to /mlp/, and while enforcement of 4chan’s rules is notoriously lax, this one has mostly been applied (4chan 2017).

    A similar approach has been taken for several other popular hobbies; for instance, the creation of /vp/ – Pokémon for all media—be it video games, comics, or television—related to the very popular Japanese franchise.

    A common opinion on 4chan is that /pol/ serves as a “containment board” for the neo-Nazi, racist, and other far-right interests of many who use the website (Anonymous /q/ poster 2012). Someone who posts a blatantly political message on the /tv/ – Television and Film board, for instance, may be told “go back to your containment board.” One could argue, as well, that the popular and rarely moderated /b/ – Random board was originally a “containment board” for all of the off-topic discussion that would otherwise have derailed the specific niche or hobby boards.

    Moderators as Humans

    Jay Irwin, a moderator of 4chan and an advertising technology professional, wrote an article for The Observer,[2] published April 25, 2017, arguing that an unwelcome “liberal agenda” in entertainment was serving to inspire greater conservatism on 4chan’s traditionally apolitical boards. Generalizations about the nature of 4chan’s userbase can be difficult, but Irwin’s status as a moderator means he has the ability to remove certain discussion threads while allowing others to flourish, shaping the discourse and apparent consensus of the website’s users.

    Irwin’s writing in The Observer shows a clear personal distaste for what he perceives as a liberal political agenda: in this specific case, Bill Nye’s assertion, backed up by today’s scientific consensus regarding human biology, that gender is a spectrum and not a binary:

    The show shuns any scientific approach to these topics, despite selling itself—and Bill Nye—as rigorously reason-based. Rather than providing evidence for the multitude of claims made on the show by Nye and his guests, the series relies on the kind of appeals to emotion one would expect in a gender studies class…The response on /tv/ was swift. The most historically apolitical 4channers are almost unanimously and vehemently opposed to the liberal agenda and lack of science on display in what is billed as a science talk show. Scores of 4chan users who have always avoided and discouraged political conversations have expressed horror at what they see as a significant uptick in the entertainment industry’s attempts to indoctrinate viewers with leftist ideology. (Irwin 2017)

    As Irwin believes the users of /tv/ are becoming less tolerant of liberal media, he expects them to also become warmer to far-right ideas and discussions that they once would have dismissed as off-topic and out of place on a television and film discussion board. Whether or not this is true of the /tv/ userbase, his obvious bias in favor of these ideas is able to inform the moderation that is applied when determining just how “off-topic” an anti-liberal thread might be.

    On the other end of the spectrum, a 4chan moderator was previously removed from the moderation team after issuing a warning against a user with explicitly political reasoning. In the aftermath of the fatal December 2, 2016 fire at the Ghost Ship warehouse, an artists’ space and venue in Oakland, California, that killed thirty-six people, users of /pol/ attempted to organize a campaign to shut down DIY (“do-it-yourself”) spaces across the United States by reporting noncompliance with fire codes to local authorities, in order to “crush the radical left” (KnowYourMeme 2017). As another moderator confirmed in a thread on /qa/, the board designated for discussions about 4chan, the fired moderator had clearly stated their belief that the campaign to shut down DIY spaces was an attack on marginalized communities by neo-Nazis (Anonymous##Mod 2016).

    The anti-DIY campaign is a clear example of the kind of “brigading”—use of /pol/ as an organizational and propaganda hub for right-wing political activities on other sites or in real life—that regularly occurs on the mostly-anonymous imageboard. The fired moderator’s error was not having a political agenda—as Irwin’s writing in The Observer demonstrates, he has an agenda of his own—but expressing it directly. They could have done as Irwin has the capacity to do, selectively deleting threads not to their liking with no justification required, so as to maintain the facade of neutrality that is so important for the financially struggling site’s brand.

    He Will Not Divide Us

    Another example of brigading is the harassment surrounding the art project “He Will Not Divide Us” (HWNDU) by Shia LaBeouf, Nastja Säde Rönkkö & Luke Turner. Launched during the inauguration of President Trump on January 20, 2017, the project was to broadcast a 24-hour live stream for four years from outside the Museum of the Moving Image in New York City. LaBeouf was frequently at the location leading crowds in relatively inoffensive chants: “he will not divide us,” and the like.

    LaBeouf, Rönkkö & Turner, HE WILL NOT DIVIDE US (2017). Image source: Nylon

    Within a day, threads on /pol/ calling for raids against the exhibit were amassing hundreds of replies, with suggestions ranging from leaving booby-trapped racist posters taped on top of razor blades, so as to cut people who tried to remove them, to simply sending in “the right wing death squads” (Anonymous /pol/ poster 2017). Notably, and as the /pol/ brigaders themselves pointed out, two of the three HWNDU artists, LaBeouf and Turner, are Jewish.

    Raid participants who coordinated on /pol/ and other far-right websites flashed white nationalist paraphernalia, neo-Nazi tattoos, and within five days of opening, directly told LaBeouf “Hitler did nothing wrong” while he was present at the exhibit (Horton 2017). LaBeouf was later arrested and charged with misdemeanor assault against one of the people who went to his art exhibit with the intent of disrupting it, though the charges were later dismissed (France 2017).

    On February 10, less than a month into the intended four-year run of the project, the Museum of the Moving Image released a statement declaring its intent to shut down HWNDU, perhaps at the urging of the NYPD, which had to dedicate resources to monitoring the space after regular clashes:

    The installation created a serious and ongoing public safety hazard for the museum, its visitors, its staff, local residents and businesses. The installation had become a flashpoint for violence and was disrupted from its original intent. While the installation began constructively, it deteriorated markedly after one of the artists was arrested at the site of the installation and ultimately necessitated this action. (Saad 2017)

    High-profile liberal advocates of free speech causes did not draw attention to the implications of a Jewish artist’s exhibit being cancelled due to constant harassment by neo-Nazis and other far-right elements. New York magazine’s Jonathan Chait, one of the most high-profile liberal opponents of “politically correct” suppression of speech, spent his time policing the limits of discourse by criticizing anti-fascist political activists (Chait 2017). The American Civil Liberties Union spent its energy defending former right-wing celebrity and noted pederasty advocate Milo Yiannopoulos against his critics (NPR 2017).

    Containment Failure

    Those among 4chan’s leadership who sincerely believed themselves to be politically neutral, or at least not far-right, were mistaken to view far-right politics as simply another hobby rather than the basis of an ideology.

    Ideology is not easily compartmentalized. Unlike a hobby, an ideology has the power to follow its adherents into all areas of their lives. Whether that ideology is cultivated in a “safe space” that is digital or physical, it is nonetheless brought with its possessor out into the world.

    Attempting to contain far-right ideology in physical and virtual spaces in fact provides its adherents with one of the essential conditions the ideology needs to thrive and to feed society’s reactionary movements.

    By way of comparison, the users of /mlp/ or other successful containment boards do not use their discussion space to organize raids and targeted harassment campaigns because, basically, hobbies do not traditionally have antagonists (with Gamergate being a notable exception). Adherents to far-right ideology, on the other hand, see liberal protesters, Hollywood activists, “cultural Marxists,” “globalist Jews,” white people comfortable with interracial marriages, black and brown people of all persuasions, and anti-fascist street fighters as standing in direct opposition to their interests. When gathered with like-minded people, they will discuss the urgency of combating these forces and, if possible, encourage one another to act against these enemies.

    It seems obvious that a board which has been documented organizing campaigns to harass a Jewish artist until his art exhibit is shut down, or to attempt to force the closure of spaces they believe belong to the “far left,” is anything but contained.

    If anything, the DIY venue example shows exactly how the average /pol/ user views designated ideological spaces: leftists will use those venues to organize, they assert, and taking the venues away will decrease the left’s capacity. If a DIY venue truly contained leftists, it would be advantageous for /pol/ to leave such spaces alone and let leftists keep talking among themselves. Rather, the far-right /pol/ userbase demonstrates through its actions that it believes leftists use their political spaces the same way /pol/ uses its own: as a base for launching attacks against their enemies.

    Countdown: What Comes Next

    The political right in the United States remains divided in tactics, aesthetics, and capacity.

    Footage surfaced from a June 10, 2017 rally in Houston, Texas, showing an alt-right activist being choked by an Oath Keeper—a member of a right-wing paramilitary organization—following a disagreement (Kragie and Lewis 2017). The alt-right activist is clearly signaling his affiliation with the internet-fueled right one might find in or inspired by /pol/, displaying posters that represent several recognizable 4chan memes (Pepe, Wojak/”feels guy”, Baneposting), in addition to neo-Nazi imagery (a stylized SS in the words “The Fire Rises,” an American flag modified to contain the Nazi-associated Black Sun or Sonnenrad). Which element of his approach provoked the ire of the Oath Keepers—identified by the SPLC as one of the largest anti-government organizations in the country—is not clear (Southern Poverty Law Center 2017b). The differences between the far right inspired by 4chan and the paramilitary far right, drawn mostly from ex-military and ex-police, may be largely aesthetic, but these differences nonetheless matter.[3]

    None of this is to discount the threat to life posed by the young and awkward meme-spouting members of the far right. Brandon Russell, aged 21, was arrested by authorities in Florida after being found in possession of bomb-making materials, including explosive chemicals and radioactive material. He admitted his affiliation with an online neo-Nazi group called Atomwaffen, German for “atomic weapons,” an SPLC-identified hate group (Southern Poverty Law Center 2017a).

    Russell was not found due to an investigation into terroristic far-right groups, but because of a bizarre series of events in which one of his three roommates, who claimed to have originally shared the neo-Nazi beliefs of the others, allegedly converted to Islam and murdered the other two for disrespecting his new faith. Police only found Russell’s bomb and radioactive materials while examining this crime scene (Elfrink 2017).

    The Trump regime and its Department of Justice, then headed by Jefferson Beauregard Sessions, indicated that it planned to cut off what little funding had been directed towards investigating far-right and white supremacist extremist groups, focusing instead purely on the specter of Islamic extremism (Pasha-Robinson 2017).

    By several metrics, far-right terrorism is a greater threat to Americans than terrorism connected to Islamism, and seems on track to maintain this record (Parkin et al. 2017).

    A federal judge ruled that Russell, who was found to own a framed photograph of Oklahoma City bomber Timothy McVeigh—whose ammonium nitrate bomb killed 168 people in 1995—may be released on bond, writing that there was no evidence that he used or planned to use a homemade radioactive bomb (Phillips 2017). Admitted affiliation with neo-Nazi ideology, which glorifies a regime known for massacring leftists, minorities, and Jews, was not taken as evidence of a desire to maim or kill leftists, minorities, or Jews.

    Just like the well-intentioned 4chan moderators who believed in the compartmentalization or “containability” of ideology, U.S. Magistrate Judge Thomas McCoun III seemed to believe that neo-Nazi ideology is little more than a hobby that can be pursued separately from one’s procurement and assembly of chemical bombs. McCoun did not consider that far-right politics is not a simple interest, but produces a worldview that generates answers to why one assembles a dirty bomb and how it is ultimately used.

    Judge McCoun only changed his mind and revoked the order to grant Russell bail after seeing video testimony from Russell’s former roommate, who claimed Russell planned to use a radioactive bomb to attack a nuclear power plant in Florida with the intention of irradiating ocean water and wiping out “parts of the Eastern Seaboard” (Sullivan 2017). Living with other neo-Nazis, it seems, gave Russell the confidence and safe space he needed to plan to carry out a McVeigh-style attack to inflict massive loss of life.[4]

    Finally, one should note that Russell, who might still be free were it not for the brash murders allegedly committed by his roommate, is also a member of the Florida National Guard. The internet far right may look and sound quite different from the paramilitary Oath Keepers today, but that difference may change in time as well.

    _____

    Dennis Erasmus (pseudonym) (@erasmusNYT) lived in Charlottesville, Virginia for six years prior to 2016. He has studied political theory and was active on 4chan for roughly eight years.


    _____

    Notes
    [1] Statement posted by moot on Nov at the /tmp/ board at http://content.4chan.org/tmp/r9knew.txt, and previously archived at the Webcite 4chan archive http://www.webcitation.org/6159jR9pC, and accessed by the author on July 9, 2017. The archive was deleted in early 2019.

    [2] The New York Observer, now a web-only publication, came under the ownership of Jared Kushner, President Donald J. Trump’s son-in-law, in 2006. The Observer is one of relatively few papers to have endorsed Trump during the 2016 Republican primary.

    [3] The alt-right activist who said “these are good memes” is supposedly William Fears, who was present at the Charlottesville 2017 riot and was arrested later that year in connection with a shooting directed at anti-racist protesters in Florida. While Fears’ brother pleaded guilty to accessory after the fact to attempted first-degree murder, charges against Fears were dropped so that he could be extradited to Texas for hitting and choking his ex-girlfriend. See Brett Barrouquere, “Texas Judge Hikes Bond on White Supremacist William Fears” (SPLC, Apr 17, 2018) and Brett Barrouquere, “Cops Say Richard Spencer Supporter William Fears IV Choked Girlfriend Days Before Florida Shooting” (SPLC, Jan 23, 2018).

    [4] Russell pleaded guilty to possession of an unlicensed destructive device and improper storage of explosive materials. He was sentenced to five years in prison. U.S. District Judge Susan Bucklew said “it’s a difficult case” and that Russell seemed “like a very smart young man.” See “Florida Neo-Nazi Leader Gets 5 Years for Having Explosive Material” (AP, Jan 9, 2018).
    _____

    Works Cited


  • Leif Weatherby — Irony and Redundancy: The Alt Right, Media Manipulation, and German Idealism


    Leif Weatherby

    This essay has been peer-reviewed by “The New Extremism” special issue editors (Adrienne Massanari and David Golumbia), and the b2o: An Online Journal editorial board.

    Take three minutes to watch this clip from a rally in New York City just after the 2016 presidential election.[i] In the impromptu interview, we learn that Donald Trump is going to “raise the ancient city of Thule” and “complete the system of German Idealism.” In what follows, I’m going to interpret what the troll in the video—known only by his twitter handle, @kantbot2000—is doing here. It involves Donald Trump, German Idealism, metaphysics, social media, and above all irony. It’s a diagnosis of the current relationship between mediated speech and politics. I’ll come back to Kantbot presently, but first I want to lay the scene he’s intervening in.

    A small but deeply networked group of self-identifying trolls and content-producers has used the apparently unlikely rubric of German philosophy to diagnose our media-rhetorical situation. There’s less talk of trolls now than there was in 2017, but that doesn’t mean they’re gone.[ii] Take the recent self-introductory op-ed by Brazil’s incoming foreign minister, Ernesto Araújo, which bizarrely accuses Ludwig Wittgenstein of undermining the nationalist identity of Brazilians (and everyone else). YouTube remains the global channel of this Alt Right[iii] media game, as Andre Pagliarini has documented: one Olavo de Carvalho, whose channel is dedicated to the peculiar philosophical obsessions of the global Alt Right, is probably responsible for this foreign minister taking the position, apparently intended as policy, “I don’t like Wittgenstein,” and possibly for his appointment in the first place. The intellectuals playing this game hold that Marxist and postmodern theory caused the political world to take its present shape, and argue that a wide variety of theoretical tools should be reappropriated to the Alt Right. This situation presents a challenge to the intellectual Left on both epistemological and political grounds.

    The core claim of this group—one I think we should take seriously—is that mediated speech is essential to politics. In a way, this claim is self-fulfilling. Araújo, for example, imagines that Wittgenstein’s alleged relativism is politically efficacious; Wittgenstein arrives pre-packaged by the YouTube phenomenon Carvalho; Araújo’s very appointment seems to have been the result of Carvalho’s influence. That this tight ideological loop should realize itself by means of social media is not surprising. But in our shockingly naïve public political discussions—at least in the US—emphasis on the constitutive role of rhetoric and theory appears singular. I’m going to argue that a crucial element of this scene is a new tone and practice of irony that permeates the political. This political irony is an artefact of 2016, most directly, but it lurks quite clearly beneath our politics today. And to be clear, the self-styled irony of this group is never at odds with a wide variety of deeply held, and usually vile, beliefs. This is because irony and seriousness are not, and have never been, mutually exclusive. The idea that the two cannot cohabit is one of the more obvious weak points of our attempt to get an analytical foothold on the global Alt Right—to do so, we must traverse the den of irony.

    Irony has always been a difficult concept, slippery to the point of being undefinable. It usually means something like “when the actual meaning is the complete opposite from the literal meaning,” as Ethan Hawke tells Winona Ryder in 1994’s Reality Bites. Ryder’s plaint, “I know it when I see it,” points to just how many questions this definition raises. What counts as a “complete opposite”? What is the channel—rhetorical, physical, or otherwise—by which this dual expression can occur? What does it mean that what we express can contain not only implicit or connotative content, but can in fact make our speech contradict itself to some communicative effect? And for our purposes, what does it mean when this type of question embeds itself in political communication?

    Virtually every major treatment of irony since antiquity—from Aristotle to Paul de Man—acknowledges these difficulties. Quintilian gives us the standard definition: that the meaning of a statement is in contradiction to what it literally extends to its listener. But he still equivocates about its source:

    eo vero genere, quo contraria ostenduntur, ironia est; illusionem vocant. quae aut pronuntiatione intelligitur aut persona aut rei natura; nam, si qua earum verbis dissentit, apparet diversam esse orationi voluntatem. quamquam in plurimis id tropis accidit, ut intersit, quid de quoque dicatur, quia quod dicitur alibi verum est.

    On the other hand, that class of allegory in which the meaning is contrary to that suggested by the words, involve an element of irony, or, as our rhetoricians call it, illusio. This is made evident to the understanding either by the delivery, the character of the speaker or the nature of the subject. For if any one of these three is out of keeping with the words, it at once becomes clear that the intention of the speaker is other than what he actually says. In the majority of tropes it is, however, important to bear in mind not merely what is said, but about whom it is said, since what is said may in another context be literally true. (Quintilian 1920, book VIII, section 6, 53-55)

    Speaker, ideation, context, addressee—all of these are potential sources for the contradiction. In other words, irony is not limited to the intentional use of contradiction, to a wit deploying irony to produce an effect. Irony slips out of precise definition even in the version that held sway for more than a millennium in the Western tradition.

    I’m going to argue in what follows that irony of a specific kind has re-opened what seemed a closed channel between speech and politics. Certain functions of digital, and specifically social, media enable this kind of irony, because the very notion of a digital “code” entailed a kind of material irony to begin with. This type of irony can be manipulated, but also exceeds anyone’s intention, and can be activated accidentally (this part of the theory of irony comes from the German Romantic Friedrich Schlegel, as we will see). It not only amplifies messages, but does so by resignifying, exploiting certain capacities of social media. Donald Trump is the master practitioner of this irony, and Kantbot, I’ll propose, is its media theorist. With this irony, political communication has exited the neoliberal speech regime; the question is how the Left responds.

    i. “Donald Trump Will Complete the System of German Idealism”

    Let’s return to our video. Kantbot is trolling—hard. There’s obvious irony in the claim that Trump will “complete the system of German Idealism,” the philosophical network that began with Immanuel Kant’s Critique of Pure Reason (1781) and ended (at least on Kantbot’s account) only in the 1840s with Friedrich Schelling’s philosophy of mythology. Kant is best known for having cut a middle path between empiricism and rationalism. He argued that our knowledge is spontaneous and autonomous, not derived from what we observe but combined with that observation and molded into a nature that is distinctly ours, a nature to which we “give the law,” set off from a world of “things in themselves” about which we can never know anything. This philosophy touched off what G.W.F. Hegel called a “revolution,” one that extended to every area of human knowledge and activity. History itself, Hegel would famously claim, was the forward march of spirit, or Geist, the logical unfolding of self-differentiating concepts that constituted nature, history, and institutions (including the state). Schelling, Hegel’s one-time roommate, had deep reservations about this triumphalist narrative, reserving a place for the irrational, the unseen, the mythological, in the process of history. Hegel, according to a legend propagated by his students, finished his 1807 Phenomenology of Spirit while listening to the guns of the battle of Auerstedt-Jena, where Napoleon defeated the Prussians, in the year that brought a final end to the Holy Roman Empire. Hegel saw himself as the philosopher of Napoleon’s moment, at least in 1807; Kantbot sees himself as the Hegel to Donald Trump’s Napoleon (more on this below).

    Rumor has it that Kantbot is an accountant in NYC, although no one has been able to doxx him yet. His twitter has more than 26,000 followers at the time of writing. This modest fame is complemented by a deep lateral network among the biggest stars on the Far Right. To my eye he has made little progress in gaining fame—but also in developing his theory, on which he has recently promised a book “soon”—in the last year. Conservative media reported that he was interviewed by the FBI in 2018. His newest line of thought involves “hate hoaxes” and questioning why he can’t say the n-word—a regression to platitudes of the extremist Right that have been around for decades, as David Neiwert has extensively documented (Neiwert 2017). Sprinkled between these are exterminationist fantasies—about “Spinozists.” He toggles between conspiracy, especially of the false-flag variety, hate-speech-flirtation, and analysis. He has recently started a podcast. The whole presentation is saturated in irony and deadly serious:

    Asked how he identifies politically, Kantbot recently claimed to be a “Stalinist, a TERF, and a Black Nationalist.” Mike Cernovich, the Alt Right leader who runs the website Danger and Play, has been known to ask Kantbot for advice. There is also an indirect connection between Kantbot and “Neoreaction” or NRx, a brand of “accelerationism” which itself is only blurrily constituted by the blog-work of Curtis Yarvin, aka Mencius Moldbug, and by enthusiasm for the philosophy of Nick Land (another reader of Kant). Kantbot also “debated” White Nationalist thought leader Richard Spencer, presenting the spectacle of Spencer, who wrote a master’s thesis on Adorno’s interpretation of Wagner, listening thoughtfully to Kantbot’s explanation of Kant’s rejection of Johann Gottfried Herder, rather than the body count, as the reason to reject Marxism.

    When conservative pundit Ann Coulter got into a Twitter feud with Delta over a seat reassignment, Kantbot came to her defense. She retweeted the captioned image below, which was then featured on Breitbart News in an article called “Zuckerberg 2020 Would be a Dream Come True for Republicans.”

    Kantbot’s partner-in-crime, @logo-daedalus (the very young guy in the maroon hat in the video), has recently jumped on a minor fresh wave of ironist political memeing in support of the UBI-focused presidential candidate Andrew Yang (#yanggang). He was once asked by Cernovich if he had read Michael Walsh’s book The Devil’s Pleasure Palace: The Cult of Critical Theory and the Subversion of the West:

    The autodidact intellectualism of this Alt Right dynamic duo—Kantbot and Logodaedalus—illustrates several roles irony plays in the relationship between media and politics. Kantbot and Logodaedalus see themselves as the avant-garde of a counterculture on the brink of a civilizational shift, participating in the sudden proliferation of “decline of the West” narratives. They alternate targets on Twitter, and think of themselves as “producers of content” above all. To produce content, according to them, is to produce ideology. Kantbot is singularly obsessed with the period between about 1770 and 1830 in Germany. He thinks of this period as the source of all subsequent intellectual endeavor, the only period of real philosophy—a thesis he shares with Slavoj Žižek (Žižek 1993).

    This notion has been treated monographically by Eckart Förster in The Twenty-Five Years of Philosophy, a book Kantbot listed in May of 2017 under “current investigations.” His twist on the thesis is that German Idealism is saturated in a form of irony. German Idealism never makes culture political as such. Politics comes from a culture that’s more capacious than any politics, so any relation between the two is refracted by a deep difference that appears, when they are brought together, as irony. Marxism, and all that proceeds from Marxism, including contemporary Leftism, is a deviation from this path.


    This reading of German Idealism is a search for the metaphysical origins of a common conspiracy theory in the Breitbart wing of the Right called “cultural Marxism” (the idea predates Breitbart: see Jay 2011; Huyssen 2017; Berkowitz 2003. Walsh’s 2017 The Devil’s Pleasure Palace, which LogoDaedalus mocked to Cernovich, is one of the touchstones of this theory). Breitbart’s own account states that there is a relatively straight line from Hegel’s celebration of the state to Marx’s communism to Woodrow Wilson’s and Franklin Delano Roosevelt’s communitarianism—and on to the critical theory of Theodor W. Adorno and Herbert Marcuse (this is the actual “cultural Marxism,” one supposes), Saul Alinsky’s community organizing, and (surprise!) Barack Obama’s as well (Breitbart 2011, 105-37). The phrase “cultural Marxism” is a play on the Nazi phrase “cultural Bolshevism,” a conspiracy theory that targeted Jews as alleged spies and collaborators of Stalin’s Russia. The anti-Semitism is only slightly more concealed in the updated version. The idea is that Adorno and Marcuse took control of the cultural matrix of the United States and made the country “culturally communist.” In this theory, individual freedom is always second to an oppressive community in the contemporary US. Between Breitbart’s adoption of critical theory and NRx (see Haider 2017; Beckett 2017; Noys 2014)—not to mention the global expansion of this family of theories by figures like Carvalho—it’s clear that the “Alt Right” is a theory-deep assemblage. The theory is never just analysis, though. It’s always a question of intervention, or media manipulation (see Marwick and Lewis 2017).

    Breitbart himself liked to capture this blend in his slogan “politics is downstream from culture.” Breitbart’s news organization implicitly cedes the theoretical point to Adorno and Marcuse, trying to build cultural hegemony in the online era. Reform the cultural, dominate the politics—all on the basis of narrative and media manipulation. For the Alt Right, politics isn’t “online” or “not,” but will always be both.

    In mid-August of 2017, a memo, probably penned by staffer Rich Higgins (who reportedly has ties to Cernovich), caused a flap in the National Security Council: it appeared to accuse then-National Security Adviser H. R. McMaster of supporting or at least tolerating Cultural Marxism’s attempt to undermine Trump through narrative (see Winter and Groll 2017). Higgins and other staffers associated with the memo were fired, a fact which Trump learned from Sean Hannity and which made him “furious.” The memo, about which the president “gushed,” defines “the successful outcome of cultural Marxism [as] a bureaucratic state beholden to no one, certainly not the American people. With no rule of law considerations outside those that further deep state power, the deep state truly becomes, as Hegel advocated, god bestriding the earth” (Higgins 2017). Hegel defined the state as the goal of all social activity, the highest form of human institution or “objective spirit.” Years later, it is still Trump vs. the state, in its belated thrall to Adorno, Marcuse, and (somehow) Hegel. Politics is downstream from German Idealism.

    Kantbot’s aspiration was to expand and deepen the theory of this kind of critical manipulation of the media—but he wants to rehabilitate Hegel. In Kantbot’s work we begin to glimpse how irony plays a role in this manipulation. Irony is play with the very possibility of signification in the first place. Inflected through digital media—code and platform—it becomes not just play but its own expression of the interface between culture and politics, overlapping with one of the driving questions of the German cultural renaissance around 1800. Kantbot, in other words, diagnosed and (at least at one time) aspired to practice a particularly sophisticated combination of rhetorical and media theory as political speech in social media.

    Consider this tweet:



    After the Clinton campaign denounced its use in 2016 and the Anti-Defamation League took the extraordinary step of adding the meme to its database of hate symbols, the formerly innocuous webcomic frog Pepe gained a kind of cult status. Kantbot’s reading of the phenomenon is that the “point is demonstration of power to control meaning of sign in modern media environment.” If this sounds like French Theory, then one “Johannes Schmitt” (whose profile thumbnail appears to be an SS officer) agrees. “Starting to sound like Derrida,” he wrote. To which Kantbot responds, momentously: “*schiller.”



    The asterisk-correction contains multitudes. Kantbot is only too happy to jettison the “theory,” but insists that the manipulation of the sign in its relation to the media environment maintains and alters the balance between culture and politics. Friedrich Schiller, whose classical aesthetic theory claims just this, is a recurrent figure for Kantbot. The idea, it appears, is to create a culture that is beyond politics and from which politics can be downstream. To that end, Kantbot opened his own online venue, the “Autistic Mercury,” named after Der teutsche Merkur, one of the German Enlightenment’s central organs.[iv] For Schiller, there was a “play drive” that mediated between “form” and “content” drives. It preserved the autonomy of art and culture and had the potential to transform the political space, but only indirectly. Kantbot wants to imitate the composite culture of the era of Kant, Schiller, and Hegel—just as they built their classicism on Johann Winckelmann’s famous doctrine that an autonomous and inimitable culture must be built on imitation of the Greeks. Schiller was suggesting that art could prevent another post-revolutionary Terror like the one that had engulfed France. Kantbot is suggesting that the metaphysics of communication—signs as both rhetoric and mediation—could resurrect a cultural vitality that got lost somewhere along the path from Marx to the present. Donald Trump is the instrument of that transformation, but its full expression requires more than DC politics. It requires (online) culture of the kind the campaign unleashed but the presidency has done little more than to maintain. (Kantbot uses Schiller for his media analysis too, as we will see.) Spencer and Kantbot agreed during their “debate” that perhaps Trump had done enough before he was president to justify the disappointing outcomes of his actual presidency. Conservative policy-making earns little more than scorn from this crowd, if it is detached from the putative real work of building the Alt Right avant-garde.



    According to one commenter on YouTube, Kantbot is “the troll philosopher of the kek era.” Kek is the god of the trolls. His name comes from the massively multiplayer online role-playing game World of Warcraft: “KEK” is what the enemy faction sees when you type “LOL” to someone on your team, an intuitively crackable code that was made into an idol to worship. Kek—a half-fake demi-god—illustrates the balance between irony and ontology in the rhetorical media practice known as trolling.


    The name of the idol, it turned out, was also the name of an actual ancient Egyptian demi-god (KEK), a phenomenon that confirmed his divine status, in an example of so-called “meme magic.” Meme magic is when—often by praying to KEK or relying on a numerological system based on the random numbers assigned to users of 4Chan and other message boards—something that exists only online manifests IRL, “in real life” (Burton 2016). Examples include Hillary Clinton’s illness in the late stages of the campaign (widely and falsely rumored—e.g. by Cernovich—before a real yet minor illness was confirmed), and of course Donald Trump’s actual election. Meme magic is everywhere: it names the channel between online and offline.

    Meme magic is both drenched in irony and deeply ontological. What is meant is just “for the lulz,” while what is said is magic. This is irony of the rhetorical kind—right up until it works. The case in point is the election, where the result, and whether the trolls helped, hovers between reality and magic. First there is meme generation, usually playfully ironic. Something happens that resembles the meme. Then the irony is retroactively assigned a magical function. But statements about meme magic are themselves ironic. They use the contradiction between reality and rhetoric (between Clinton’s predicted illness and her actual pneumonia) as the generator of a second-order irony (the claim that Trump’s election was caused by memes is itself a meme). It’s tempting to see this just as a juvenile game, but we shouldn’t dismiss the way the irony scales between the different levels of content-production and interpretation. Irony is rhetorical and ontological at once. We shouldn’t believe in meme magic, but we should take this recursive ironizing function very seriously indeed. It is this kind of irony that Kantbot diagnoses in Trump’s manipulation of the media.

    ii. Coding Irony: Friedrich Schlegel, Claude Shannon, and Twitter

    The ongoing inability of the international press to cover Donald Trump in a way that measures the impact of his statements rather than their content stems from this use of irony. We’ve gotten used to fake news and hyperbolic tweets—so used to these that we’re missing the irony that’s built in. Every time Trump denies something about collusion or says something about the coal industry that’s patently false, he’s exploiting the difference between two sets of truth-valuations that conflict with one another (e.g. racism and pacifism). That splits his audience—something that the splitting of the message in irony allows—and works both to fight his “enemies” and to build solidarity in his base. Trump has changed the media’s overall expression, making not his statements but the very relation between content and platform ironic. This objective form of media irony is not to be confused with “wit.” Donald Trump is not “witty.” He is, however, a master of irony as a tool for manipulation built into the way digital media allow signification to occur. He is the master of an expanded sense of irony that runs throughout the history of its theory.

    When White Nationalists descended on Charlottesville, Virginia, on August 11, 2017, leading to the death of one counter-protester the next day, Trump dragged his feet in naming “racism.” He did, eventually, condemn the groups by name—prefacing his statements with a short consideration of the economy, a dog-whistle about what comes first (actually racism, for which “economy” has become a convenient cipher). In the interim, however, his condemnations of violence “as such” led Spencer to tweet this:

    Of course, two days later, Trump would explicitly blame the “Alt Left” for violence it did not commit. Before that, however, Spencer’s irony here relied on Trump’s previous—malicious—irony. By condemning “all” violence when only one kind of violence was at issue, Trump was attempting to split the signal of his speech. The idea was to let the racists know that they could continue through condemnation of their actions that pays lip service to the non-violent ideal of the liberal media. Spencer gleefully used the internal contradiction of Trump’s speech, calling attention to the side of the message that was supposed to be “hidden.” Even the apparently non-ironic condemnation of “both sides” exploited a contradiction not in the statement itself, but in the way it is interpreted by different outlets and political communities. Trump’s invocation of the “Alt Left” confirmed the suspicions of those on the Right, panicked the Center, and all but forced the Left to adopt the term. The filter bubbles, meanwhile, allowed this single message to deliver contradictory meanings on different news sites—one reason headlines across the political spectrum are often identical as statements, but opposite in patent intent. Making the dog whistle audible, however, doesn’t spell the “end of the ironic Nazi,” as Brian Feldman commented (Feldman 2017). It just means that the irony isn’t opposed to but instead part of the politics. Today this form of irony is enabled and constituted by digital media, and it’s not going away. It forms an irreducible part of the new political situation, one that we ignore or deny at our own peril.

    Irony isn’t just intentional wit, in other words—as Quintilian already knew. One reason we nevertheless tend to confuse wit and irony is that the expansion of irony beyond the realm of rhetoric—usually dated to Romanticism, which also falls into Kantbot’s period of obsession—made irony into a category of psychology and style. Most treatments of irony take this as an assumption: modern life is drenched in the stuff, so it isn’t “just” a trope (Behler 1990). But it is a feeling, one that you get from Weird Twitter but also from the constant stream of Facebook announcements about leaving Facebook. Quintilian already points the way beyond this gestural understanding. The problem is the source of the contradiction. It is not obvious what allows for contradiction, where it can occur, what conditions satisfy it, and thus form the basis for irony. If the source is dynamic, unstable, then the concept of irony, as Paul de Man pointed out long ago, is not really a concept at all (de Man 1996).

    The theoretician of irony who most squarely accounts for its embeddedness in material and media conditions is Friedrich Schlegel. In nearly all cases, Schlegel writes, irony serves to reinforce or sharpen some message by means of the reflexivity of language: by contradicting the point, it calls it that much more vividly to mind. (Remember when Trump said, in the 2016 debates, that he refused to invoke Bill Clinton’s sexual history for Chelsea’s sake?) But there is another, more curious type:

    The first and most distinguished [kind of irony] of all is coarse irony; to be found most often in the actual nature of things and which is one of its most generally distributed substances [in der wirklichen Natur der Dinge und ist einer ihrer allgemein verbreitetsten Stoffe]; it is most at home in the history of humanity (Schlegel 1958-, 368).





    In other words, irony is not merely the drawing of attention to formal or material conditions of the situation of communication, but also a widely distributed “substance” or capacity in material. Twitter irony finds this substance in the platform and its underlying code, as we will see. If irony is both material and rhetorical, this means that its use is an activation of a potential in the interface between meaning and matter. This could allow, in principle, an intervention into the conditions of signification. In this sense, irony is the rhetorical term for what we could call coding, the tailoring of language to channels in technologies of transmission. Twitter reproduces an irony that is built into any attempt to code language, as we are about to see. And it’s the overlap of code, irony, and politics that Kantbot marshals Hegel to address.

    Coded irony—irony that is both rhetorical and digitally enabled—exploded onto the political scene in 2016 through Twitter. Twitter was the medium through which the political element of the messageboards broke through (not least because of Trump’s nearly 60 million followers, even if nearly half of them are bots). It is far from the only politicized social medium, as a growing literature is describing (Phillips and Milner 2017; Phillips 2016; Milner 2016; Goerzen 2017). But it has been a primary site of the intimacy of media and politics over the course of 2016 and 2017, and I think that has something to do with Twitter itself, and with the relationship between encoded communications and irony.

    Take this retweet, which captures a great deal about Twitter:

    “Kim Kierkegaardashian,” or @KimKierkegaard, joined Twitter in June 2012 and has about 259,000 followers at the time of writing. The account mashes up Kardashian’s self- and brand-sales-oriented tweet style with the proto-existentialism of Søren Kierkegaard. Take, for example, an early tweet from 8 July, 2012: “I have majorly fallen off my workout-eating plan! AND it’s summer! But to despair over sin is to sink deeper into it.” The account sticks close to Kardashian’s actual tweets and Kierkegaard’s actual words. In the tweet above, from April 2017, @KimKierkegaard has retweeted Kardashian herself incidentally formulating one of Kierkegaard’s central ideas in the proprietary language of social media. “Omg” as shorthand takes the already nearly entirely secular phrase “oh my god” and collapses any trace of transcendence. The retweet therefore returns us to the opposite extreme, in which anxiety points us to the finitude of human existence in Kierkegaard. If we know how to read this, it is a performance of that other Kierkegaardian bellwether, irony.

    If you were to encounter Kardashian’s tweet without the retweet, there would be no irony at all. In the retweet, the tweet is presented as an object and resignified as its opposite. Note that this is a two-way street: until November 2009, there were no retweets. Before then, one had to type “RT” and then paste the original tweet in. Twitter responded, piloting a button that allows the re-presentation of a tweet (Stone 2009). This has vastly contributed to the sense of irony, since the speaker is also split between two sources, such that many accounts have some version of “RTs not endorsements” in their description. Perhaps political scandal is so often attached to RTs because the source as well as the content can be construed in multiple different and often contradictory ways. Schlegel would have noted that this is a case where irony swallows the speaker’s authority over it. That situation was forced into the code by the speech, not the other way around.

    I’d like to call the retweet a resignificatory device, distinct from an amplificatory one. Amplificatory signaling cannibalizes a bit of redundancy in the algorithm: the more times your video has been seen on YouTube, the more likely it is to be recommended (although the story is more complicated than that). Retweets certainly amplify the original message, but they also reproduce it under another name. They have the ability to resignify—as the “repost” function on Facebook also does, to some extent.[v] Resignificatory signaling takes the unequivocal messages at the heart of the very notion of “code” and makes them rhetorical, while retaining their visual identity. Of course, no message is without an effect on its receiver—a point that information theory made long ago. But the apparent physical identity of the tweet and the retweet forces the rhetorical aspect of the message to the fore. In doing so, it draws explicit attention to the deep irony embedded in encoded messages of any kind.

    Twitter was originally written in the object-oriented programming language and model-view-controller (MVC) framework Ruby on Rails, and the code matters. Object-oriented languages allow any term to be treated either as an object or as an expression, making Shannon’s observations on language operational.[vi] The retweet is an embedding of this ability to switch any term between these two basic functions. We can do this in language, of course (that’s why object-oriented languages are useful). But when the retweet is presented not as copy-pasted but as a visual reproduction of the original tweet, the expressive nature of the original tweet is made an object, imitating the capacity of the coding language. In other words, Twitter has come to incorporate the object-oriented logic of its programming language in its capacity to signify. At the level of speech, anything can be an object on Twitter—on your phone, you literally touch it and it presents itself. Most things can be resignified through one more touch, and if not they can be screencapped and retweeted (for example, the number of followers one has, a since-deleted tweet, etc.). Once something has come to signify in the medium, it can be infinitely resignified.
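The expression/object switch described above can be sketched as a toy model. This is illustrative only—it is not Twitter’s actual Rails code, and the `Tweet` class and `render` method are invented for the example—but it shows how a retweet treats another account’s expression as an object of its own expression:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Tweet:
    author: str
    # the body is either an expression (a string)
    # or an object (another Tweet, i.e. a retweet)
    body: Union[str, "Tweet"]

    def render(self) -> str:
        if isinstance(self.body, Tweet):
            # a retweet presents another expression as an object,
            # resignifying it under a new source
            return f"@{self.author} RT [{self.body.render()}]"
        return f"@{self.author}: {self.body}"

original = Tweet("kim", "this anxiety omg")
retweet = Tweet("KimKierkegaard", original)
print(retweet.render())  # → @KimKierkegaard RT [@kim: this anxiety omg]
```

The same utterance appears verbatim inside the retweet, but its source—and so its meaning—has changed: signification as resignification.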

    When, as in a retweet, an expression is made into an object of another expression, its meaning is altered. This is because its source is altered. A statement of any kind requires the notion that someone has made that statement. This means that a retweet, by making an expression into an object, exemplifies the contradiction between subject and object—the very contradiction on which Kant had based his revolutionary philosophy. Twitter is fitted, and has been throughout its existence retrofitted, to generalize this speech situation. It is the platform of the subject-object dialectic, as Hegel might have put it. By presenting subject and object in a single statement—the retweet as expression and object all at once—Twitter embodies what rhetorical theory has called irony since the ancients. It is irony as code. This irony resignifies and amplifies the rhetorical irony of the dog whistle, the troll, the President.

    Coding is an encounter between two sets of material conditions: the structure of a language, and the capacity of a channel. This was captured in truly general form for the first time in Claude Shannon’s famous 1948 paper, “A Mathematical Theory of Communication,” in which the following diagram is given:

    Shannon’s achievement was a general formula for the relation between the structure of the source and the noise in the channel.[vii] If the set of symbols can be fitted to signals complex or articulated enough to arrive through the noise, then nearly frictionless communication could be engineered. The source—his preferred example was written English—had a structure that limited its “entropy.” If you’re looking at one letter in English, for example, and you have to guess what the next one will be, you theoretically have 27 choices (the 26 letters plus a space). But the likelihood, if the letter you’re looking at is, for example, “q,” that the next letter will be “u” is very high. The likelihood for “x” is extremely low. The higher likelihood is called “redundancy,” a limitation on the absolute measure of chaos, or entropy, that the number of elements imposes. No source for communication can be entirely random, because without patterns of one kind or another we can’t recognize what’s being communicated.[viii]
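Shannon’s measure can be sketched in a few lines of code. This is a first-order approximation only—it counts single-letter frequencies in an arbitrary sample sentence (invented here), whereas Shannon’s own estimates of English drew on much longer-range statistics—but it shows how entropy and redundancy are computed against the 27-symbol maximum:

```python
import math
from collections import Counter

def letter_entropy(text: str) -> float:
    """First-order entropy (bits per symbol) over letters and space."""
    symbols = [c for c in text.lower() if c.isalpha() or c == " "]
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# an arbitrary sample sentence, used only for illustration
sample = ("no source for communication can be entirely random because "
          "without patterns of one kind or another we cannot recognize "
          "what is being communicated")

h = letter_entropy(sample)
h_max = math.log2(27)  # 26 letters plus space, all equally likely
redundancy = 1 - h / h_max
print(f"entropy: {h:.2f} bits/symbol, redundancy: {redundancy:.2%}")
```

Single-letter counting captures only part of the pattern; the much larger redundancy Shannon reports for English comes from conditional structure (which letters follow which), discussed next.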

    We tend to confuse entropy and the noise in the channel, and it is crucial to see that they are not the same thing. The channel is noisy, while the source is entropic. There is, of course, entropy in the channel—everything is subject to the second law of thermodynamics, without exception. But “entropy” is not in any way comparable to noise in Shannon, because “entropy” is a way of describing the conditional restraints on any structured source for communication, like the English language, the set of ideas in the brain, or what have you. Entropy is a way to describe the opposite of redundancy in the source: it expresses probability rather than the slow disintegration, the “heat death,” with which it is usually associated.[ix] If redundancy = 1, we have a kind of absolute rule or pure pattern. Redundancy works syntactically, too: “then” or “there” after the phrase “see you” is a high-level redundancy that is coded into SMS services.

    This is what Shannon calls a “conditional restraint” on the theoretical absolute entropy (based on number of total parts), or freedom in choosing a message. It is also the basis for autocorrect technologies, which obviously have semantic effects, as the genre of autocorrect bloopers demonstrates.
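The conditional restraint can itself be made concrete. The sketch below (the corpus is a toy sentence invented for the example) estimates the empirical distribution of the letter that follows “q”—the case the text uses above—which is exactly the kind of statistic a predictive keyboard exploits:

```python
from collections import Counter

def next_letter_distribution(text: str, letter: str) -> dict:
    """Empirical distribution of the symbol that follows `letter`."""
    cleaned = "".join(c for c in text.lower() if c.isalpha() or c == " ")
    following = Counter(b for a, b in zip(cleaned, cleaned[1:]) if a == letter)
    total = sum(following.values())
    return {c: n / total for c, n in following.items()}

# a toy corpus, invented for the example
corpus = ("the quick question required a quiet inquiry "
          "while the queue formed quite quickly")

dist = next_letter_distribution(corpus, "q")
print(dist)  # in this corpus, as in English generally, 'u' dominates after 'q'
```

When the conditional redundancy after a context is this high, the next symbol can be guessed with near certainty—which is why autocorrect works, and why it produces bloopers whenever the source deviates from its expected statistics.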

    A large portion of Shannon’s paper is taken up with calculating the redundancy of written English, which he determines to be nearly 50%, meaning that half the letters can be removed from most sentences or distorted without disturbing our ability to understand them.[x]

    The general process of coding, by Shannon’s lights, is a manipulation of the relationship between the structure of the source and the capacity of the channel as a dynamic interaction between two sets of evolving rules. Shannon’s statement that the “semantic aspects” of messages were “irrelevant to the engineering problem” has often been taken to mean he played fast and loose with the concept of language (see Hayles 1999; but see also Liu 2010; and for the complex history of Shannon’s reception Floridi 2010). But rarely does anyone ask exactly what Shannon did mean, or at least conceptually sketch out, in his approach to language. It’s worth pointing to the crucial role that source-structure redundancy plays in his theory, since it cuts close to Schlegel’s notion of material irony.

    Neither the source nor the channel is static. The scene of coding is open to restructuring at both ends. English is evolving; even its statistical structure changes over time. The channels, and the codes used to fit sources to them, are evolving too. There is no guarantee that integrated circuits will remain the hardware of the future. They did not yet exist when Shannon published his theory.

    This point can be hard to see in today’s world, where we encounter opaque packets of already-established code at every turn. It would have been less hard to see for Shannon and those who followed him, since nothing was standardized, let alone commercialized, in 1948. But no amount of stack accretion can change the fact that mediated communication rests on the dynamic relation between relative entropy in the source and the way the channel is built.

    Redundancy points to this dynamic by its very nature. If there is absolute redundancy, nothing is communicated, because we already know the message with 100% certainty. With no redundancy, no message arrives at all. In between these two extremes, messages are internally objectified or doubled, but differ slightly from one another, in order to be communicable. In other words, every interpretable signal is a retweet. Redundancy, which stabilizes communicability by providing pattern, also ensures that the rules are dynamic. There is no fully redundant message. Every message is between 0 and 1, and this is what allows it to function as expression or object. Twitter imitates the rules of source structure, showing that communication is the locale where formal and material constraints encounter one another. It illustrates this principle of communication by programming it into the platform as a foundational principle. Twitter exemplifies the dynamic situation of coding as Shannon defined it. Signification is resignification.

    If rhetoric is embedded this deeply into the very notion of code, then it must possess the capacity to change the situation of communication, as Schlegel suggested. But it cannot do this by fiat or by meme magic. The retweeted “this anxiety omg” hardly stands to change the statistical structure of English much. It can, however, point to the dynamic material condition of mediated signification in general, something Warren Weaver, who wrote a popularizing introduction to Shannon’s work, acknowledged:

    anyone would agree that the probability is low for such a sequence of words as “Constantinople fishing nasty pink.” Incidentally, it is low, but not zero; for it is perfectly possible to think of a passage in which one sentence closes with “Constantinople fishing,” and the next begins with “Nasty pink.” And we might observe in passing that the unlikely four-word sequence under discussion has occurred in a single good English sentence, namely the one above. (Shannon and Weaver 1964, 11)

    There is no further reflection in Weaver’s essay on this passage, but then, that is the nature of irony. By including the phrase “Constantinople fishing nasty pink” in the English language, Weaver has shifted its entropic structure, however slightly. This shift is marginal to our ability to communicate (I am amplifying it very slightly right now, as all speech acts do), but some shifts are larger-scale, like the introduction of a word or concept, or the rise of a system of notions that orient individuals and communities (ideology). These shifts always have the characteristic that Weaver points to here, which is that they double as expressions and objects. This doubling is a kind of generalized redundancy—or capacity for irony—built into semiotic systems, material irony flashing up into the rhetorical irony it enables. That is a Romantic notion enshrined in a founding document of the digital age.

    Now we can see one reason that retweeting is often the source of scandal. A retweet or repetition of content ramifies the original redundancy of the message and fragments the message’s effect. This is not to say it undermines that effect. Instead, it uses the redundancy in the source and the noise in the channel to split the message according to any one of the factors that Quintilian announced: speaker, audience, context. In the retweet, this effect is distributed across more than one of these areas, producing more than one contrary item, or internally multiple irony. Take Trump’s summer 2016 tweet of this anti-Semitic attack on Clinton—not a proper retweet, but a resignification of the same sort:



    The scandal that ensued mostly involved the source of the original content (white supremacists), and Trump skated through the incident by claiming that it wasn’t anti-Semitic anyway, that it was a sheriff’s star, and that he had only “retweeted” the content. In disavowing the content in separate and seemingly contradictory ways,[xi] he signaled to his base that he was still committed to it, while maintaining at the level of statement that he wasn’t. The effect was repeated again and again, and is a fundamental part of our government now. Trump’s positions are neither new nor interesting. What’s new is the way he amplifies his rhetorical maneuvers in social media. It is the exploitation of irony—not wit, not snark, not sarcasm—at the level of redundancy to maintain a signal that is internally split in multiple ways. This is not bad faith or stupidity; it’s an invasion of politics by irony. It’s also a kind of end to the neoliberal speech regime.

    iii. Irony and Politics after 2016, or Uncommunicative Capitalism

    The channel between speech and politics is open—again. That channel is saturated in irony, of a kind we are not used to thinking about. In 2003, following what were widely billed as the largest demonstrations in the history of the world, with tens of millions gathering in the streets globally to resist the George W. Bush administration’s stated intent to go to war, the United States did just that, invading Iraq on 20 March of that year. The consequences of that war have yet to be fully assessed. But while it is clear that we are living in its long foreign policy shadow, the seemingly momentous events of 2016 echo 2003 in a different way. 2016 was the year that blew open the neoliberal pax between the media, speech, and politics.

    No amount of noise could prevent the invasion of Iraq. As Jodi Dean has shown, “communicative capitalism” ensured that the circulation of signs was autotelic, proliferating language and ideology sealed off from the politics of events like war or even domestic policy. She writes that:

    In communicative capitalism, however, the use value of a message is less important than its exchange value, its contribution to a larger pool, flow or circulation of content. A contribution need not be understood; it need only be repeated, reproduced, forwarded. Circulation is the context, the condition for the acceptance or rejection of a contribution… Some contributions make a difference. But more significant is the system, the communicative network. (Dean 2005, 56)

    This situation no longer entirely holds. Dean’s brilliant analysis—along with those of many others who diagnosed the situation of media and politics in neoliberalism (e.g. Fisher 2009; Liu 2004)—forms the basis for understanding what we are living through and in now, even as the situation has changed. The notion that the invasion of Iraq could have been stopped by the protests recalls the optimism about speech’s effect on national politics of the New Left in the 1960s and after (raising the important question of whether the parallel protests against the Vietnam War played a causal role in its end). That model of speech is no longer entirely in force. Dean’s notion of a kind of metastatic media with few if any contributions that “make a difference” politically has yielded to a concerted effort to break through that isolation, to manipulate the circulatory media to make a difference. We live with communicative capitalism, but added to it is the possibility of complex rhetorical manipulation, a political possibility that resides in the irony of the very channels that made capitalism communicative in the first place.

    We know that authoritarianism engages in a kind of double-speak, talks out of “both sides of its mouth,” uses the dog whistle. It might be unusual to think of this set of techniques as irony—but I think we have to. Trump doesn’t just dog-whistle, he sends cleanly separate messages to differing effect through the same statement, as he did after Charlottesville. This technique keeps the media he is so hostile to on the hook, since their click rates are dependent on covering whatever extreme statement he’d made that day. The constant and confused coverage this led to was then a separate signal sent through the same line—by means of the contradiction between humility and vanity, and between content and effect—to his own followers. In other words, he doesn’t use Twitter only to amplify his message, but to resignify it internally. Resignificatory media allows irony to create a vector of efficacy through political discourse. That is not exactly “communicative capitalism,” but something more like the field-manipulations recently described by Johanna Drucker: affective, indirect, non-linear (Drucker 2018). Irony happens to be the tool that is not instrumental, a non-linear weapon, a kind of material-rhetorical wave one can ride but not control. As Quinn Slobodian has been arguing, we have in no way left the neoliberal era in economics. But perhaps we have left its speech regime behind. If so, that is a matter of strategic urgency for the Left.

    iv. Hegelian Media Theory

    The new Right is years ahead on this score, in practice but also in analysis. In one of the first pieces in what has become a truly staggering wave of coverage of the NRx movement, Rosie Gray interviewed Kantbot extensively (Gray 2017). Gray’s main target was the troll Mencius Moldbug (Curtis Yarvin) whose political philosophy blends the Enlightenment absolutism of Frederick the Great with a kind of avant-garde corporatism in which the state is run not on the model of a corporation but as a corporation. On the Alt Right, the German Enlightenment is unavoidable.

    In his prose, Kantbot can be quite serious, even theoretical. He responded to Gray’s article in a Medium post with a long quotation from Schiller’s 1784 “The Theater as Moral Institution” as its epigraph (Kantbot 2017b). For Schiller, one had to imitate the literary classics to become inimitable. And he thought the best means of transmission would be the theater, with its live audience and electric atmosphere. The Enlightenment theater, as Kantbot writes, “was not only a source of entertainment, but also one of radical political education.”

    Schiller argued that the stage educated more deeply than secular law or morality, that its horizon extended farther into the true vocation of the human. Culture educates where the law cannot. Schiller, it turns out, also thought that politics is downstream from culture. Kantbot finds, in other words, a source in Enlightenment literary theory for Breitbart’s signature claim. That means that narrative is crucial to political control. But Kantbot extends the point from narrative to the medium in which narrative is told.

    Schiller gives us reason to think that the arrangement of the medium—its physical layout, the possibilities but also the limits of its mechanisms of transmission—is also crucial to cultural politics (this is why it makes sense to him to replace a follower’s reference to Derrida with “*schiller”). He writes that “The theater is the common channel through which the light of wisdom streams down from the thoughtful, better part of society, spreading thence in mild beams throughout the entire state.” Story needs to be embedded in a politically effective channel, and politically-minded content-producers should pay attention to the way that channel works, what it can do that another means of communication—say, the novel—can’t.

    Kantbot argues that social media is the new Enlightenment Stage. When Schiller writes that the stage is the “common channel” for light and wisdom, he’s using what would later become Shannon’s term—in German, der Kanal. Schiller thought the channel of the stage was suited to tempering barbarisms (both unenlightened “savagery” and post-enlightened Terrors like Robespierre’s). For him, story in the proper medium could carry information and shape habits and tendencies, influencing politics indirectly, eventually creating an “aesthetic state.” That is the role that social media have today, according to Kantbot. In other words, the constraints of a putatively biological gender or race are secondary to their articulation through the utterly complex web of irony-saturated social media. Those media allow the categories in the first place, but are so complex as to impose their own constraint on freedom. For those on the Alt Right, accepting and overcoming that constraint is the task of the individual—even if it is often assigned mostly to non-white or non-male individuals, while white males achieve freedom through complaint. Consistency aside, however, the notion that media form their own constraint on freedom, and the tool for accepting and overcoming that constraint is irony, runs deep.

    Kantbot goes on to use Schiller to critique Gray’s actual article about NRx: “Though the Altright [sic] is viewed primarily as a political movement, a concrete ideology organizing an array of extreme political positions on the issues of our time, I believe that understanding it is a cultural phenomena [sic], rather than a purely political one, can be an equally valuable way of conceptualizing it. It is here that the journos stumble, as this goes directly to what newspapers and magazines have struggled to grasp in the 21st century: the role of social media in the future of mass communication.” It is Trump’s retrofitting of social media—and now the mass media as well—to his own ends that demonstrates, and therefore completes, the system of German Idealism. Content production on social media is political because it is the locus of the interface between irony and ontology, where meme magic also resides. This allows the Alt Right to sync what we have long taken to be a liberal form of speech (irony) with extremist political commitments that seem to conflict with the very rhetorical gesture. Misogyny and racism have re-entered the public sphere. They’ve done so not in spite of but with the explicit help of ironic manipulations of media.

    The trolls sync this transformation of the media with misogynist ontology. Both are construed as constraints in the forward march of Trump, Kek, and culture in general. One disturbing version of the essentialist suggestion for understanding how Trump will complete the system of German Idealism comes from one “Jef Costello” (a troll named for Alain Delon’s character in the 1967 film Le Samouraï):

    Ironically, Hegel himself gave us the formula for understanding exactly what must occur in the next stage of history. In his Philosophy of Right, Hegel spoke of freedom as “willing our determination.” That means affirming the social conditions that make the array of options we have to choose from in life possible. We don’t choose that array, indeed we are determined by those social conditions. But within those conditions we are free to choose among certain options. Really, it can’t be any other way. Hegel, however, only spoke of willing our determination by social conditions. Let us enlarge this to include biological conditions, and other sorts of factors. As Collin Cleary has written: Thus, for example, the cure for the West’s radical feminism is for the feminist to recognize that the biological conditions that make her a woman—with a woman’s mind, emotions, and drives—cannot be denied and are not an oppressive “other.” They are the parameters within which she can realize who she is and seek satisfaction in life. No one can be free of some set of parameters or other; life is about realizing ourselves and our potentials within those parameters.

    As Hegel correctly saw, we are the only beings in the universe who seek self-awareness, and our history is the history of our self-realization through increased self-understanding. The next phase of history will be one in which we reject liberalism’s chimerical notion of freedom as infinite, unlimited self-determination, and seek self-realization through embracing our finitude. Like it or not, this next phase in human history is now being shepherded by Donald Trump—as unlikely a World-Historical Individual as there ever was. But there you have it. Yes! Donald Trump will complete the system of German Idealism. (Costello 2017)

    Note the regular features of this interpretation: it is a nature-forward argument about social categories, universalist in application, misogynist in structure, and ultra-intellectual. Constraint is shifted not only from the social into the natural, but also back into the social again. The poststructuralist phrase “embracing our finitude” (put into the emphatic italics of Theory) underscores the reversal from semiotics to ontology by way of German Idealism. Trump, it seems, will help us realize our natural places in an old-world order even while pushing the vanguard trolls forward into the utopian future. In contrast to Kantbot’s own content, this reading lacks irony. That is not to say that the anti-Gender Studies and generally viciously misogynist agenda of the Alt Right is not being amplified throughout the globe, as we increasingly hear. But this dry analysis lacks the manipulative capacity that understanding social media in German Idealist terms brings with it. It does not resignify.

    Costello’s understanding is crude compared with that of Kantbot himself. The constraints, for Kantbot, are not primarily those of a naturalized gender, but instead the semiotic or rhetorical structure of the media through which any naturalization flows. The media are not likely, in this vision, to end any gender regimes—but recognizing that such regimes are contingent on representation and the manipulation of signs has never been the sole property of the Left. That manipulation implies a constrained, rather than an absolute, understanding of freedom. This constraint is an important theoretical element of the Alt Right, and in some sense they are correct to call on Hegel for it. Their thinking wavers—again, ironically—between essentialism about things like gender and race, and an understanding of constraint as primarily constituted by the media.

    Kantbot mixes his andrism and his media critique seamlessly. The trolls have some of their deepest roots in internet misogyny, including so-called Men’s Rights Activism and the hashtag #redpill. The red pill that Neo takes in The Matrix to exit the collective illusion is here compared to “waking up” from the “culturally Marxist” feminism that inflects the putative communism that pervades contemporary US culture. Here is Kantbot’s version:

    The tweet elides any difference between corporate diversity culture and the Left feminism that would also critique it, but that is precisely the point. Irony does not undermine (it rather bolsters) serious misogyny. When Angela Nagle’s book, Kill All Normies: Online Culture Wars from 4Chan and Tumblr to Trump and the Alt-Right, touched off a seemingly endless Left-on-Left hot-take war, Kantbot responded with his own review of the book (since taken down). This review contains a plea for a “nuanced” understanding of Elliot Rodger, who killed six people in Southern California in 2014 as “retribution” for women rejecting him sexually.[xii] We can’t allow (justified) disgust at this kind of content to blind us to the ongoing irony—not jokes, not wit, not snark—that enables this vile ideology. In many ways, the irony that persists in the heart of this darkness allows Kantbot and his ilk to take the Left more seriously than the Left takes the Right. Gender is a crucial, but hardly the only, arena in which the Alt Right’s combination of essentialist ontology and media irony is fighting the intellectual Left.

    In the sub-subculture known as Men Going Their Own Way, or MGTOW, the term “volcel” came to prominence in recent years. “Volcel” means “voluntarily celibate,” or entirely ridding one’s existence of the need for or reliance on women. The trolls responded to this term with the notion of an “incel,” someone “involuntarily celibate,” in a characteristically self-deprecating move. Again, this is irony: none of the trolls actually want to be celibate, but they claim a kind of joy in signs by recoding the ridiculous bitterness of the Volcel.

    Literalizing the irony already partly present in this discourse, sometime in the fall of 2016 the trolls started calling the Left—in particular the members of the podcast team Chapo Trap House and the journalist and cultural theorist Sam Kriss (since accused of sexual harassment)—“ironycels.” The precise definition wavers, but seems to be that the Leftists are failures at irony, “irony-celibate,” even “involuntarily incapable of irony.”

    Because the original phrase is split between voluntary and involuntary, this has given rise to reappropriations, for example Kriss’s, in which “doing too much irony” earns you literal celibacy.

    Kantbot has commented extensively, both in articles and on podcasts, on this controversy. He and Kriss have even gone head-to-head.[xiii]




    In the ironycel debate, it has become clear that Kantbot thinks that socialism has kneecapped the Left, but only sentimentally. The same goes for actual conservatism, which has prevented the Right from embracing its new counterculture. Leaving behind old ideologies is a symptom of standing at the vanguard of a civilizational shift. It is that shift that makes sense of the phrase “Trump will Complete the System of German Idealism.”

    The Left, LogoDaedalus intoned on a podcast, is “metaphysically stuck in the Bush era.” I take this to mean that the Left is caught in an endless cycle of recriminations about the neoliberal model of politics, even as that model has begun to become outdated. Kantbot writes, in an article called “Chapo Traphouse Will Never Be Edgy”:

    Capturing the counterculture changes nothing, it is only by the diligent and careful application of it that anything can be changed. Not politics though. When political ends are selected for aesthetic means, the mismatch spells stagnation. Counterculture, as part of culture, can only change culture, nothing outside of that realm, and the truth of culture which is to be restored and regained is not a political truth, but an aesthetic one involving the ultimate truth value of the narratives which pervade our lived social reality. Politics are always downstream. (Kantbot 2017a)

    Citing Breitbart’s motto, Kantbot argues that continents of theory separate him and LogoDaedalus from the Left. That politics is downstream from culture is precisely what Marx—and by extension, the contemporary Left—could not understand. On several recent podcasts, Kantbot has made just this argument, that the German Enlightenment struck a balance between the “vitality of aesthetics” and political engagement that the Left lost in the generation after Hegel.

    Kantbot has decided, against virtually every Hegel reader since Hegel and even against Hegel himself, that the system of German Idealism is ironic in its deep structure. It’s not a move we can afford to take lightly. This irony, generalized as Schlegel would have it, manipulates the formal and meta settings of communicative situations and thus is at the incipient point of any solidarity. It gathers community through mediation even as it rejects those not in the know. It sits at the membrane of the filter bubble, and—correctly used—has the potential to break or reform the bubble. To be clear, I am not saying that Kantbot has done this work. It is primarily Donald Trump, according to Kantbot’s own argument, who has done this work. But this is exactly what it means to play Hegel to Trump’s Napoleon: to provide the metaphysics for the historical moment, which happens to be the moment where social media and politics combine. Philosophy begins only after an early-morning sleepless tweetstorm once again determines a news cycle. Irony takes its proper place, as Schlegel had suggested, in human history, becoming a political weapon meant to manipulate communication.

    Kantbot was the media theorist of Trump’s ironic moment. The channeling of affect is irreducible, but not unchangeable: this is both the result of some steps we can only wish we’d taken in theory and used in politics before the Alt Right got there, and the actual core of what we might call Alt Right Media Theory. When they say “the Left can’t meme,” in other words, they’re accusing the socialist Left of being anti-intellectual about the way we communicate now, about the conditions and possibilities of social media’s amplifications of the capacity called irony that is baked into cognition and speech so deeply that we can barely define it even partially. That would match the sense of medium we get from looking at Shannon again, and the raw material possibility with which Schlegel infused the notion of irony.

    This insight, along with its political activation, might have been the preserve of Western Marxism or the other critical theories that succeeded it. Why have we allowed the Alt Right to pick up our tools?

    Kantbot takes obvious pleasure in the irony of using poststructuralist tools, and claiming in a contrarian way that they really derive from a broadly construed German Enlightenment that includes Romanticism and Idealism. Irony constitutes both that Enlightenment itself, on this reading, and the attitude towards it on the part of the content-producers, the German Idealist Trolls. It doesn’t matter if Breitbart was right about the Frankfurt School, or if the Neoreactionaries are right about capitalism. They are not practicing what Hegel called “representational thinking,” in which the goal is to capture a picture of the world that is adequate to it. They are practicing a form of conceptual thinking, which in Hegel’s terms is that thought that is embedded in, constituted by, and substantially active within the causal chain of substance, expression, and history.[xiv] That is the irony of Hegel’s reincarnation after the end of history.

    In media analysis and rhetorical analysis, we often hear the word “materiality” used as a substitute for durability, something that is not easy to manipulate. What is material, it is implied, is a stabilizing factor that allows us to understand the field of play in which signification occurs. Dean’s analysis of the Iraq War does just this, showing the relationship of signs and politics that undermines the aspirational content of political speech in neoliberalism. It is a crucial move, and Dean’s analysis remains deeply informative. But its type—and even the word “material,” used in this sense—is, not to put too fine a point on it, neo-Kantian: it seeks conditions and forms that undergird spectra of possibility. To this the Alt Right has lodged a Hegelian eppur si muove, borrowing techniques that were developed by Marxists and poststructuralists and German Idealists, and remaking the world of mediated discourse. That is a political emergency in which the humanities have a special role to play—but only if we can dispense with political and academic in-fighting and turn our focus to our opponents. What Mark Fisher once called the “Vampire castle” of the Left on social media is its own kind of constraint on our progress (Fisher 2013). One solvent for it is irony in the expanded field of social media—not jokes, not snark, but dedicated theoretical investigation and exploitation of the rhetorical features of our systems of communication. The situation of mediated communication is part of the objective conjuncture of the present, one that the humanities and the Left cannot afford to ignore, and cannot avoid by claiming not to participate. The alternative to engagement is to cede the understanding, and quite possibly the curve, of civilization, to the global Alt Right.

    _____

    Leif Weatherby is Associate Professor of German and founder of the Digital Theory Lab at NYU. He is working on a book about cybernetics and German Idealism.


    _____

    Notes
    [i] Video here. The comment thread on the video generated a series of unlikely slogans for 2020: “MAKE TRANSCENDENTAL IDENTITY GREAT AGAIN,” “Make German Idealism real again,” and the ideological non sequitur “Make dialectical materialism great again.”

    [ii] Neiwert (2017) tracks the rise of extreme Right violence and media dissemination from the 1990s to the present, and is particularly good on the ways in which these movements engage in complex “double-talk” and meta-signaling techniques, including irony in the case of the Pepe meme.

    [iii] I’m going to use this term throughout, and refer readers to Chip Berlet’s useful resource: I’m hoping this article builds on a kind of loose consensus that the Alt Right “talks out of both sides of its mouth,” perhaps best crystallized in the term “dog whistle.” Since 2016, we’ve seen a lot of regular whistling, bigotry without disguise, alongside the rise of the type of irony I’m analyzing here.

    [iv] There is, in this wing of the Online Right, a self-styled “autism” that stands for being misunderstood and isolated.

    [v] Thanks to Moira Weigel for a productive exchange on this point.

    [vi] See the excellent critique of object-oriented ontologies on the basis of their similarities with object-oriented programming languages in Galloway 2013. Irony is precisely the condition that does not reproduce code representationally, but instead shares a crucial condition with it.

    [vii] The paper is a point of inspiration and constant return for Friedrich Kittler, who uses this diagram to demonstrate the dependence of culture on media, which, as his famous quip goes, “determine our situation.” Kittler 1999, xxxix.

    [viii] This kind of redundancy is conceptually separate from signal redundancy, like the strengthening or reduplicating of electrical impulses in telegraph wires. The latter redundancy is likely the first that comes to mind, but it is not the only kind Shannon theorized.

    [ix] This is because Shannon adopts Ludwig Boltzmann’s probabilistic formula for entropy. The formula certainly suggests the slow simplification of material structure, but this is irrelevant to the communications engineering problem, which exists only so long as there are the very complex structures called humans and their languages and communications technologies.

    [x] Shannon presented these findings at one of the later Macy Conferences, the symposia that founded the movement called “cybernetics.” For an excellent account of what Shannon called “Printed English,” see Liu 2010, 39-99.

    [xi] The disavowal follows Freud’s famous “kettle logic” fairly precisely. In describing disavowal of unconscious drives unacceptable to the ego and its censor, Freud used the example of a friend who returns a borrowed kettle broken, and goes on to claim that 1) it was undamaged when he returned it, 2) it was already damaged when he borrowed it, and 3) he never borrowed it in the first place. Zizek often uses this logic to analyze political events, as in Zizek 2005. Its ironic structure usually goes unremarked.

    [xii] Kantbot, “Angela Nagle’s Wild Ride,” http://thermidormag.com/angela-nagles-wild-ride/, visited August 15, 2017—link currently broken.

    [xiii] Kantbot does in fact write fiction, almost all of which is science-fiction-adjacent retoolings of narrative from German Classicism and Romanticism. The best example is his reworking of E.T.A. Hoffmann’s “A New Year’s Eve Adventure,” “Chic Necromancy,” Kantbot 2017c.

    [xiv] I have not yet seen a use of Louis Althusser’s distinction between representation and “theory” (which relies on Hegel’s distinction) on the Alt Right, but it matches their practice quite precisely.

    _____

    Works Cited

    • Beckett, Andy. 2017. “Accelerationism: How a Fringe Philosophy Predicted the Future We Live In.” The Guardian (May 11).
    • Behler, Ernst. 1990. Irony and the Discourse of Modernity. Seattle: University of Washington.
    • Berkowitz, Bill. 2003. “ ‘Cultural Marxism’ Catching On.” Southern Poverty Law Center.
    • Breitbart, Andrew. 2011. Righteous Indignation: Excuse Me While I Save the World! New York: Hachette.
    • Burton, Tara. 2016. “Apocalypse Whatever: The Making of a Racist, Sexist Religion of Nihilism on 4chan.” Real Life Mag (Dec 13).
    • Costello, Jef. 2017. “Trump Will Complete the System of German Idealism!” Counter-Currents Publishing (Mar 10).
    • de Man, Paul. 1996. “The Concept of Irony.” In de Man, Aesthetic Ideology. Minneapolis: University of Minnesota. 163-185.
    • Dean, Jodi. 2005. “Communicative Capitalism: Circulation and the Foreclosure of Politics.” Cultural Politics 1:1. 51-74.
    • Drucker, Johanna. 2018. The General Theory of Social Relativity. Vancouver: The Elephants.
    • Feldman, Brian. 2017. “The ‘Ironic’ Nazi is Coming to an End.” New York Magazine.
    • Fisher, Mark. 2009. Capitalist Realism: Is There No Alternative? London: Zer0.
    • Fisher, Mark. 2013. “Exiting the Vampire Castle.” Open Democracy (Nov 24).
    • Floridi, Luciano. 2010. Information: A Very Short Introduction. Oxford: Oxford.
    • Galloway, Alexander. 2013. “The Poverty of Philosophy: Realism and Post-Fordism.” Critical Inquiry 39:2. 347-66.
    • Goerzen, Matt. 2017. “Notes Towards the Memes of Production.” texte zur kunst (Jun).
    • Gray, Rosie. 2017. “Behind the Internet’s Dark Anti-Democracy Movement.” The Atlantic (Feb 10).
    • Haider, Shuja. 2017. “The Darkness at the End of the Tunnel: Artificial Intelligence and Neoreaction.” Viewpoint Magazine.
    • Hayles, N. Katherine. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.
    • Higgins, Richard. 2017. “POTUS and Political Warfare.” National Security Council Memo.
    • Huyssen, Andreas. 2017. “Breitbart, Bannon, Trump, and the Frankfurt School.” Public Seminar (Sep 28).
    • Jay, Martin. 2011. “Dialectic of Counter-Enlightenment: The Frankfurt School as Scapegoat of the Lunatic Fringe.” Salmagundi 168/169 (Fall 2010-Winter 2011). 30-40. Excerpt at Canisa.Org.
    • Kantbot (as Edward Waverly). 2017a. “Chapo Traphouse Will Never Be Edgy.”
    • Kantbot. 2017b. “All the Techcomm Blogger’s Men.” Medium.
    • Kantbot. 2017c. “Chic Necromancy.” Medium.
    • Kittler, Friedrich. 1999. Gramophone, Film, Typewriter. Translated by Geoffrey Winthrop-Young and Michael Wutz. Stanford: Stanford University Press.
    • Liu, Alan. 2004. “Transcendental Data: Toward a Cultural History and Aesthetics of the New Encoded Discourse.” Critical Inquiry 31:1. 49-84.
    • Liu, Lydia. 2010. The Freudian Robot: Digital Media and the Future of the Unconscious. Chicago: University of Chicago Press.
    • Marwick, Alice and Rebecca Lewis. 2017. “Media Manipulation and Disinformation Online.” Data & Society.
    • Milner, Ryan. 2016. The World Made Meme: Public Conversations and Participatory Media. Cambridge: MIT.
    • Neiwert, David. 2017. Alt-America: The Rise of the Radical Right in the Age of Trump. New York: Verso.
    • Noys, Benjamin. 2014. Malign Velocities: Accelerationism and Capitalism. London: Zer0.
    • Phillips, Whitney and Ryan M. Milner. 2017. The Ambivalent Internet: Mischief, Oddity, and Antagonism Online. Cambridge: Polity.
    • Phillips, Whitney. 2016. This is Why We Can’t Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture. Cambridge: The MIT Press.
    • Quintilian. 1920. Institutio Oratoria, Book VIII, section 6, 53-55.
    • Schlegel, Friedrich. 1958–. Kritische Friedrich-Schlegel-Ausgabe. Vol. II. Edited by Ernst Behler, Jean Jacques Anstett, and Hans Eichner. Munich: Schöningh.
    • Shannon, Claude, and Warren Weaver. 1964. The Mathematical Theory of Communication. Urbana: University of Illinois Press.
    • Stone, Biz. 2009. “Retweet Limited Rollout.” Press release. Twitter (Nov 6).
    • Walsh, Michael. 2017. The Devil’s Pleasure Palace: The Cult of Critical Theory and the Subversion of the West. New York: Encounter Books.
    • Winter, Jana and Elias Groll. 2017. “Here’s the Memo that Blew Up the NSC.” Foreign Policy (Aug 10).
    • Žižek, Slavoj. 1993. Tarrying with the Negative: Kant, Hegel and the Critique of Ideology. Durham: Duke, 1993.
    • Žižek, Slavoj. 2005. Iraq: The Borrowed Kettle. New York: Verso.

     

  • Michelle Moravec — The Endless Night of Wikipedia’s Notable Woman Problem


    Michelle Moravec

    Millions of the sex whose names were never known beyond the circles of their own home influences have been as worthy of commendation as those here commemorated. Stars are never seen either through the dense cloud or bright sunshine; but when daylight is withdrawn from a clear sky they tremble forth. (Hale 1853, ix)

    As this poetic quote from Sarah Josepha Hale, the nineteenth-century author and influential editor, reminds us, context is everything. The challenge, if we wish to write women back into history via Wikipedia, is to figure out how to shift the frame of reference so that our stars can shine, since the question of who precisely is worthy of commemoration so often seems to exclude women. This essay takes on one of the "tests" used to determine whether content is worthy of inclusion in Wikipedia, notability, to explore how this purportedly neutral concept works against efforts to create entries about female historical figures.

    According to Wikipedia's notability guideline, a subject is considered notable if it "has received significant coverage in reliable sources that are independent of the subject" ("Wikipedia:Notability" 2017). To a historian of women, the gender biases implicit in these criteria are immediately recognizable: for most of written history, women were de facto considered unworthy of consideration (Smith 2000). Unsurprisingly, studies have pointed to varying degrees of bias in Wikipedia's coverage of female figures compared to male ones. One study of Encyclopedia Britannica and Wikipedia concluded,

    Overall, we find evidence of gender bias in Wikipedia coverage of biographies. While Wikipedia’s massive reach in coverage means one is more likely to find a biography of a woman there than in Britannica, evidence of gender bias surfaces from a deeper analysis of those articles each reference work misses. (Reagle and Rhue 2011)

    Five years later, another study found that this bias had persisted: women constituted only 15.5 percent of the biographical entries on the English Wikipedia, and for women born prior to the twentieth century the problem of exclusion was wildly exacerbated by "sourcing and notability issues" ("Gender Bias on Wikipedia" 2017).

    One potential source for buttressing the case for notable women has been identified by literary scholar Alison Booth, who catalogued more than 900 volumes of prosopography published during what might be termed the heyday of the genre, 1830-1940, when the rise of the middle class and increased literacy combined with relatively cheap book production to make such volumes both practicable and popular (Booth 2004). Booth also points out, lest we consign the genre to the realm of mere curiosity, that the volumes were "indispensable aids in the formation of nationhood" (Booth 2004, 3).

    To reveal the historical contingency of the purportedly neutral criterion of notability, I utilized longitudinal data compiled by Booth, which reveal that notability has never been the stable concept Wikipedia's standards take it to be. Since notability alone cannot explain which women make it into Wikipedia, I then turn to a methodology first put forth by historian Mary Ritter Beard in her critique of the Encyclopedia Britannica to identify missing entries (Beard 1977). Utilizing Notable American Women as a reference corpus, I calculated the rate of inclusion of individual women from those volumes in Wikipedia (Boyer and James 1971). In this essay I extend that analysis to consider the difference between notability and notoriety: one might be well known while remaining relatively unimportant from a historical perspective. Wikipedia collapses such distinctions, treating a body of writing about a historical subject as prima facie evidence of notability.

    While inclusion in Notable American Women does not necessarily translate into presence in Wikipedia, looking at the categories of women with higher rates of inclusion offers insights into how female historical figures do succeed in Wikipedia. My analysis suggests that the criterion of notability restricts the women who succeed in obtaining Wikipedia pages to those who mirror the "Great Man Theory" of history (Mattern 2015) or who are "notorious" (Lerner 1975).

    Alison Booth has compiled a list of the most frequently mentioned women in a subset of female prosopographical volumes and has tracked their frequency over time (2004, 394–396). She made this data available on the web, allowing for the creation of Figure 1, which focuses on the inclusion of US historical figures in volumes published from 1850 to 1930.

    Figure 1. US women by publication date of books that included them (image source: author)

    This chart clarifies what historians already know: notability is historically specific and contingent. For example, Mary Washington, mother of the first president, is notable in the nineteenth century but not in the twentieth. She drops off because over time, motherhood alone ceases to be seen as a significant contribution to history.  Wives of presidents remain quite popular, perhaps because they were at times understood as playing an important political role, so Mary Washington’s daughter-in-law Martha still appears in some volumes in the latter period. A similar pattern may be observed for foreign missionary Anne Hasseltine Judson in the twentieth century.  The novelty of female foreign missionaries like Judson faded as more women entered the field.  Other figures, like Laura Bridgman, “the first deaf-blind American child to gain a significant education in the English language,” were supplanted by later figures in what might be described as the “one and done” syndrome, where only a single spot is allotted for a specific kind of notable woman (“Laura Bridgman” 2017). In this case, Bridgman likely fell out of favor as Helen Keller’s fame rose.

    Although their notability changed over time, all the women depicted in figure 1 have Wikipedia pages; this is unsurprising as they were among the most mentioned women in the sort of volumes Wikipedia considers “reliable sources.” But what about more contemporary examples?  Does inclusion in a relatively recent work that declares women as notable mean that these women would meet Wikipedia’s notability standards? To answer this question, I relied on a methodology of calculating missing biographies in Wikipedia, utilizing a reference corpus to identify women who might reasonably be expected to appear in Wikipedia and to calculate the percentage that do not. Working with the digitized copy of Notable American Women in the Women and Social Movements database, I compiled a missing biographies quotient for individuals in selected sections of the “classified list of biographies” that appear at the end of the third volume of Notable American Women. The eleven categories with no missing entries offer some insights into how women do succeed in Wikipedia (Table 1).

    Classification % missing
    Astronomers 0
    Biologists 0
    Chemists & Physicists 0
    Heroines 0
    Illustrators 0
    Indian Captives 0
    Naturalists 0
    Psychologists 0
    Sculptors 0
    Wives of Presidents 0

    Table 1. Classifications from Notable American Women with no missing biographies in Wikipedia

    Characteristics that are highly predictive of success in Wikipedia for women include association with a powerful man, as with the wives of presidents, and recognition in a male-dominated field of science, social science, or art. Extraordinary women, such as heroines, and those who are quite rare, such as Indian captives, also have a greater chance of success in Wikipedia.[1]

    Further analysis of the classifications with greater proportions of missing women reflects Gerda Lerner’s complaint that the history of notable women is the story of exceptional or deviant women (Lerner 1975).  “Social worker,” which has the highest percentage of missing biographies at 67%, illustrates that individuals associated with female-dominated endeavors are less likely to be considered notable unless they rise to a level of exceptionalism (Table 2).

    Name Included?
    Dinwiddie, Emily Wayland no
    Glenn, Mary Willcox Brown no
    Kingsbury, Susan Myra no
    Lothrop, Alice Louise Higgins no
    Pratt, Anna Beach no
    Regan, Agnes Gertrude no
    Breckinridge, Sophonisba Preston page
    Richmond, Mary Ellen page
    Smith, Zilpha Drew stub

    Table 2. Social Workers from Notable American Women by inclusion in Wikipedia

    Sophonisba Preston Breckinridge’s Wikipedia entry describes her as “an American activist, Progressive Era social reformer, social scientist and innovator in higher education” who was also “the first woman to earn a Ph.D. in political science and economics then the J.D. at the University of Chicago, and she was the first woman to pass the Kentucky bar” (“Sophonisba Breckinridge” 2017). While the page points out that “She led the process of creating the academic professional discipline and degree for social work,” her page is not linked to the category of American social workers (“Category:American Social Workers” 2015).  If a female historical figure isn’t as exceptional as Breckinridge, she needs to be a “first” like Mary Ellen Richmond who makes it into Wikipedia as the  “social work pioneer” (“Mary Richmond” 2017).

    This conclusion that being a “first” facilitates success in Wikipedia is supported by analysis of the classification of nurses. Of the ten nurses who have Wikipedia entries, 80% are credited with some sort of temporally marked achievement, generally a first or pioneering role (Table 3).

    Individual Was she a first? Was she a participant in a male-dominated historical event? Was she a founder?
    Delano, Jane Arminda leading pioneer World War I founder of the American Red Cross Nursing Service
    Fedde, Sister Elizabeth* established the Norwegian Relief Society
    Maxwell, Anna Caroline pioneering activities Spanish-American War
    Nutting, Mary Adelaide world’s first professor of nursing World War I founded the American Society of Superintendents of Training Schools for Nurses
    Richards, Linda first professionally trained American nurse, pioneering modern nursing in the United States No Richards pioneered the founding and superintending of nursing training schools across the nation.
    Robb, Isabel Adams Hampton early leader (held many “first” positions) No helped to found …the National League for Nursing, the International Council of Nurses, and the American Nurses Association.
    Stimson, Julia Catherine first woman to attain the rank of Major World War I
    Wald, Lillian D. coined the term “public health nurse” & the founder of American community nursing No founded Henry Street Settlement
    Mahoney, Mary Eliza first African American to study and work as a professionally trained nurse in the US No co-founded the National Association of Colored Graduate Nurses
    Thoms, Adah B. Samuels World War I co-founded the National Association of Colored Graduate Nurses

    * Fedde appears in Wikipedia primarily as a Norwegian Lutheran deaconess. The word “nurse” does not appear on her page.

    Table 3. Nurses from Notable American Women by inclusion in Wikipedia

    As the entries for nurses reveal, in addition to being first, a combination of several additional factors work in a female subject’s favor in achieving success in Wikipedia.  Nurses who founded an institution or organization or participated in a male-dominated event already recognized as historically significant, such as war, were more successful than those who did not.

    If distinguishing oneself, by being “first” or founding something, as part of a male-dominated event facilitates higher levels of inclusion in Wikipedia for women in female dominated fields, do these factors also explain how women from classifications that are not female-dominated succeed? Looking at labor leaders, it appears these factors can offer only a partial explanation (Table 4).

    Individual Was she a first? Was she a participant in a male-dominated historical event? Was she a founder? Description from Wikipedia
    Bagley, Sarah G. “probably the first”  No formed the Lowell Female Labor Reform Association headed up female department of newspaper until fired because “a female department. … would conflict with the opinions of the mushroom aristocracy … and beside it would not be dignified”
    Barry, Leonora Marie Kearney “only woman” “first woman” KNIGHTS OF LABOR “difficulties faced by a woman attempting to organize men in a male-dominated society. Employers also refused to allow her to investigate their factories.”
    Bellanca, Dorothy Jacobs “first full-time female organizer” No organized the Baltimore buttonhole makers into Local 170 of the United Garment Workers of America; one of four women who attended the founding convention of the Amalgamated Clothing Workers of America; “men resented” her
    Haley, Margaret Angela “pioneer leader”  No  No dubbed the “lady labor slugger”
    Jones, Mary Harris  No KNIGHTS OF LABOR IWW “most dangerous woman in America”
    Nestor, Agnes  No WOMEN’S TRADE UNION LEAGUE founded  International Glove Workers Union
    O’Reilly, Leonora  No WOMEN’S TRADE UNION LEAGUE founded the Wage Earners Suffrage League “O’Reilly as a public speaker was thought to be out of place for women at this time in New York’s history.”
    O’Sullivan, Mary Kenney the first woman AFL employed WOMEN’S TRADE UNION LEAGUE founder of the Women’s Trade Union League
    Stevens, Alzina Parsons first probation officer KNIGHTS OF LABOR

    Table 4. Labor leaders from Notable American Women by inclusion in Wikipedia

    In addition to being a “first” or founding something, two other variables emerge from the analysis of labor leaders that predict success in Wikipedia.  One is quite heartening: affiliation with the Women’s Trade Union League (WTUL), a significant female-dominated historical organization, seems to translate into greater recognition as historically notable.  Less optimistically, it also appears that what Lerner labeled as “notorious” behavior predicts success: six of the nine women were included for a wide range of reasons, from speaking out publicly to advocating resistance.

    The conclusions here can be spun two ways. If we want to get women into Wikipedia, to surmount the obstacle of notability, we should write about women who fit well within the great man school of history. This could be reinforced within the architecture of Wikipedia by creating links within a woman’s entry to men and significant historical events, while also making sure that the entry emphasizes a woman’s “firsts” and her institutional ties. Following these practices will make an entry more likely to overcome challenges and provide a defense against proposed deletion.  On the other hand, these are narrow criteria for meeting notability that will likely not encompass a wide range of female figures from the past.

    The larger question remains: should we bother to work in Wikipedia at all? (Raval 2014). Wikipedia’s content is biased not only by gender, but also by race and region (“Racial Bias on Wikipedia” 2017). A concrete example of this intersectional bias can be seen in the fact that “only nine of Haiti’s 37 first ladies have Wikipedia articles, whereas all 45 first ladies of the United States have entries” (Frisella 2017). Critics have also pointed to the devaluation of Indigenous forms of knowledge within Wikipedia (Senier 2014; Gallart and van der Velden 2015).

    Wikipedia, billed as “the encyclopedia anyone can edit” and purporting to offer “the sum of all human knowledge,” is notorious for achieving neither goal. Wikipedia’s content suffers from systemic bias related to the unbalanced demographics of its contributor base (Wikipedia, 2004, 2009c). I have highlighted here disparities in gendered content, which parallel the well-documented gender biases against female contributors (“Wikipedia:WikiProject Countering Systemic Bias” 2017). The average editor of Wikipedia is white, from Western Europe or the United States, between 30 and 40 years old, and overwhelmingly male. Furthermore, “super users” contribute most of Wikipedia’s content. A 2014 analysis revealed that “the top 5,000 article creators on English Wikipedia have created 60% of all articles on the project. The top 1,000 article creators account for 42% of all Wikipedia articles alone.” A study of a small sample of these super users revealed that they are not writing about women: “The amount of these super page creators only exacerbates the [gender] problem, as it means that the users who are mass-creating pages are probably not doing neglected topics, and this tilts our coverage disproportionately towards male-oriented topics” (Hale 2014). For example, the “List of Pornographic Actresses” on Wikipedia is lengthier and more actively edited than the “List of Female Poets” (Kleeman 2015).

    The hostility within Wikipedia against female contributors remains a significant barrier to altering its content since the major mechanism for rectifying the lack of entries about women is to encourage women to contribute them (New York Times 2011; Peake 2015; Paling 2015).   Despite years of concerted efforts to make Wikipedia more hospitable toward women, to organize editathons, and place Wikipedians in residencies specifically designed to add women to the online encyclopedia, the results have been disappointing (MacAulay and Visser 2016; Khan 2016). Authors of a recent study of  “Wikipedia’s infrastructure and the gender gap” point to “foundational epistemologies that exclude women, in addition to other groups of knowers whose knowledge does not accord with the standards and models established through this infrastructure” which includes “hidden layers of gendering at the levels of code, policy and logics” (Wajcman and Ford 2017).

    Among these policies is the way notability is implemented to determine whether content is worthy of inclusion. The issues I raise here are not new; Adrianne Wadewitz, an early and influential feminist Wikipedian, noted in 2013 that “A lack of diversity amongst editors means that, for example, topics typically associated with femininity are underrepresented and often actively deleted” (Wadewitz 2013). Wadewitz pointed to efforts to delete articles about Kate Middleton’s wedding gown, as well as the speedy nomination for deletion of an entry for reproductive rights activist Sandra Fluke. Both pages survived, Wadewitz emphasized, reflecting the way in which Wikipedia guidelines develop through practice, despite their ostensible stability.

    This is important to remember – Wikipedia’s policies, like everything on the site, evolves and changes as the community changes. … There is nothing more essential than seeing that these policies on Wikipedia are evolving and that if we as feminists and academics want them to evolve in ways we feel reflect the progressive politics important to us, we must participate in the conversation. Wikipedia is a community and we have to join it. (Wadewitz 2013)

    While I have offered some pragmatic suggestions here about how to surmount the notability criteria in Wikipedia, I want to close by echoing Wadewitz’s sentiment that the greater challenge must be to question how notability is implemented in Wikipedia praxis.

    _____

    Michelle Moravec is an associate professor of history at Rosemont College.

    Back to the essay

    _____

    Notes

    [1] Seven of the eleven categories in my study with fewer than ten individuals have no missing individuals.

    _____

    Works Cited

  • Joseph Erb, Joanna Hearne, and Mark Palmer with Durbin Feeling — Origin Stories in the Genealogy of Cherokee Language Technology


    Joseph Erb, Joanna Hearne, and Mark Palmer with Durbin Feeling [*]

    The intersection of digital studies and Indigenous studies encompasses both the history of Indigenous representation on various screens, and the broader rhetorics of Indigeneity, Indigenous practices, and Indigenous activism in relation to digital technologies in general. Yet the surge of critical work in digital technology and new media studies has rarely acknowledged the centrality of Indigeneity to our understanding of systems such as mobile technologies, major programs such as Geographic Information Systems (GIS), digital aesthetic forms such as animation, or structural and infrastructural elements of hardware, circuitry, and code. This essay on digital Indigenous studies reflects on the social, historical, and cultural mediations involved in Indigenous production and uses of digital media by exploring moments in the integration of the Cherokee syllabary onto digital platforms. We focus on negotiations between the Cherokee Nation’s goal to extend their language and writing system, on the one hand, and the systems of standardization upon which digital technologies depend, such as Unicode, on the other.  The Cherokee syllabary is currently one of the most widely available North American Indigenous language writing systems on digital devices. As the language has become increasingly endangered, the Cherokee Nation’s revitalization efforts have expanded to include the embedding of the Cherokee syllabary in the Windows Operating System, Google search engine, Gmail, Wikipedia, Android, iPhone and Facebook.

    Figure 1. Wikipedia in Cherokee

    With the successful integration of the syllabary onto multiple platforms, the digital practices of Cherokees suggest the advantages and limitations of digital technology for Indigenous cultural and political survivance (Vizenor 2000).

    Our collaboration has resulted in a multi-voiced analysis across several essay sections. Hearne describes the ways that engaging with specific problems and solutions around “glitches” at the intersection of Indigenous and technological protocols opens up issues in the larger digital turn in Indigenous studies. Joseph Erb (Cherokee) narrates critical moments in the adoption of the Cherokee syllabary onto digital devices, drawn from his experience leading this effort at the Cherokee Nation language technology department. Connecting our conceptual work with community history, we include excerpts from an interview with Cherokee linguist Durbin Feeling—author of the Cherokee-English Dictionary and Erb’s close collaborator—about the history, challenges, and possibilities of Cherokee language technology use and experience. In the final section, Mark Palmer (Kiowa) presents an “indigital” framework to describe a range of possibilities in the amalgamations of Indigenous and technological knowledge systems (2009, 2012). Fragmentary, contradictory, and full of uncertainties, indigital constructs are hybrid and fundamentally reciprocal in orientation, both ubiquitous and at the same time very distant from the reality of Indigenous groups encountering the digital divide.

    Native to the Device

    Indigenous people have always been engaged with technological change. Indigenous metaphors for digital and networked space—such as the web, the rhizome, and the river—describe longstanding practices of mnemonic retrieval and communicative innovation using sign systems and nonlinear design (Hearne 2017). Jason Lewis describes the “networked territory” and “shared space” of digital media as something that has “always existed for Aboriginal people as the repository of our collected and shared memory. That hardware technology has made it accessible through a tactile regime in no way diminishes its power as a spiritual, cosmological, and mythical ‘realm’” (175). Cherokee scholar (and former programmer) Brian Hudson includes Sequoyah in a genealogy of Indigenous futurism as a representative of “Cherokee cyberpunk.” While retaining these scholars’ understanding of the technological sophistication and adaptability of Indigenous peoples historically and in the present, taking up a heuristic that recognizes the problems and disjunction between Indigenous knowledge and digital development also enables us to understand the challenges faced by communities encountering unequal access to computational infrastructures such as broadband, hardware, and software design. Tracing encounters between the medium specificity of digital devices and the specificity of Indigenous epistemologies returns us to the incommensurate purposes of the digital as both a tool for Indigenous revitalization and as a sociopolitical framework that makes users do things according to a generic pattern.

    The case of the localization of Cherokee on digital devices offers insights into the paradox around the idea of the “digital turn” explored in this b2o: An Online Journal special issue—that on the one hand, the digital turn “suggests that the objects of our world are becoming better versions of themselves. On the other hand, it suggests that these objects are being transformed so completely that they are no longer the things they were to begin with.” While the former assertion is reflected in the techno-positive orientation of much news coverage of the Cherokee adoption on the iPhone (Evans 2011) as well as other Indigenous initiatives such as video game production (Lewis 2014), the latter description of transformation beyond recognizable identity resembles the goals of various historical programs of assimilation, one of the primary “logics of elimination” that Patrick Wolfe identifies in his seminal essay on settler colonialism.

    The material, representational, and participatory elements of digital studies have particular resonance in Indigenous studies around issues of land, language, political sovereignty, and cultural practice. In some cases the digital realm hosts or amplifies the imperial imaginaries pre-existing in the mediascape, as Jodi Byrd demonstrates in her analyses of colonial narratives—narratives of frontier violence in particular—normalized and embedded in the forms and forums of video games (2015). Indigeneity is also central to the materialities of global digitality in the production and dispensation of the machines themselves. Internationally, Indigenous lands are mined for minerals to make hardware and targeted as sites for dumping used electronics. Domestically in the United States, Indigenous communities have provided the labor to produce delicate circuitry (Nakamura 2014), even as rural, remote Indigenous communities and reservations have been sites of scarcity for digital infrastructure access (Ginsburg 2008). Indigenous communities such as those in the Cherokee Nation are rightly on guard against further colonial incursions, including those that come with digital environments. Communities have concerns about language localization projects: how are we going to use this for our own benefit? If it’s not for our benefit, then why not compute in the colonial language? Are they going to steal our medicine? Is this a further erosion of what we have left?

    Lisa Nakamura (2013) has taken up the concept of the glitch as a way of understanding online racism, first as it is understood by some critics as a form of communicative failure or “glitch racism,” and second as the opposite, “not as a glitch but as part of the signal,” an “effect of internet on a technical level” that comprises “a discursive act in itself, not an obstruction to that act.”  In this article we offer another way of understanding the glitch as a window onto the obstacles, refusals, and accommodations that take place at an infrastructural level in Indigenous negotiations of the digital. Olga Goriunova and Alexei Shulgin define “glitch” as “an unpredictable change in the system’s behavior, when something obviously goes wrong” (2008, 110).

    A glitch is a singular dysfunctional event that allows insight beyond the customary, omnipresent, and alien computer aesthetics. A glitch is a mess that is a moment, a possibility to glance at software’s inner structure, whether it is a mechanism of data compression or HTML code. Although a glitch does not reveal the true functionality of the computer, it shows the ghostly conventionality of the forms by which digital spaces are organized. (114)

    Attending to the challenges that arise in Indigenous-settler negotiations of structural obstacles—the work-arounds, problem-solving, false starts, failures of adoption—reveals both the adaptations summoned forth by the standardization built into digital platforms and the ways that Indigenous digital activists have intervened in digital homogeneity. By making visible the glitches—ruptures and mediations of rupture—in the granular work of localizing Cherokee, we arrive again and again at the cultural and political crossroads where Indigenous boundaries become visible within infrastructures of settler protocol (Ginsburg 1991). What has to be done, what has to be addressed, before Cherokee speakers can use digital devices in their own language and their own writing system, and what do those obstacles reveal about the larger orientation of digital environments? In particular, new digital platforms channel adaptations towards the bureaucratization of language, dictating the direction of language change through conventions like abbreviations, sorting requirements, parental controls and autocorrect features.

    Within the framework of computational standardization, Indigenous distinctiveness—Indigenous sovereignty itself—becomes a glitch. We can see instantiations of such glitches arising from moments of politicized refusal, as defined by Mohawk scholar Audra Simpson’s insight that “a good is not a good for everyone” (1). Yet we can also see moments when Indigenous refusals “to stop being themselves” (2) lead to strategies of negotiation and adoption, and even, paradoxically, to a politics of accommodation (itself a form of agency) in the uptake of digital technologies. Michelle Raheja takes up the intellectual and aesthetic iterations of sovereignty to theorize Indigenous media production in terms of “visual sovereignty,” which she defines as “the space between resistance and compliance” within which Indigenous media-makers “revisit, contribute to, borrow from, critique, and reconfigure” film conventions, while still “operating within and stretching the boundaries of those same conventions” (1161). We suggest that like Indigenous self-representation on screen, Indigenous computational production occupies a “space between resistance and compliance,” a space which is both sovereigntist and, in its lived reality at the intersection of software standardization and Indigenous language precarity, glitchy.

    Our methodology, in the case study of Cherokee language technology development that follows, might be called “glitch retrieval.”  We focus on pulse points, moments, stories and small landmarks of adaptation, accommodation, and refusal in the adoption of Sequoyah’s Cherokee syllabary to mobile digital devices. In the face of the wave of publicity around digital apps (“there’s an app for that!”), the story of the Cherokee adoption is not one of appendage in the form of downloadable apps but rather the localization of the language as “native to the device.” Far from being a straightforward development, the process moved in fits and starts, beset with setbacks and surprises, delineating unique minority and endangered Indigenous language practices within majoritarian protocols. To return to Goriunova and Shulgin’s definition, we explore each glitch as an instance of “a mess” that is also “a moment, a possibility,” one that “allows insight” (2008). Each of the brief moments narrated below retrieves an intersection of problem and solution that reveals Indigenous presence as well as “the ghostly conventionality of the forms by which digital spaces are organized” (114). Retrieving the origin stories of Cherokee language technology—the stories of the glitches—gives us new ways to see both the limits of digital technology as it has been imagined and built within structures of settler colonialism, and the action and shape of Indigenous persistence through digital practices.

    Cherokee Language Technology and Mobile Devices

    Each generation is crucial to the survival of Indigenous languages. Adaptation, and especially adaptation to new technologies, is an important factor in Indigenous language persistence (Hermes et al 2016). The Cherokee, one of the largest of the Southeast tribes, were early adopters of language technologies, beginning with the syllabary writing system developed by Sequoyah between 1809 and 1820 and presented to the Cherokee Council in 1821. The circumstances of the development of the Cherokee syllabary are nearly unique in that 1) the writing system originated in the work of one man, in the space of a single decade; and 2) it was initiated and ultimately widely adopted from within the Indigenous community itself rather than being developed and introduced by non-Native missionaries, linguists, or other outsiders.

    Unlike alphabetic writing based on individual phonemes, a syllabary consists of written symbols indicating whole syllables, which can be more easily developed and learned than alphabetic systems due to the stability of each syllable sound. The Cherokee syllabary uses written characters that represent combined consonant and vowel sounds, such as Ꮉ for the sound “ma” and Ꮀ for the sound “ho.” The original writing of Sequoyah was done with a quill and pen, an inking process that involved cursive characters, but this handwritten orthography gave way to a block print character set for the Cherokee printing press (Cushman 2011). The Cherokee Phoenix was the first Native American newspaper in the Americas, published in Cherokee and English beginning in 1828. Since then, Cherokee people have adapted their language and writing system early and often to new technologies, from typewriters to dot matrix printers. This historical adaptation includes a millennial transformation from technologies that required training to access machines like specially-designed typewriters with Cherokee characters, to the embedding of the syllabary as a standard feature on all platforms for commercially available computers and mobile devices. Very few Indigenous languages have this level of computational integration—in part because very few Indigenous languages have their own writing systems—and the historical moments we present here in the technologization of the Cherokee language illustrate both problems and possibilities of language diversity in standardization-dependent platforms. In the following section, we offer a community-based history of Cherokee language technology in stories of the transmission of knowledge between two generations—Cherokee linguist Durbin Feeling, who began teaching and adapting the language in the 1960s, and Joseph Erb, who worked on digital language projects starting in the early 2000s—focusing on shifts in the uptake of language technology.

    In the early and mid-twentieth century, churches in the Cherokee Nation were among the sites for teaching and learning Cherokee literacy. Durbin Feeling grew up speaking Cherokee at home, and learned to read the language as a boy by following along as his father read from the Cherokee New Testament. He became fluent in writing the language while serving in the US military in Vietnam, when he would read the Book of Psalms in Cherokee. His curiosity about the language grew as he continued to notice the differences between the written Cherokee usage of the 1800s—codified in texts like the New Testament—and the Cherokee spoken by his community in the 1960s. Beginning with the bilingual program at Northeastern State University (translating syllabic writing into phonetic writing), Feeling worked on Cherokee language lessons and a Cherokee dictionary, for which he translated words from a Webster’s dictionary onto handwritten index cards and recorded them on tape. Feeling recalls that in the early 1970s,

    Back then they had reel to reel recorders and so I asked for one of those and talked to anybody and everybody and mixed groups, men and women, men with men, women with women. Wherever there were Cherokees, I would just walk up and say do you mind if I just kind of record while you were talking, and they didn’t have a problem with that. I filled up those reel to reel tapes, five of them….I would run it back and forth every word, and run it forward and back again as many times as I had to, and then I would hand write it on a bigger card.

    So I filled, I think, maybe about five of those in a shoe box and so all I did was take the word, recorded it, take the next word, recorded it, and then through the whole thing…

    There was times the churches used to gather and cook some hog meat, you know. It would attract the people and they would just stand around and joke and talk Cherokee. Women would meet and sew quilts and they’d have some conversations going, some real funny ones. Just like that, you know? Whoever I could talk with. So when I got done with that I went back through and noticed the different kinds of sounds…the sing song kind of words we had when we pronounced something (Erb and Feeling 2016).

    The project began with handwriting in syllabary, but the dictionary used phonetics with tonal markers, so Feeling went through each of five boxes of index cards again, labeling them with numbers to indicate the height of sounds and pitches.

    Feeling and his team experimented with various machines, including manual typewriters with syllabary keys (manufactured by the well-known Hermes typewriter company), new fonts using a dot matrix printer, and electric typewriters with the Cherokee syllabary on the type ball—the typist had to memorize the locations of all 85 characters. Early attempts to build computer programs allowing users to type in Cherokee resulted in documents that were confined to one computer and could not be easily shared except as printed documents.

    Figure 2. Typewriter keyboard in Cherokee (image source: authors)

    Beginning around 1990, a number of linguists and programmers with interests in Indigenous languages began working with the Cherokee, including Al Webster, who used Mac computers to create a program that, as Feeling described it, “introduced what you could do with fonts with a fontographer—he’s the one who made those fonts that were just like the old print, you know way back in the eighteen hundreds.” Then in the mid-1990s Michael Everson began working with Feeling and others to integrate Cherokee glyphs into Unicode, the primary system for software internationalization. Arising from discussions between engineers at Apple and Xerox, Unicode began in late 1987 as a project to standardize languages for computation. Although the original goal of Unicode was to encode all world writing systems, major languages came first. Michael Everson’s company Evertype has been critical to broader language inclusion, encoding minority and Indigenous languages such as Cherokee, which was added to the Unicode Standard in 1999 with the release of version 3.0.
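    The encoding milestone can be made concrete with a few lines of standard-library Python. The Cherokee syllabary block assigned in Unicode 3.0 spans U+13A0 through U+13F4, and each syllable is a single named code point; the word below, ᏣᎳᎩ (tsa-la-gi, “Cherokee”), is used purely for illustration.

    ```python
    import unicodedata

    # ᏣᎳᎩ (tsa-la-gi, "Cherokee"), written in the syllabary block
    # assigned in Unicode 3.0 (1999): U+13A0 through U+13F4.
    word = "ᏣᎳᎩ"

    for ch in word:
        # each syllable is a single named code point in the standard
        print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

    # every character falls inside the core Cherokee block
    assert all(0x13A0 <= ord(ch) <= 0x13F4 for ch in word)
    ```

    Running this prints the code point and official character name (e.g. CHEROKEE LETTER TSA) for each syllable, the same identifiers that fonts and keyboards must target once a language is in the standard.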

    Having begun language work with handwritten index cards in the 1960s, and later with typewriters available to only one or two people with specialized skills, Feeling saw Cherokee adopted into Unicode in 1999 and integrated into Apple computer operating systems in 2003. When Apple and the Cherokee Nation publicized the new localization of Cherokee on the 4.1 iPhone in December 2010, the story was picked up internationally, as well as locally among Cherokee communities. By 2013, users could text, email, and search Google in the syllabary on smartphones and laptops, devices that came with the language already embedded as a standardized feature and that were available at chain stores like Walmart. This development involved different efforts at multiple locations, sometimes simultaneously, and over time. While Apple added Unicode-compliant Cherokee glyphs to the Macintosh in 2003, the Cherokee Nation, as a government entity, used PC computers rather than Macs. PCs had yet to implement Unicode-compliant Cherokee fonts, so there was little access to the writing system on the Nation’s computers and no known community adoption. At the time, the Cherokee Nation was already using an adapted English font that displayed Cherokee characters but was not Unicode compliant.

    One of the first attempts to introduce a Unicode-compliant Cherokee font and keyboard came with the Indigenous Language Institute conference at Northeastern State University in Oklahoma in 2006, where the Institute made the font available on flash drives and provided training to language technologists at the Cherokee Nation. However, the program was not widely adopted, due in part to anticipated wait times in getting the software installed on Cherokee Nation computers. Further, the majority of users did not understand the difference between the new Unicode-compliant fonts and the non-Unicode fonts they were already using. The non-Unicode Cherokee font and keyboard used the same keystrokes, and looked the same on screen, as the Unicode-compliant system, but certain keys (especially those for punctuation) produced glyphs that would not transfer between computers, so files could not be sent and re-opened on another computer without extensive corrections. The value of Unicode compliance lies in the interoperability to move between systems, the crucial first step toward integration with mobile devices, which are more useful in remote communities than desktop computers. Addition to Unicode is the first of five steps—followed by development of CLDR data, an open source font, a keyboard layout design, and a word frequency list—before companies can encode a new language into their platforms for computer operating systems. These five steps act as a space of exchange between Indigenous writing systems and digital platforms, within which differences are negotiated.

    CLDR

    The Common Locale Data Repository (CLDR) is a set of key terms for localization, including months, days, years, countries, and currencies, as well as their abbreviations. This core information is localized on the iPhone and becomes the base from which calendars and other native and external apps draw on the device. Many Indigenous languages, including Cherokee, don’t have bureaucratic language, such as abbreviations for days of the week, and need to create it; the Cherokee Nation’s Translation Department and Language Technology Department worked together to create new Cherokee abbreviations for calendrical terms.

    Figure 3. Weather in Cherokee (image source: authors)

    Open Source Font

    Small communities don’t have budgets to purchase fonts for their languages, and such fonts also aren’t financially viable for commercial companies to develop, so the challenge for minority language activists is to find sponsorship for the creation of an open source font that will work across systems, available for anyone to adopt into any computer or device system. Working with Feeling, Michael Everson developed the open source font for Cherokee. Plantagenet font (designed by Ross Mills) was the first to adopt Cherokee into Windows (Vista) and Mac (Panther).  If there is no font on a Unicode-compliant device—that is, the device does not have the language glyphs embedded—then users will see a string of boxes, the default filler for Unicode points that are not showing up in the system.

    Keyboard Layout

    New languages need an input method, and companies generally want the most widely used versions made available in open source. Cherokee has both a QWERTY keyboard, which is a phonetically-based Cherokee language keyboard, and a “Cherokee Nation” layout using the syllabary. Digital keyboards for mobile technologies are more complicated to create than physical keyboards and involve intricate collaboration between language specialists and developers. When developing the Cherokee digital keyboard for the iPhone, Apple worked in conjunction with the Translation Department and Language Technology Department at the Cherokee Nation, experimenting with several versions to accommodate the 85 Cherokee characters in the syllabary without creating too many alternate keyboards (the Cherokee Nation’s original involved 13 keyboards, whereas English has 3). Apple ultimately adapted a keyboard that involved two different ways of typing on the same keyboard, combining pop-up keys and an autocomplete system.
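    As a rough illustration of that design—our own sketch, not Apple’s actual layout data—the combined input method can be thought of as two lookup tables: pop-up alternates grouped under a base key, and an autocomplete table resolving romanized syllables to glyphs. All table contents below are illustrative assumptions.

    ```python
    # Hypothetical sketch of two input modes on one digital keyboard.

    # Pop-up alternates: holding a base key reveals related characters
    # (here, the six vowel syllables a, e, i, o, u, v).
    popups = {"Ꭰ": ["Ꭰ", "Ꭱ", "Ꭲ", "Ꭳ", "Ꭴ", "Ꭵ"]}

    # Autocomplete: romanized input resolves to syllabary glyphs.
    romanized = {"tsa": "Ꮳ", "la": "Ꮃ", "gi": "Ꭹ"}

    def type_word(syllables):
        """Compose a word from romanized syllables via the autocomplete table."""
        return "".join(romanized[s] for s in syllables)

    print(type_word(["tsa", "la", "gi"]))  # ᏣᎳᎩ (tsa-la-gi)
    ```

    Grouping 85 characters into pop-ups and autocomplete is what lets one on-screen keyboard replace the thirteen separate layouts of the original proposal.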

    Figure 4. Mobile device keyboard in Cherokee (image source: authors)

    Word Frequency List

    The word frequency list is a standard requirement for most operating systems to support autocorrect spelling and other tasks on digital devices. Programmers need a word database, in Unicode, large enough to adequately source programs such as autocomplete. In order to generate the many thousands of words needed to seed the database, the Cherokee Nation had to provide Cherokee documents typed in the Unicode version of the language. But as with other languages, there were many older attempts to embed Cherokee in typewriters and computers that predate Unicode, leading to a kind of catch-22: the Cherokee Nation needed to already have documents produced in Unicode in order to get the language into computer operating systems and adopted for mobile technologies, but it didn’t have many documents in Unicode because the language hadn’t yet been integrated into those Unicode-compliant systems. In the end the Cherokee Nation employed Cherokee speakers to create new documents in Unicode—re-typing the Cherokee Bible and other texts—to create enough words for a database. Their efforts were complicated by the existence of multiple versions of the language and spelling, and by previous iterations of language technology and infrastructure.
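    The seeding step can be sketched in a few lines of standard-library Python. The two-line corpus here is a stand-in for the re-typed texts, using the words ᏣᎳᎩ (tsa-la-gi) and ᎣᏏᏲ (o-si-yo, a greeting); the filtering-by-block check is our assumption about one simple way to isolate syllabary tokens.

    ```python
    from collections import Counter

    # Illustrative stand-in for the re-typed Unicode corpus.
    corpus = [
        "ᏣᎳᎩ ᎣᏏᏲ ᏣᎳᎩ",
        "ᎣᏏᏲ ᏣᎳᎩ",
    ]

    def cherokee_words(lines):
        """Yield whitespace-separated tokens written entirely in the Cherokee block."""
        for line in lines:
            for token in line.split():
                if all(0x13A0 <= ord(c) <= 0x13F4 for c in token):
                    yield token

    # A frequency-ranked word list is the shape autocomplete engines expect.
    freq = Counter(cherokee_words(corpus))
    print(freq.most_common())  # ᏣᎳᎩ appears 3 times, ᎣᏏᏲ twice
    ```

    Scaled up to thousands of words, a counter like this is roughly what the re-typing effort had to produce before mobile autocomplete could work in the syllabary.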

    Translation

    Many of the English language words and phrases that are important to computational concepts, such as “security,” don’t have obvious equivalents in Cherokee (or as Feeling said, “we don’t have that”). How does one say “error message” in Cherokee? The CN Translation Department invented words—striving for both clarity and agreement—in order to address coding concepts for operating systems, error messages, and other phrases (which are often confusing even in English) as well as more general language such as the abbreviations discussed above. Feeling and Erb worked together with elders, CN staff, and professional Cherokee translators to invent descriptive Cherokee words for new concepts and technologies, such as ᎤᎦᏎᏍᏗ (u-ga-ha-s-di) or “to watch over something” for security; ᎦᎵᏓᏍᏔᏅ ᏓᎦᏃᏣᎳᎬᎯ (ga-li-da-s-ta-nv da-ga-no-tsa-la-gv-hi) or “something is wrong” for error message; ᎠᎾᎦᎵᏍᎩ ᎪᏪᎵ (a-na-ga-li-s-gi go-we-li) or “lightning paper” for email; and ᎠᎦᏙᎥᎯᏍᏗ ᎠᏍᏆᏂᎪᏗᏍᎩ (a-ga-no-v-hi-s-di a-s-qua-ni-go-di-s-gi) or “knowledge keeper” for computers. For English words like “luck” (as in “I’m feeling lucky,” a concept which doesn’t exist in Cherokee), they created new idioms, such as “ᎡᎵᏊ ᎢᎬᏱᏊ ᎠᏆᏁᎵᏔᏅ ᏯᏂᎦᏛᎦ” (e-li-quu i-gv-yi-quu a-qua-ne-li-ta-na ya-ni-ga-dv-ga) or “I think I’ll find it on the first try.”

    Sorting

    When the Unicode-compliant Plantagenet Cherokee font was first introduced in Microsoft’s Windows Vista (2006), the company didn’t add Cherokee to the sorting function (the ability to sort files in numeric or alphabetic order) in its system. When Cherokee speakers named files in the language, they arrived at the limits of the language technology. These limits determine parameters in a user’s personal computing, the point at which naming files in Cherokee or keeping a computer calendar in Cherokee become forms of language activism that reveal the underlying dominance of English in the deeper infrastructure of computational systems. When a user sent a file with Cherokee characters, such as “ᏌᏊ” (sa-quu, or “one”) and “ᏔᎵ” (ta-li, or “two”), receiving computers could not put the file into one place or another because the core operating system had no sorting order for the Unicode points of Cherokee, and the computer would crash. Sorting orders for Cherokee were not added to Windows until Windows 8.
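    The gap can be sketched with standard-library Python (our illustration, not the Windows internals). Sorting by raw code point is always defined, and within the Cherokee block it happens to follow syllabary row order; an operating system that sorts through collation tables, by contrast, has no rule at all for these names until weights for U+13A0–U+13F4 are added.

    ```python
    # File names from the example above: ᏔᎵ (ta-li, "two"), ᏌᏊ (sa-quu, "one").
    names = ["ᏔᎵ", "ᏌᏊ"]

    # Python compares raw code points, so an order is always defined;
    # a collation-based OS sort instead needs explicit weights per code
    # point, which Windows shipped for Cherokee only in Windows 8.
    ordered = sorted(names)
    print(ordered)  # the sa- row precedes the da/ta row in the Unicode block

    for name in ordered:
        print(name, [f"U+{ord(c):04X}" for c in name])
    ```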

    Parental Controls

    Part of the protocol for operating systems involves standard protections like parental controls—the ability of a program to automatically censor inappropriate language. In order to integrate Cherokee into an OS, the company needed lists of offensive language or “curse words” that could be flagged in the parental restrictions settings for its operating system. Meeting the needs of these protocols was difficult linguistically and culturally, because Cherokee does not have the same cultural taboos as English around words for sexual acts or genitals; most Cherokee words are “clean words,” with offensive speech communicated through context rather than the words themselves. Also, because the Cherokee language involves tones, inappropriate meanings can arise from alternate tonal emphases (and tone is not reflected in the syllabary). Elder Cherokee speakers found it culturally difficult to speak aloud those elements of Cherokee speech that are offensive, while non-Cherokee-speaking computer company employees who had worked with other Indigenous languages did not always understand that not all Indigenous languages are alike—“curse words” in one language are not inappropriate in another. Finally, almost all of the potentially offensive Cherokee words that certain technology companies sought not only lacked the offensive connotations of their English translations but also carried dual or multiple meanings, so blocking them would also block common words that had no inappropriate meaning.

    Mapping and Place Names

    One of the difficulties for Cherokees working to create Cherokee language names for countries and territories was the Cherokee Nation’s own exclusion from the lists. Speakers translated the names of even tiny nations into Cherokee for lists and maps in which the Cherokee Nation itself did not appear. Discussions of terminologies for countries and territories were frustrating because the Cherokee themselves were not included, making the colonial erasure of Indigenous nationhood and territories visible to Cherokee speakers as they did the translations. Erb is currently working with Google Maps to revise their digital maps to show federally recognized tribal nations’ territories.

    Passwords and Security

    One of the first attempts to introduce Unicode-compliant Cherokee on computers for the Immersion School, ᏣᎳᎩ ᏧᎾᏕᎶᏆᏍᏗ (tsa-la-gi tsu-na-de-lo-qua-s-di), involved problems and glitches that temporarily set back the adoption of Unicode systems. The CN Language Technology Department added the Unicode-compliant font and keyboards to an Immersion School curriculum developer’s computer. However, at the time computers could only accept English passwords. After the curriculum developer had been typing in Cherokee and left their desk, their computer automatically logged off (auto-logoff is standard security for government computers). Temporarily locked out of their computer, they couldn’t switch their keyboard back to English to type the English password. Other teachers and translators heard about this “lockout” and most decided against having the new Unicode-compliant fonts on their computers. Glitches like these slowed the rollout of Unicode-compliant fonts and set back the adoption process in the short term.

    Community Adoption

    When computers began to enter Cherokee communities, Feeling recalls his own hesitation about social media sites like Facebook: “I was afraid to use that.” When in 2011 there was a contested election for Chief of the Nation, and social media provided faster updates than traditional media, many community members signed up for Facebook accounts so they could keep abreast of the latest news about the election.

    Figure 5. Facebook in Cherokee (image source: authors)

    Similarly, when Cherokee first became available on the iPhone 4.1, many Cherokee people were reluctant to use it. Feeling says he was “scared that it wouldn’t work, like people would get mad or something.” But older speakers wanted to communicate with family members in Cherokee, and they provided the pressure for others to begin using mobile devices in the language. Feeling’s older brother, also a fluent speaker, bought an iPhone just to text with his brother in Cherokee, because his Android phone wouldn’t properly display the language.

    In 2009, the Cherokee Nation introduced Macintosh computers in a 1:1 computer-to-student ratio for the second and third grades of the Cherokee Immersion school, and gave students air cards to get wireless internet service at home through cell towers (because internet was unavailable in many rural Cherokee homes). Up to this point the students spoke in Cherokee at school, but rarely generalized their Cherokee language outside of school or spoke it at home. With these tools, students could—and did—get on FaceTime and iChat from home and in other settings to talk with classmates in Cherokee. For some parents, it was the first time they had heard their children speaking Cherokee at home. This success convinced many in the community of the worth of Cherokee language technologies for digital devices.

    The ultimate community adoption of Cherokee in digital forms—computers, mobile devices, search engines and social media—came when the technologies were most applicable to community needs. What worked was not clunky modems for desktops but iPhones that could function in communities without internet infrastructure. The story of Cherokee adoption into digital devices illustrates the pull towards English-language structures of standardization for Indigenous and minority language speakers, who are faced with challenges of skill acquisition and adaptation; language development histories that involve versions of orthographies, spellings, neologisms and technologies; and problems of abstraction from community context that accompany codifying practices. Facing the precarity of an eroding language base and the limitations and possibilities of digital devices, the Cherokee and other Indigenous communities have strategically adapted hardware and software for cultural and political survivance. Durbin Feeling describes this adaptation as a Cherokee trait: “It’s the type of people that are curious or are willing to learn. Like we were in the old times, you know? I’m talking about way back, how the Cherokees adapted to the English way….I think it’s those kind of people that have continued in a good way to use and adapt to whatever comes along, be it the printing press, typewriters, computers, things like that. … Nobody can take your language away. You can give it away, yeah, or you can let it die, but nobody can take it away.”

    Indigital Frameworks

    Our case study reveals important processes in the integration of Cherokee knowledge systems with the information and communication technologies that have transformed notions of culture, society and space (Brey 2003). This kind of creative fusion is nothing new—Indigenous peoples have been encountering and exchanging with other peoples from around the world and adopting new materials, technologies, ideas, standards, and languages to meet their own everyday needs for millennia. The emerging concept indigital describes such encounters and collisions between the digital world and Indigenous knowledge systems, as highlighted in The Digital Arts and Humanities (Travis and von Lünen 2016). Indigital describes the hybrid blending or amalgamation of Indigenous knowledge systems including language, storytelling, calendar making, and song and dance, with technologies such as computers, Internet interfaces, video, maps, and GIS (Palmer 2009, 2012, 2013, 2016). Indigital constructs are forms of what Bruno Latour calls technoscience (1987), the merging of science, technology, and society—but while Indigenous peoples are often left out of global conversations regarding technoscience, the indigital framework attempts to bridge such conversations.

    Indigital constructs exist because knowledge systems like language are open, dynamic, and ever-changing; are hybrid as two or more systems mix, producing a third; require the sharing of power and space, which can lead to reciprocity; and are simultaneously everywhere and nowhere (Palmer 2012). Palmer associates indigital frameworks with Indigenous North Americans and the mapping of Indigenous lands by or for Indigenous peoples using maps and GIS (2009; 2012; 2016). GIS is digital mapping and database software used for collecting, manipulating, analyzing, and mapping various spatial phenomena. Indigenous languages, place-names, and sacred sites often converge with GIS, resulting in indigital geographic information networks. The indigital framework, however, can be applied to any encounter and exchange involving Indigenous peoples, technologies, and cultures.

    First, indigital constructs emerge locally, often when individuals or groups of individuals adopt and experiment with culture and technology within spaces of exchange, as happens in the moments of challenge and success in the integration of Cherokee writing systems into digital devices outlined in this essay. Within spaces of exchange, cultural systems like language and technology do not stand alone as dichotomous entities. Rather, they merge together, creating multiplicity, uncertainty, and hybridization. Skilled humans, typewriters, index cards, file cabinets, language orthographies, Christian Bibles, printers, funding sources, transnational corporations, flash drives, computers, and cell-phones all work to stabilize and mobilize the digitization of the Cherokee language. Second, indigital constructs have the potential to flow globally; Indigenous groups and communities tap into power networks constructed by global transnational corporations, like Apple, Google, or IBM. Apple and Google are experts at creating standardized computer designs while connecting with a multitude of users. During negotiations with Indigenous communities, digital technologies are transformative and can be transformed. Finally, indigital constructs introduce different ways that languages can be represented, understood, and used. Differences associated with indigital constructs include variations in language translations, multiple meanings of offensive language, and contested place-names. Members of Indigenous communities have different experiences and reasons for adopting or rejecting the use of indigital constructs in the form of select digital devices like personal computers and cell-phones.

    One hopeful aspect of this process is the fact that Indigenous knowledge systems and digital technologies are combinable. The idea of combinability is based on the convergent nature of digital technologies and the creative intention of the artist-scientist. In fact, electronic technologies enable new forms from such combinations, like Cherokee language keyboards, Kiowa story maps and GIS, or Maori language dictionaries. Digital recordings of community members or elders telling important stories that hold lessons for future generations are becoming more widely available, made using audio or visual devices or a combination of both formats. Digital prints of maps can be easily carried to roundtables for discussion about the environment (Palmer 2016), with audiovisual images edited on digital devices, uploaded or downloaded to other digital devices, and eventually connected to websites. The mapping of place-names, creation of Indigenous language keyboards, and integration of stories into GIS require standardization, yet those standards are often defined by technocrats far removed from Indigenous communities, with a lack of input from community members and elders. Whatever the intention of the elders telling the story or the digital artist creating the construction, this is an opportunity for the knowledge system and its accompanying information to be shared.

    Ultimately, how do local negotiations on technological projects influence final designs and representations? Indigital constructions (and spaces) are hybrid and require mixing at least two things to create a new third construct or third space (Bhabha 2006). The creation of a new Cherokee bureaucratic language to meet the iPhone’s CLDR requirements for representing calendar elements, through negotiations between Cherokee language specialists and computer language specialists, resulted in hybrid space-times: a hybrid calendar shared as a form of Cherokee-constructed technoscience. The same process applied to the development of the specialized and now standardized Cherokee fonts and keyboards for the iPhone. A question for future research might be how much Unicode standardization transforms the Cherokee language in terms of meaning and understanding. What elements of Cherokee are altered, and how are the new constructs interpreted by community members? How might Cherokee fonts and keyboards contribute to the sustainability of Indigenous culture and put language into practice?

    Survival of indigital constructs requires reciprocity between systems. Indigital constructions are not set up as one-way flows of knowledge and information. Rather, indigital constructions are spaces for negotiation, featuring the ideas and thoughts of the participants. Reciprocity in this sense means cross-cultural exchange on equal footing, as having too much power will consume any kind of rights-based approach to building bridges among all participants. One-way flows of knowledge are revealed when Cherokee or other Indigenous informants providing place-names to Apple, Microsoft, or Google realize that their own geographies are not represented. They are erased from the maps. Indigenous geographies are often trivialized as being local, vernacular, and particular to a culture which goes against the grain of technoscience standardization and universalization. The trick of indigital reciprocity is shared power, networking (Latour 2005), assemblages (Deleuze and Guattari 1988), decentralization, trust, and collective responsibility. If all these relations are in place, rights-based approaches to community problems have a chance of success.

    Indigital constructions are everywhere—Cherokee iPhone language applications or Kiowa stories in GIS are just a few examples, and many more occur in film, video, and other digital media types not discussed in this article. Yet, ironically, indigital constructions are also very distant from the reality of many Indigenous people on a global scale. Indigital constructions are primarily composed in the developed world, especially what is referred to as the global north. There is still a deep digital divide among Indigenous peoples and many Indigenous communities do not have access to digital technologies. How culturally appropriate are digital technologies like video, audio recordings, or digital maps? The indigital is distant in terms of addressing social problems within Indigenous communities. Oftentimes, there is a fear of the unknown in communities like the one described by Durbin Feeling in reference to adoption of social media applications like Facebook. Some Indigenous communities consider carefully the implications of adopting social media or language applications created for community interactions. Adoption may be slow, or not meet the expectations of software developers. Many questions arise in this process. Do creativity and social application go hand in hand? Sometimes we struggle to understand how our work can be applied to everyday problems. What is the potential of indigital constructions being used for rights-based initiatives?

    Conclusion

    English-speakers don’t often pause to consider how their language comes to be typed, displayed, and shared on digital devices. For Indigenous communities, the dominance of majoritarian languages on digital devices has contributed to the erosion of their language. While the isolation of many Indigenous communities in the past helped to protect their languages, that same isolation has required incredible efforts for minority language speakers to assert their presence in the infrastructures of technological systems. The excitement over the turn to digital media in Indian country is an easy story to tell to a techno-positive public, but in fact this turn involves a series of paradoxes: we take materials out of Indigenous lands to make our devices, and then we use them to talk about it; we assert sovereignty within the codification of standardized practices; we engage new technologies to sustain Indigenous cultural practices even as technological systems demand cultural transformation. Such paradoxes get to the heart of deeper questions about culturally-embedded technologies, as the modes and means of our communication shift to the screen. To what extent does digital media re-make the Indigenous world, or can it function just as a tool? Digital media are functionally inescapable and have come to constitute elements of our self-understanding; how might such media change the way Indigenous participants understand the world, even as they note their own absences from the screen? The insights from the technologization of Cherokee writing engage us with these questions along with closer insights into multiple forms of Indigenous information and communications technology and the emergence of indigital creations, inventing the next generation of language technology.

    _____

    Joseph Lewis Erb is a computer animator, film producer, educator, language technologist and artist enrolled in the Cherokee Nation. He earned his MFA from the University of Pennsylvania, where he created the first Cherokee animation in the Cherokee language, “The Beginning They Told.” He has used his artistic skills to teach Muscogee Creek and Cherokee students how to animate traditional stories. Most of this work is created in the Cherokee language, and he has spent many years working on projects that will expand the use of the Cherokee language in technology and the arts. Erb is an assistant professor at the University of Missouri, teaching digital storytelling and animation.

    Joanna Hearne is associate professor in the English Department at the University of Missouri, where she teaches film studies and digital storytelling. She has published a number of articles on Indigenous film and digital media, animation, early cinema, westerns, and documentary, and she edited the 2017 special issue of Studies in American Indian Literatures on “Digital Indigenous Studies: Gender, Genre and New Media.” Her two books are Native Recognition: Indigenous Cinema and the Western (SUNY Press, 2012) and Smoke Signals: Native Cinema Rising (University of Nebraska Press, 2012).

    Mark H. Palmer is associate professor in the Department of Geography at the University of Missouri who has published research on institutional GIS and the mapping of Indigenous territories. Palmer is a member of the Kiowa Tribe of Oklahoma.


    _____

    Acknowledgements

    [*] The authors would like to thank Durbin Feeling for sharing his expertise and insights with us, and the University of Missouri Peace Studies Program for funding interviews and transcriptions as part of the “Digital Indigenous Studies” project.

    _____

    Works Cited

    • Bhabha, Homi K. and J. Rutherford. 2006. “Third Space.” Multitudes 3. 95-107.
    • Brey, P. 2003. “Theorizing Modernity and Technology.” In Modernity and Technology, edited by T.J. Misa, P. Brey, and A. Feenberg, 33-71. Cambridge: MIT Press.
    • Byrd, Jodi A. 2015. “’Do They Not Have Rational Souls?’: Consolidation and Sovereignty in Digital New Worlds.” Settler Colonial Studies: 1-15.
    • Cushman, Ellen. 2011. The Cherokee Syllabary: Writing the People’s Perseverance. Norman: University of Oklahoma Press.
    • Deleuze, Gilles, and Félix Guattari. 1988. A Thousand Plateaus: Capitalism and Schizophrenia. New York: Bloomsbury Publishing.
    • Evans, Murray. 2011. “Apple Teams Up to Use iPhone to Save Cherokee Language.” Huffington Post (May 25).
    • Feeling, Durbin. 1975. Cherokee-English Dictionary. Tahlequah: Cherokee Nation of Oklahoma.
    • Feeling, Durbin and Joseph Erb. 2016. Interview with Durbin Feeling, Tahlequah, Oklahoma. 30 July.
    • Ginsburg, Faye. 1991. “Indigenous Media: Faustian Contract or Global Village?” Cultural Anthropology 6:1. 92-112.
    • Ginsburg, Faye. 2008. “Rethinking the Digital Age.” In Global Indigenous Media: Culture, Poetics, and Politics, edited by Pamela Wilson and Michelle Stewart. Durham: Duke University Press. 287-306.
    • Goriunova, Olga and Alexei Shulgin. 2008. “Glitch.” In Software Studies: A Lexicon, edited by Matthew Fuller. Cambridge, MA: MIT Press. 110-18.
    • Hearne, Joanna. 2017. “Native to the Device: Thoughts on Digital Indigenous Studies.” Studies in American Indian Literatures 29:1. 3-26.
    • Hermes, Mary, et al. 2016. “New Domains for Indigenous Language Acquisition and Use in the USA and Canada.” In Indigenous Language Revitalization in the Americas, edited by Teresa L. McCarty and Serafin M. Coronel-Molina. London: Routledge. 269-291.
    • Hudson, Brian. 2016. “If Sequoyah Was a Cyberpunk.” 2nd Annual Symposium on the Future Imaginary, August 5th, University of British Columbia-Okanagan, Kelowna, B.C.
    • Latour, Bruno. 1987. Science in Action: How to Follow Scientists and Engineers through Society. Cambridge, MA: Harvard University Press.
    • Latour, Bruno. 2005. Reassembling the Social: An Introduction to Actor-Network Theory. Oxford: Oxford University Press.
    • Lewis, Jason. 2014. “A Better Dance and Better Prayers: Systems, Structures, and the Future Imaginary in Aboriginal New Media.” In Coded Territories: Tracing Indigenous Pathways in New Media Art, edited by Steven Loft and Kerry Swanson. Calgary: University of Calgary Press. 49-78.
    • Manovich, Lev. 2002. The Language of New Media. Cambridge, MA: MIT Press.
    • Nakamura, Lisa. 2013. “Glitch Racism: Networks as Actors within Vernacular Internet Theory.” Culture Digitally.
    • Nakamura, Lisa. 2014. “Indigenous Circuits: Navajo Women and the Racialization of Early Electronic Manufacture.” American Quarterly 66:4. 919-941.
    • Palmer, Mark. 2016. “Kiowa Storytelling around a Map.” In Travis and von Lünen (2016). 63-73.
    • Palmer, Mark. 2013. “(In)digitizing Cáuigú Historical Geographies: Technoscience as a Postcolonial Discourse.” In History and GIS: Epistemologies, Considerations and Reflections, edited by A. von Lünen and C. Travis. Dordrecht, NLD: Springer Publishing. 39-58.
    • Palmer, Mark. 2012. “Theorizing Indigital Geographic Information Networks.” Cartographica: The International Journal for Geographic Information and Geovisualization 47:2. 80-91.
    • Palmer, Mark. 2009. “Engaging with Indigital Geographic Information Networks.” Futures: The Journal of Policy, Planning and Futures Studies 41. 33-40.
    • Palmer, Mark and Robert Rundstrom. 2013. “GIS, Internal Colonialism, and the U.S. Bureau of Indian Affairs.” Annals of the Association of American Geographers 103:5. 1142-1159.
    • Raheja, Michelle. 2011. Reservation Reelism: Redfacing, Visual Sovereignty, and Representations of Native Americans in Film. Lincoln: University of Nebraska Press.
    • Simpson, Audra. 2014. Mohawk Interruptus: Political Life Across the Borders of Settler States. Durham: Duke University Press.
    • Travis, C. and A. von Lünen. 2016. The Digital Arts and Humanities. Basel, Switzerland: Springer.
    • Vizenor, Gerald. 2000. Fugitive Poses: Native American Indian Scenes of Absence and Presence. Lincoln: University of Nebraska Press.
    • Wolfe, Patrick. 2006. “Settler Colonialism and the Elimination of the Native.” Journal of Genocide Research 8:4. 387-409.

     

  • David Golumbia — The Digital Turn


    David Golumbia

    Is there, was there, will there be, a digital turn? In (cultural, textual, media, critical, all) scholarship, in life, in society, in politics, everywhere? What would its principles be?

    The short prompt I offered to the contributors to this special issue did not presume to know the answers to these questions.

    That means, I hope, that these essays join a growing body of scholarship and critical writing (much, though not by any means all, of it discussed in the essays that make up this collection) that suspends judgment about certain epochal assumptions built deep into the foundations of too much practice, thought, and even scholarship about just these questions.

    • In “The New Pythagoreans,” Chris Gilliard and Hugh Culik look closely at the long history of Pythagorean mystic belief in the power of mathematics and its near-exact parallels in contemporary promotion of digital technology, and especially surrounding so-called big data.
    • In “From Megatechnic Bribe to Megatechnic Blackmail: Mumford’s ‘Megamachine’ after the Digital Turn,” Zachary Loeb asks about the nature of the literal and metaphorical machines around us via a discussion of the work of the 20th century writer and social critic Lewis Mumford, one of the thinkers who most fully anticipated the digital revolution and understood its likely consequences.
    • In “Digital Proudhonism,” Gavin Mueller writes that “a return to Marx’s critique of Proudhon will aid us in piercing through the Digital Proudhonist mystifications of the Internet’s effects on politics and industry and reformulate both a theory of cultural production under digital capitalism as well as radical politics of work and technology for the 21st century.”
    • In “Mapping Without Tools: What the Digital Turn Can Learn from the Cartographic Turn,” Tim Duffy pushes back “against the valorization of ‘tools’ and ‘making’ in the digital turn, particularly its manifestation in digital humanities (DH), by reflecting on illustrative examples of the cartographic turn, which, from its roots in the sixteenth century through to J.B. Harley’s explosive provocation in 1989 (and beyond) has labored to understand the relationship between the practice of making maps and the experiences of looking at and using them. By considering the stubborn and defining spiritual roots of cartographic research and the way fantasies of empiricism helped to hide the more nefarious and oppressive applications of their work, I hope to provide a mirror for the state of the digital humanities, a field always under attack, always defining and defending itself, and always fluid in its goals and motions.”
    • Joseph Erb, Joanna Hearne, and Mark Palmer with Durbin Feeling, in “Origin Stories in the Genealogy of Cherokee Language Technology,” argue that “the surge of critical work in digital technology and new media studies has rarely acknowledged the centrality of Indigeneity to our understanding of systems such as mobile technologies, major programs such as Geographic Information Systems (GIS), digital aesthetic forms such as animation, or structural and infrastructural elements of hardware, circuitry, and code.”
    • In “Artificial Saviors,” tante examines the pseudo-religious and pseudo-scientific rhetoric found at a surprising rate among digital technology developers and enthusiasts: “When AI morphed from idea or experiment to belief system, hackers, programmers, ‘data scientists,’ and software architects became the high priests of a religious movement that the public never identified and parsed as such.”
    • In “The Endless Night of Wikipedia’s Notable Woman Problem,” Michelle Moravec “takes on one of the ‘tests’ used to determine whether content is worthy of inclusion in Wikipedia, notability, to explore how the purportedly neutral concept works against efforts to create entries about female historical figures.”
    • In “The Computational Unconscious,” Jonathan Beller interrogates the “penetration of the digital, rendering early on the brutal and precise calculus of the dimensions of cargo-holds in slave ships and the sparse economic accounts of ship ledgers of the Middle Passage, double entry bookkeeping, the rationalization of production and wages in the assembly line, and more recently, cameras and modern computing.”
    • In “What Indigenous Literature Can Bring to Electronic Archives,” Siobhan Senier asks, “How can the insights of the more ethnographically oriented Indigenous digital archives inform digital literary collections, and vice versa? How do questions of repatriation, reciprocity, and culturally sensitive contextualization change, if at all, when we consider Indigenous writing?”
    • Rob Hunter provides the following abstract of “The Digital Turn and the Ethical Turn: Depoliticization in Digital Practice and Political Theory”:

      The digital turn is associated with considerable enthusiasm for the democratic or even emancipatory potential of networked computing. Free, libre, and open source (FLOSS) developers and maintainers frequently endorse the claim that the digital turn promotes democracy in the form of improved deliberation and equalized access to information, networks, and institutions. Interpreted in this way, democracy is an ethical practice rather than a form of struggle or contestation. I argue that this depoliticized conception of democracy draws on commitments—regarding personal autonomy, the ethics of intersubjectivity, and suspicion of mass politics—that are also present in recent strands of liberal political thought. Both the rhetorical strategies characteristic of FLOSS as well as the arguments for deliberative democracy advanced within contemporary political theory share similar contradictions and are vulnerable to similar critiques—above all in their pathologization of disagreement and conflict. I identify and examine the contradictions within FLOSS, particularly those between commitments to existing property relations and the championing of individual freedom. I conclude that, despite the real achievements of the FLOSS movement, its depoliticized conception of democracy is self-inhibiting and tends toward quietistic refusals to consider the merits of collective action or the necessity of social critique.

    • John Pat Leary, in “Innovation and the Neoliberal Idioms of Development,” “explores the individualistic, market-based ideology of ‘innovation’ as it circulates from the English-speaking first world to the so-called third world, where it supplements, when it does not replace, what was once more exclusively called ‘development.’” He works “to define the ideology of ‘innovation’ that undergirds these projects, and to dissect the Anglo-American ego-ideal that it circulates. As an ideology, innovation is driven by a powerful belief, not only in technology and its benevolence, but in a vision of the innovator: the autonomous visionary whose creativity allows him to anticipate and shape capitalist markets.”
    • Annemarie Perez, in “UndocuDreamers: Public Writing and the Digital Turn,” writes of a “paradox” she finds in her work with students who belong to communities targeted by recent immigration enforcement crackdowns and the default assumptions about “open” and “public” found in so much digital rhetoric: “My students should write in public. Part of what they are learning in Chicanx studies is about the importance of their voices, of their experiences and their stories are ones that should be told. Yet, given the risks in discussing migration and immigration through the use of public writing, I wonder how I as an instructor should either encourage or discourage students from writing their lives, their experiences as undocumented migrants, experiences which have touched every aspect of their lives.”
    • Gretchen Soderlund, in “Futures of Journalism’s Past (or, Pasts of Journalism’s Future),” looks at discourses of “the future” in journalism from the 19th and 20th centuries, in order to help frame current discourses about journalism’s “digital future,” in part because “when it comes to technological and economic speedup, journalism may be the canary in the mine.”
    • In “The Singularity in the 1790s: Toward a Prehistory of the Present With William Godwin and Thomas Malthus,” Anthony Galluzzo examines the often-misunderstood and misrepresented writings of William Godwin, and also those of Thomas Malthus, to demonstrate how far back in English-speaking political history go the roots of today’s technological Prometheanism, and how destructive it can be, especially for the political left.

    “Digital Turn” Table of Contents

  • Richard Hill — Knots of Statelike Power (Review of Harcourt, Exposed: Desire and Disobedience in the Digital Age)


    a review of Bernard Harcourt, Exposed: Desire and Disobedience in the Digital Age (Harvard, 2015)

    by Richard Hill

    ~

    This is a seminal and important book, which should be studied carefully by anyone interested in the evolution of society in light of the pervasive impact of the Internet. In a nutshell, the book documents how and why the Internet turned from a means to improve our lives into what appears to be a frightening dystopia driven by the collection and exploitation of personal data, data that most of us willingly hand over with little or no care for the consequences. “In our digital frenzy to share snapshots and updates, to text and videochat with friends and lovers … we are exposing ourselves‒rendering ourselves virtually transparent to anyone with rudimentary technological capabilities” (page 13 of the hardcover edition).

    The book meets its goals (25) of tracing the emergence of a new architecture of power relations, documenting its effects on our lives, and exploring how to resist and disobey (though this last only succinctly). As the author correctly says (28), metaphors matter, and we need to re-examine them closely, in particular the so-called free flow of data.

    As the author cogently points out, quoting Media Studies scholar Siva Vaidhyanathan, we “assumed digitization would level the commercial playing field in wealthy economies and invite new competition into markets that had always had high barriers to entry.” We “imagined a rapid spread of education and critical thinking once we surmounted the millennium-old problems of information scarcity and maldistribution” (169).

    “But the digital realm does not so much give us access to truth as it constitutes a new way for power to circulate throughout society” (22). “In our digital age, social media companies engage in surveillance, data brokers sell personal information, tech companies govern our expression of political views, and intelligence agencies free-ride off e-commerce. … corporations and governments [are enabled] to identify and cajole, to stimulate our consumption and shape our desires, to manipulate us politically, to watch, surveil, detect, predict, and, for some, punish. In the process, the traditional limits placed on the state and on governing are being eviscerated, as we turn more and more into marketized malleable subjects who, willingly or unwillingly, allow ourselves to be nudged, recommended, tracked, diagnosed, and predicted by a blurred amalgam of governmental and commercial initiative” (187).

    “The collapse of the classic divide between the state and society, between the public and private sphere, is particularly debilitating and disarming. The reason is that the boundaries of the state had always been imagined in order to limit them” (208). “What is emerging in the place of separate spheres [of government and private industry] is a single behemoth of a data market: a colossal market for personal data” (198). “Knots of statelike power: that is what we face. A tentacular amalgam of public and private institutions … Economy, society, and private life melt into a giant data market for everyone to trade, mine, analyze, and target” (215). “This is all the more troubling because the combinations we face today are so powerful” (210).

    As a consequence, “Digital exposure is restructuring the self … The new digital age … is having profound effects on our analogue selves. … it is radically transforming our subjectivity‒even for those, perhaps even more, who believe they have nothing to fear” (232). “Mortification of the self, in our digital world, happens when subjects voluntarily cede their private attachments and their personal privacy, when they give up their protected personal space, cease monitoring their exposure on the Internet, let go of their personal data, and expose their intimate lives” (233).

    As the book points out, quoting Software Freedom Law Center founder Eben Moglen, it is justifiable to ask whether “any form of democratic self-government, anywhere, is consistent with the kind of massive, pervasive, surveillance into which the United States government has led not only its people but the world” (254). “This is a different form of despotism, one that might take hold only in a democracy: one in which people lose the will to resist and surrender with broken spirit” (255).

    The book opens with an unnumbered chapter that masterfully reminds us of the digital society we live in: a world in which both private companies and government intelligence services (also known as spies) read our e-mails and monitor our web browsing. Just think of “the telltale advertisements popping up on the ribbon of our search screen, reminding us of immediately past Google or Bing queries. We’ve received the betraying e-mails in our spam folders” (2). As the book says, quoting journalist Yasha Levine, social media has become “a massive surveillance operation that intercepts and analyses terabytes of data to build and update complex psychological profiles on hundreds of millions of people all over the world‒all of it in real time” (7). “At practically no cost, the government has complete access to people’s digital selves” (10).

    We provide all this data willingly (13), because we have no choice and/or because we “wish to share our lives with loved ones and friends” (14). We crave digital connections and recognition and “Our digital cravings are matched only by the drive and ambition of those who are watching” (14). “Today, the drive to know everything, everywhere, at every moment is breathtaking” (15).

    But “there remain a number of us who continue to resist. And there are many more who are ambivalent about the loss of privacy or anonymity, who are deeply concerned or hesitant. There are some who anxiously warn us about the dangers and encourage us to maintain reserve” (13).

    “And yet, even when we hesitate or are ambivalent, it seems there is simply no other way to get things done in the new digital age” (14), be it airline tickets, hotel reservations, buying goods, booking entertainment. “We make ourselves virtually transparent for everyone to see, and in so doing, we allow ourselves to be shaped in unprecedented ways, intentionally or unwittingly … we are transformed and shaped into digital subjects” (14). “It’s not so much a question of choice as a feeling of necessity” (19). “For adolescents and young adults especially, it is practically impossible to have a social life, to have friends, to meet up, to go on dates, unless we are negotiating the various forms of social media and mobile technology” (18).

    Most have become dulled by blind faith in markets, the neoliberal mantra (better to let private companies run things than the government), fear of terrorism‒dulled into believing that, if we have nothing to hide, then there is nothing to fear (19). Even though private companies, and governments, know far more about us than a totalitarian regime such as that of East Germany “could ever have dreamed” (20).

    “We face today, in advanced liberal democracies, a radical new form of power in a completely altered landscape of political and social possibilities” (17). “Those who govern, advertise, and police are dealing with a primary resource‒personal data‒that is being handed out for free, given away in abundance, for nothing” (18).

    According to the book, “There is no conspiracy here, nothing untoward.” But the author probably did not have access to Shawn M. Powers and Michael Jablonski’s The Real Cyberwar: The Political Economy of Internet Freedom (2015), published around the same time as Harcourt’s book, which shows that the current situation was in fact created, or at least facilitated, by deliberate actions of the US government (which were open, not secret), resulting in what the book calls, quoting journalist James Bamford, “a surveillance-industrial empire” (27).

    The observations and conclusions outlined above are meticulously justified, with numerous references, in the numbered chapters of the book. Chapter 1 explains how analogies of the current surveillance regime to Orwell’s 1984 are imperfect because, unlike in Orwell’s imagined world, today most people desire to provide their personal data and do so voluntarily (35). “That is primarily how surveillance works today in liberal democracies: through the simplest desires, curated and recommended to us” (47).

    Chapter 2 explains how the current regime is not really a surveillance state in the classical sense of the term: it is a surveillance society because it is based on the collaboration of government, the private sector, and people themselves (65, 78-79). Some believe that government surveillance can prevent or reduce terrorist attacks (55-56), never mind that it might violate constitutional rights (56-57), or be ineffective, or that terrorist attacks in liberal democracies have resulted in far fewer fatalities than, say, traffic accidents or opioid overdoses.

    Chapter 3 explains how the current regime is not actually an instantiation of Jeremy Bentham’s Panopticon, because we are not surveilled in order to be punished‒on the contrary, we expose ourselves in order to obtain something we want (90), and we don’t necessarily realize the extent to which we are being surveilled (91). As the book puts it, Google strives “to help people get what they want” by collecting and processing as much personal data as possible (103).

    Chapter 4 explains how narcissism drives the willing exposure of personal data (111). “We take pleasure in watching [our friends], ‘following’ them, ‘sharing’ their information‒even while we are, unwittingly, sharing our every keyboard stroke” (114). “We love watching others and stalking their digital traces” (117).

    Yet opacity is the rule for corporations‒as the book says, quoting Frank Pasquale (124-125), “Internet companies collect more and more data on their users but fight regulations that would let those same users exercise some control over the resulting digital dossiers.” In this context, it is worth noting the recent proposals, analyzed here, here, and here, to the World Trade Organization that would go in the direction favored by dominant corporations.

    The book explains in summary fashion the importance of big data (137-140). For an additional discussion, with extensive references, see section 1 of my submission to the Working Group on Enhanced Cooperation. As the book correctly notes, “In the nineteenth century, it was the government that generated data … But now we have all become our own publicists. The production of data has become democratized” (140).

    Chapter 5 explains how big data, and its analysis, is fundamentally different from the statistics that were collected, analyzed, and published in the past by governments. The goal of statistics is to understand and possibly predict the behavior of some group of people who share some characteristics (e.g. they live in a particular geographical area, or are of the same age). The goal of big data is to target and predict individuals (158, 161-163).

    Chapter 6 explains how we have come to accept the loss of privacy and control of our personal data (166-167). A change in outlook, largely driven by an exaggerated faith in free enterprise (168 and 176), “has made it easier to commodify privacy, and, gradually, to eviscerate it” (170). “Privacy has become a form of private property” (176).

    The book documents well the changes in the US Supreme Court’s views of privacy, which have moved from defending a human right to balancing privacy with national security and commercial interests (172-175). Curiously, the book does not mention the watershed Smith v. Maryland case, in which the US Supreme Court held that telephone metadata is not protected by the right to privacy, nor the US Electronic Communications Privacy Act, under which many e-mails are not protected either.

    The book mentions the incestuous ties between the intelligence community, telecommunications companies, multinational companies, and military leadership that have facilitated the implementation of the current surveillance regime (178); these ties are exposed and explained in greater detail in Powers and Jablonski’s The Real Cyberwar. This chapter ends with an excellent explanation of how digital surveillance records are in no way comparable to the old-fashioned paper files that were collected in the past (181).

    Chapter 7 explores the emerging dystopia, engendered by the fact that “The digital economy has torn down the conventional boundaries between governing, commerce, and private life” (187). In a trend that should be frightening, private companies now exercise censorship (191), practice data mining on scales that are hard to imagine (194), control worker performance by means beyond the dreams of any Taylorist (196), and even aspire to “predict consumer preferences better than consumers themselves can” (198).

    The size of the data brokerage market is huge, and data on individuals is increasingly used to make decisions about them, e.g. whether they can obtain a loan (198-208). “Practically none of these scores [calculated from personal data] are revealed to us, and their accuracy is often haphazard” (205). As noted above, we face an interdependent web of private and public interests that collect, analyze, refine, and exploit our personal data‒without any meaningful supervision or regulation.

    Chapter 8 explains how digital interactions are reconfiguring our self-images, our subjectivity. We know, albeit at times only implicitly, that we are being surveilled and this likely affects the behavior of many (218). Being deprived of privacy affects us, much as would being deprived of property (229). We have voluntarily given up much of our privacy, believing either that we have no choice but to accept surveillance, or that the surveillance is in our interests (233). So it is our society as a whole that has created, and nurtures, the surveillance regime that we live in.

    As shown in Chapter 9, that regime is a form of digital incarceration. We are surveilled even more closely than are people obliged by court order to wear electronic tracking devices (237). Perhaps a future smart watch will even administer sedatives (or whatever) when it detects, by analyzing our body functions and comparing with profiles downloaded from the cloud, that we would be better off being sedated (237). Or perhaps such a watch will be hijacked by malware controlled by an intelligence service or by criminals, thus turning a seemingly free choice into involuntary constraints (243, 247).

    Chapter 10 shows in detail how, as already noted, the current surveillance regime is not compatible with democracy. The book cites Tocqueville to remind us that democracy can become despotic and result in a situation where “people lose the will to resist and surrender with broken spirit” (255). The book summarily presents well-known data regarding the low voter turnouts in the United States, a topic covered in full detail in Robert McChesney’s Digital Disconnect: How Capitalism is Turning the Internet Against Democracy (2014), which explains how the Internet is having a negative effect on democracy. Yet “it remains the case that the digital transparency and punishment issues are largely invisible to democratic theory and practice” (216).

    So, what is to be done? Chapter 11 extols the revelations made by Edward Snowden and those published by Julian Assange (WikiLeaks). It mentions various useful self-help tools, such as “I Fight Surveillance” and “Security in a Box” (270-271). While those tools are useful, they are not at present used pervasively and thus don’t really affect the current surveillance regime. We need more emphasis on making the tools available and on convincing more people to use them.

    As the book correctly says, an effective measure would be to carry the privatization model to its logical extreme (274): since personal data is valuable, those who use it should pay us for it. As already noted, the industry that is thriving from the exploitation of our personal data is well aware of this potential threat, and has worked hard to obtain binding international norms, in the World Trade Organization, that would enshrine the “free flow of data”, where “free” in the sense of freedom of information is used as a Trojan Horse for the real objective, which is “free” in the sense of no cost and no compensation for the true owners of the data, we the people. As the book correctly mentions, civil society organizations have resisted this trend and made proposals that go in the opposite direction (276), including a proposal to enshrine the necessary and proportionate principles in international law.

    Chapter 12 concludes the book by pointing out, albeit very succinctly, that mass resistance is necessary, and that it need not be organized in traditional ways: it can be leaderless, diffuse, and pervasive (281). In this context, I refer to the work of the JustNet Coalition and of the fledgling Internet Social Forum (see also here and here).

    Again, this book is essential reading for anybody who is concerned about the current state of the digital world, and the direction in which it is moving.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.

    Back to the essay

  • Richard Hill — States, Governance, and Internet Fragmentation (Review of Mueller, Will the Internet Fragment?)

    Richard Hill — States, Governance, and Internet Fragmentation (Review of Mueller, Will the Internet Fragment?)

    a review of Milton Mueller, Will the Internet Fragment? Sovereignty, Globalization and Cyberspace (Polity, 2017)

    by Richard Hill

    ~

    Like other books by Milton Mueller, Will the Internet Fragment? is a must-read for anybody who is seriously interested in the development of Internet governance and its likely effects on other walks of life.  This is true because, and not despite, the fact that it is a tract that does not present an unbiased view. On the contrary, it advocates a certain approach, namely a utopian form of governance which Mueller refers to as “popular sovereignty in cyberspace”.

    Mueller, Professor of Information Security and Privacy at Georgia Tech, is an internationally prominent scholar specializing in the political economy of information and communication.  The author of seven books and scores of journal articles, his work informs not only public policy but also science and technology studies, law, economics, communications, and international studies.  His books Networks and States: The Global Politics of Internet Governance (MIT Press, 2010) and Ruling the Root: Internet Governance and the Taming of Cyberspace (MIT Press, 2002) are acclaimed scholarly accounts of the global governance regime emerging around the Internet.

    Most of Will the Internet Fragment? consists of a rigorous analysis of what has been commonly referred to as “fragmentation,” showing that very different technological and legal phenomena have been conflated in ways that do not favour productive discussions.  That so-called “fragmentation” is usually defined as the contrary of the desired situation in which “every device on the Internet should be able to exchange data packets with any other device that was willing to receive them” (p. 6 of the book, citing Vint Cerf).  But, as Mueller correctly points out, not all end-points of the Internet can reach all other end-points at all times, and there may be very good reasons for that (e.g. corporate firewalls, temporary network outages, etc.).  Mueller then shows how network effects (the fact that the usefulness of a network increases as it becomes larger) will tend to prevent or counter fragmentation: a subset of the network is less useful than is the whole.  He also shows how network effects can prevent the creation of alternative networks: once everybody is using a given network, why switch to an alternative that few are using?  As Mueller aptly points out (pp. 63-66), the slowness of the transition to IPv6 is due to this type of network effect.

    The key contribution of this book is that it clearly identifies the real question of interest to those who are concerned about the governance of the Internet and its impact on much of our lives.  That question (which might have been a better subtitle) is: “to what extent, if any, should Internet policies be aligned with national borders?”  (See in particular pp. 71, 73, 107, 126 and 145).  Mueller’s answer is basically “as little as possible, because supra-national governance by the Internet community is preferable”.  This answer is presumably motivated by Mueller’s view that “institutions shift power from states to society” (p. 116), which implies that “society” has little power in modern states.  But (at least ideally) states should be the expression of a society (as Mueller acknowledges on pp. 124 and 136), so it would have been helpful if Mueller had elaborated on the ways (and there are many) in which he believes states do not reflect society and in the ways in which so-called multi-stakeholder models would not be worse and would not result in a denial of democracy.

    Before commenting on Mueller’s proposal for supra-national governance, it is worth commenting on some areas where a more extensive discussion would have been warranted.  We note, however, that the book is part of a series that is deliberately intended to be short and accessible to a lay public.  So Mueller had a 30,000 word limit and tried to keep things written in a way that non-specialists and non-scholars could access.  This no doubt largely explains why he didn’t cover certain topics in more depth.

    Be that as it may, the discussion would have been improved by being placed in the long-term context of the steady decrease in national sovereignty that started in 1648, when sovereigns agreed in the Treaty of Westphalia to refrain from interfering in the religious affairs of foreign states, and that accelerated in the 20th century.  And by being placed in the short-term context of the dominance by the USA as a state (which Mueller acknowledges in passing on p. 12), and US companies, of key aspects of the Internet and its governance.  Mueller is deeply aware of the issues and has discussed them in his other books, in particular Ruling the Root and Networks and States, so it would have been nice to see the topic treated here, with references to the end of the Cold War and what appears to be the re-emergence of some sort of equivalent international tension (albeit not for the same reasons and with different effects, at least for what concerns cyberspace).  It would also have been preferable to include at least some mention of the literature on the negative economic and social effects of current Internet governance arrangements.

    It is telling that, in Will the Internet Fragment?, Mueller starts his account with the 2014 NetMundial event, without mentioning that it took place in the context of the outcomes of the World Summit of the Information Society (WSIS, whose genesis, dynamics, and outcomes Mueller well analyzed in Networks and States), and without mentioning that the outcome document of the 2015 UN WSIS+10 Review reaffirmed the WSIS outcomes and merely noted that Brazil had organized NetMundial, which was, in context, an explicit refusal to note (much less to endorse) the NetMundial outcome document.

    The UN’s reaffirmation of the WSIS outcomes is significant because, as Mueller correctly notes, the real question that underpins all current discussions of Internet governance is “what is the role of states?,” and the Tunis Agenda states: “Policy authority for Internet-related public policy issues is the sovereign right of States. They have rights and responsibilities for international Internet-related public policy issues.”

    Mueller correctly identifies and discusses the positive externalities created by the Internet (pp. 44-48).  It would have been better if he had noted that there are also negative externalities, in particular regarding security (see section 2.8 of my June 2017 submission to ITU’s CWG-Internet), and that the role of states includes internalizing such externalities, as well as preventing anti-competitive behavior.

    It is also telling that Mueller never explicitly mentions a principle that is no longer seriously disputed, and that was explicitly enunciated in the formal outcome of the WSIS+10 Review, namely that offline law applies equally online.  Mueller does mention some issues related to jurisdiction, but he does not place those in the context of the fundamental principle that cyberspace is subject to the same laws as the rest of the world: as Mueller himself acknowledges (p. 145), allegations of cybercrime are judged by regular courts, not cyber-courts, and if you are convicted you will pay a real fine or be sent to a real prison, not to a cyber-prison.  But national jurisdiction is not just about security (p. 74 ff.), it is also about legal certainty for commercial dealings, such as enforcement of contracts.  There are an increasing number of activities that depend on the Internet, but that also depend on the existence of known legal regimes that can be enforced in national courts.

    And what about the tension between globalization and other values such as solidarity and cultural diversity?  As Mueller correctly notes (p. 10), the Internet is globalization on steroids.  Yet cultural values differ around the world (p. 125).  How can we get the benefits of both an unfragmented Internet and local cultural diversity (as opposed to the current trend to impose US values on the rest of the world)?

    While dealing with these issues in more depth would have complicated the discussion, it also would have made it more valuable, because the call for direct rule of the Internet by and for Internet users must either be reconciled with the principle that offline law applies equally online, or be combined with a reasoned argument for the abandonment of that principle.  As Mueller so aptly puts it (p. 11): “Internet governance is hard … also because of the mismatch between its global scope and the political and legal institutions for responding to societal problems.”

    Since most laws, and almost all enforcement mechanisms are national, the influence of states on the Internet is inevitable.  Recall that the idea of enforceable rules (laws) dates back to at least 1700 BC and has formed an essential part of all civilizations in history.  Mueller correctly posits on p. 125 that a justification for territorial sovereignty is to restrict violence (only the state can legitimately exercise it), and wonders why, in that case, the entire world does not have a single government.  But he fails to note that, historically, at times much of the world was subject to a single government (think of the Roman Empire, the Mongol Empire, the Holy Roman Empire, the British Empire), and he does not explore the possibility of expanding the existing international order (treaties, UN agencies, etc.) to become a legitimate democratic world governance (which of course it is not, in part because the US does not want it to become one).  For example, a concrete step in the direction of using existing governance systems has recently been proposed by Microsoft: a Digital Geneva Convention.

    Mueller explains why national borders interfere with certain aspects of certain Internet activities (pp. 104, 106), but national borders interfere with many activities.  Yet we accept them because there doesn’t appear to be any “least worst” alternative.  Mueller does acknowledge that states have power, and rightly calls for states to limit their exercise of power to their own jurisdiction (p. 148).  But he posits that such power “carries much less weight than one would think” (p. 150), without justifying that far-reaching statement.  Indeed, Mueller admits that “it is difficult to conceive of an alternative” (p. 73), but does not delve into the details sufficiently to show convincingly how the solution that he sketches would not result in greater power by dominant private companies (and even corporatocracy or corporatism), increasing income inequality, and a denial of democracy.  For example, without the power of the state in the form of consumer protection measures, how can one ensure that private intermediaries would “moderate content based on user preferences and reports” (p. 147) as opposed to moderating content so as to maximize their profits?  Mueller assumes that there would be a sufficient level of competition, resulting in self-correcting forces and accountability (p. 129); but current trends are just the opposite: we see increasing concentration and domination in many aspects of the Internet (see section 2.11 of my June 2017 submission to ITU’s CWG-Internet) and some competition law authorities have found that some abuse of dominance has taken place.

    It seems to me that Mueller too easily concludes that “a state-centric approach to global governance cannot easily co-exist with a multistakeholder regime” (p. 117), without first exploring the nuances of multi-stakeholder regimes and the ways that they could interface with existing institutions, which include intergovernmental bodies as well as states.  As I have stated elsewhere: “The current arrangement for global governance is arguably similar to that of feudal Europe, whereby multiple arrangements of decision-making, including the Church, cities ruled by merchant-citizens, kingdoms, empires and guilds co-existed with little agreement as to which actor was actually in charge over a given territory or subject matter.  It was in this tangled system that the nation-state system gained legitimacy precisely because it offered a clear hierarchy of authority for addressing issues of the commons and provision of public goods.”

    Which brings us to another key point that Mueller does not consider in any depth: if the Internet is a global public good, then its governance must take into account the views and needs of all the world’s citizens, not just those that are privileged enough to have access at present.  But Mueller’s solution would restrict policy-making to those who are willing and able to participate in various so-called multi-stakeholder forums (apparently Mueller does not envisage a vast increase in participation and representation in these; p. 120).  Apart from the fact that that group is not a community in any real sense (a point acknowledged on p. 139), it comprises, at present, only about half of humanity, and even much of that half would not be able to participate because discussions take place primarily in English, and require significant technical knowledge and significant time commitments.

    Mueller’s path for the future appears to me to be a modern version of the International Ad Hoc Committee (IAHC), but Mueller would probably disagree, since he is of the view that the IAHC was driven by intergovernmental organizations.  In any case, the IAHC work failed to be seminal because of the unilateral intervention of the US government, well described in Ruling the Root, which resulted in the creation of ICANN, thus sparking discussions of Internet governance in WSIS and elsewhere.  While Mueller is surely correct when he states that new governance methods are needed (p. 127), it seems a bit facile to conclude that “the nation-state is the wrong unit” and that it would be better to rely largely on “global Internet governance institutions rooted in non-state actors” (p. 129), without explaining how such institutions would be democratic and representative of all of the world’s citizens.

    Mueller correctly notes (p. 150) that, historically, there have been major changes in sovereignty: the emergence and fall of empires, the creation of new nations, changes in national borders, etc.  But he fails to note that most of those changes were the result of significant violence and use of force.  If, as he hopes, the “Internet community” is to assert sovereignty and displace the existing sovereignty of states, how will it do so?  Through real violence?  Through cyber-violence?  Through civil disobedience (e.g. migrating to bitcoin, or implementing strong encryption no matter what governments think)?  By resisting efforts to move discussions into the World Trade Organization? Or by persuading states to relinquish power willingly?  It would have been good if Mueller had addressed, at least summarily, such questions.

    Before concluding, I note a number of more-or-less minor errors that might lead readers to imprecise understandings of important events and issues.  For example, p. 37 states that “the US and the Internet technical community created a global institution, ICANN”: in reality, the leaders of the Internet technical community obeyed the unilateral diktat of the US government (at first somewhat reluctantly and later willingly) and created a California non-profit company, ICANN.  And ICANN is not insulated from jurisdictional differences; it is fully subject to US laws and US courts.  The discussion on pp. 37-41 fails to take into account the fact that a significant portion of the DNS, the ccTLDs, is already aligned with national borders, and that there are non-national telephone numbers; the real differences between the DNS and telephone numbers are that most URLs are non-national, whereas few telephone numbers are non-national; that national telephone numbers are given only to residents of the corresponding country; and that there is an international real-time mechanism for resolving URLs that everybody uses, whereas each telephone operator has to set up its own resolving mechanism for telephone numbers.  Page 47 states that OSI was “developed by Europe-centered international organizations”, whereas actually it was developed by private companies from both the USA (including AT&T, Digital Equipment Corporation, Hewlett-Packard, etc.) and Europe working within global standards organizations (IEC, ISO, and ITU), who all happen to have secretariats in Geneva, Switzerland; whereas the Internet was initially developed and funded by an arm of the US Department of Defence and the foundation of the WWW was initially developed in a European intergovernmental organization.  
Page 100 states that “The ITU has been trying to displace or replace ICANN since its inception in 1998”; whereas a correct statement would be “While some states have called for the ITU to displace or replace ICANN since its inception in 1998, such proposals have never gained significant support and appear to have faded away recently.”  Not everybody thinks that the IANA transition was a success (p. 117), nor that it is an appropriate model for the future (pp. 132-135; 136-137), and it is worth noting that ICANN successfully withstood many challenges (p. 100) while it had a formal link to the US government; it remains to be seen how ICANN will fare now that it is independent of the US government.  ICANN and the RIRs do not have a “‘transnational’ jurisdiction created through private contracts” (p. 117); they are private entities subject to national law and the private contracts in question are also subject to national law (and enforced by national authorities, even if disputes are resolved by international arbitration).  I doubt that it is a “small step from community to nation” (p. 142), and it is not obvious why anti-capitalist movements (which tend to be internationalist) would “end up empowering territorial states and reinforcing alignment” (p. 147), when it is capitalist movements that rely on the power of territorial states to enforce national laws, for example regarding intellectual property rights.

    Despite these minor quibbles, this book, and its references (albeit not as extensive as one would have hoped), will be a valuable starting point for future discussions of internet alignment and/or “fragmentation.” Surely there will be much future discussion, and many more analyses and calls for action, regarding what may well be one of the most important issues that humanity now faces: the transition from the industrial era to the information era and the disruptions arising from that transition.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.

    Back to the essay

  • Daniel Greene – Digital Dark Matters

    Daniel Greene – Digital Dark Matters

    a review of Simone Browne, Dark Matters: On the Surveillance of Blackness (Duke University Press, 2015)

    by Daniel Greene

    ~

    The Book of Negroes was the first census of black residents of North America. In it, the British military took down the names of some three thousand ex-slaves between April and November of 1783, alongside details of appearance and personality, destination and, if applicable, previous owner. The self-emancipated—some free, some indentured to English or German soldiers—were seeking passage to Canada or Europe, and lobbied the defeated British Loyalists fleeing New York City for their place in the Book. The Book of Negroes thus functioned as “the first government-issued document for state-regulated migration between the United States and Canada that explicitly linked corporeal markers to the right to travel” (67). An index of slave society in turmoil, its data fields were populated with careful gradations of labor power, denoting the value of black life within slave capitalism: “nearly worn out,” “healthy negress,” “stout labourer.”  Much of the data in The Book of Negroes was absorbed from so-called Birch Certificates, issued by a British Brigadier General of that name, which acted as passports certifying the freedom of ex-slaves and their right to travel abroad. The Certificates became evidence submitted by ex-slaves arguing for their inclusion in the Book of Negroes, and became sites of contention for those slave-owners looking to reclaim people they saw as property.

    If, as Simone Browne argues in Dark Matters: On the Surveillance of Blackness, “the Book of Negroes [was] a searchable database for the future tracking of those listed in it” (83), the details of preparing, editing, monitoring, sorting and circulating these data become direct matters of (black) life and death. Ex-slaves would fight for their legibility within the system through their use of Birch Certificates and the like; but they had often arrived in New York in the first place through a series of fights to remain illegible to the “many start-ups in slave-catching” that arose to do the work of distant slavers. Aliases, costumes, forged documents and the like were on the one hand used to remain invisible to the surveillance mechanisms geared towards capture, and on the other hand used to become visible to the surveillance mechanisms—like the Book—that could potentially offer freedom. Those ex-slaves who failed to appear as the right sort of data were effectively “put on a no-sail list” (68), and either held in New York City or re-rendered into property and delivered back to the slave-owner.

    Start-ups, passports, no-sail lists, databases: These may appear anachronistic at first, modern technological thinking out of sync with colonial America. But Browne deploys these labels with care and precision, like much else in this remarkable book. Dark Matters reframes our contemporary thinking about surveillance, and digital media more broadly, through a simple question with challenging answers: What if our mental map of the global surveillance apparatus began not with 9/11 but with the slave ship? Surveillance is considered here not as a specific technological development but a practice of tracking people and putting them into place. Browne demonstrates how certain people have long been imagined as out of place and that technologies of control and order were developed in order to diagnose, map, and correct these conditions: “Surveillance is nothing new to black folks. It is a fact of antiblackness” (10). That this ”fact” is often invisible even in our studies of surveillance and digital media more broadly speaks, perversely, to the power of white supremacy to structure our vision of the world. Browne’s apparent anachronisms make stranger the techniques of surveillance with which we are familiar, revealing the dark matter that has structured their use and development this whole time. Difficult to visualize, Browne shows us how to trace this dark matter through its effects: the ordering of people into place, and the escape from that order through “freedom acts” of obfuscation, sabotage, and trickery.

    This then is a book about new (and very old) methods of research in surveillance studies in particular, and digital studies in general, centered in black studies—particularly the work of critical theorists of race such as Saidiya Hartman and Sylvia Wynter who find in chattel slavery a prototypical modernity. More broadly, it is a book about new ways of engaging with our technocultural present, centered in the black diasporic experience of slavery and its afterlife. Frantz Fanon is a key figure throughout. Browne introduces us to her own approach through an early reflection on the revolutionary philosopher’s dying days in Washington, DC, overcome with paranoia over the very real surveillance to which he suspected he was subjected. Browne’s FOIA requests to the CIA regarding their tracking of Fanon during his time at the National Institutes of Health Clinical Center returned only a newspaper clipping, a book review, and a heavily redacted FBI memo reporting on Fanon’s travels. So she digs further into the archive, finding in Fanon’s lectures at the University of Tunis, delivered in the late 1950s after being expelled from Algeria by French colonial authorities, a critical exploration of policing and surveillance. Fanon’s psychiatric imagination, granting such visceral connection between white supremacist institutions and lived black experience in The Wretched of the Earth, here addresses the new techniques of ‘control by quantification’—punch clocks, time sheets, phone taps, and CCTV—in factories and department stores, and the alienation engendered in the surveilled.

    Browne’s recovery of this work grounds a creative extension of Fanon’s thinking into surveillance practices and surveillance studies. From his concept of “epidermalization”—“the imposition of race on the body” (7)—Browne builds a theory of racializing surveillance. Like many other key terms in Dark Matters, this names an empirical phenomenon—the crafting of racial boundaries through tracking and monitoring—and critiques the “absented presence” (13) of race in surveillance studies. Its opposition is found in dark sousveillance, a revision of Steve Mann’s term for watching the watchers that, again, describes both the freedom acts of black folks against a visual field saturated with racism, as well as an epistemology capable of perceiving, studying, and deconstructing apparatuses of racial surveillance.

    Each chapter of Dark Matters presents a different archive of racializing surveillance paired with reflections on black cultural production Browne reads as dark sousveillance. At each turn, Browne encourages us to see in slavery and its afterlife new modes of control, old ways of studying them, and potential paths of resistance. Her most direct critique of surveillance studies comes in Chapter 1’s precise exegesis of the key ideas that emerge from reading Jeremy Bentham’s plans for the Panopticon and Foucault’s study of it—the signal archive and theory of the field—against the plans for the slave ship Brookes. It turns out Bentham travelled on a ship transporting slaves during the trip where he sketched out the Panopticon, a model penitentiary wherein, through the clever use of lights, mirrors, and partitions, prisoners are totally isolated from one another and never sure whether they are being monitored or not. The archetype for modern power as self-discipline is thus nurtured, counter to its own telling, alongside sovereign violence. Browne’s reading of archives from the slave ship, the auction block, and the plantation reveal the careful biopolitics that created “blackness as a saleable commodity in the Western Hemisphere” (42). She asks how “the view from ‘under the hatches’” of Bentham’s Turkish ship, transporting, in his words, “18 young negresses (slaves),” might change our narrative about the emergence of disciplinary power and the modern management of life as a resource. It becomes clear that the power to instill self-governance through surveillance did not subordinate but rather partnered with the brutal spectacle of sovereign power that was intended to educate enslaved people on the limits of their humanity. 
This correction to the Foucauldian narrative is sorely necessary in a field, and a general political conversation about surveillance, that too often focuses on the technical novelty of drones, to give one example, without a connection to a generation learning to fear the skies.

    “Stowage of the British slave ship Brookes under the regulated slave trade act of 1788.” Illustration. 1788. Library of Congress Rare Book and Special Collections Division Washington, D.C.

    These sorts of theoretical course corrections are among the most valuable lessons in Dark Matters. There is fastidious empirical work here, particularly in Chapter 2’s exploration of the Book of Negroes and colonial New York’s lantern laws requiring all black and indigenous people to bear lights after dark. But this empirical work is not the book’s focus, nor its main promise. That promise comes in prompting new empirical and political questions about how we see surveillance and what it means, and for whom, through an archaeology of black life under surveillance (indeed, Chapter 4, on airport surveillance, is the one I find weakest largely because it abandons this archaeological technique and focuses wholly on the present). Chapter 1’s reading of Charles William Tait’s prescriptions for slave management, for example, is part of a broader turn in the study of the history of capitalism where the roots of modern business practices like data-driven human resource management are traced to the supposedly pre-modern slave economy. Chapter 3’s assertion that slave branding “was a biometric technology…a measure of slavery’s making, marking, and marketing of the black subject as commodity” (91) does similar work, making strange the contemporary security technologies that purport to reveal racial truths which unwilling subjects do not give up. Facial recognition technologies and other biometrics are calibrated based on what Browne calls a “prototypical whiteness…privileged in enrollment, measurement, and recognition processes…reliant upon dark matter for its own meaning” (162). Particularly in the context of border control, these default settings reveal the calculations built into our security technologies regarding who “counts” enough to be recognized: calculations grounded in an unceasing desire for new means with which to draw clear-cut racial boundaries.

    The point here is not that a direct line of technological development can be drawn from brands to facial recognition or from lanterns to ankle bracelets. Rather, if racism, as Ruth Wilson Gilmore argues, is “the state-sanctioned or extralegal production and exploitation of group-differentiated vulnerability to premature death,” then what Browne points to are methods of group differentiation, the means by which the value of black lives are calculated and how those calculations are stored, transmitted, and concretized in institutional life. If Browne’s cultural studies approach neglects a sustained empirical engagement with a particular mode of racializing surveillance—say, the uneven geography produced by the Fugitive Slave Act, mentioned in passing in relation to “start-ups in slave catching”—it is because she has taken on the unenviable task of shifting the focus of whole fields to dark matter previously ignored, opening a series of doors through which readers can glimpse the technologies that make race.

    Here then is a space cleared for surveillance studies, and digital studies more broadly, in an historical moment when so many are loudly proclaiming that Black Lives Matter, when the dark sousveillance of smartphone recordings has made the violence of institutional racism impossible to ignore. Work in digital studies has readily and repeatedly unearthed the capitalist imperatives built into our phones, feeds, and friends lists. Shoshana Zuboff’s recent work on “surveillance capitalism” is perhaps a bellwether here: a rich theorization of the data accumulation imperative that transforms intra-capitalist competition, the nature of the contract, and the paths of everyday life. But her account of the growth of an extractive data economy that leads to a Big Other of behavior modification does not so far have a place for race.

    This is not a call on my part to sprinkle a missing ingredient atop a shoddy analysis in order to check a box. Zuboff is critiqued here precisely because she is among our most thoughtful, careful critics of contemporary capitalism. Rather, Browne’s account of surveillance capitalism—though she does not call it that—shows that race does not need to be introduced to the critical frame from outside. That dark matter has always been present, shaping what is visible even if it goes unseen itself. This manifests in at least two ways in Zuboff’s critique of the Big Other. First, her critique of Google’s accumulation of “data exhaust” is framed primarily as a ‘pull’ of ever more sites and sensors into Google’s maw, passively given up by users. But there is a great deal of “push” here as well. The accumulation of consumable data also occurs through the very human work of solving CAPTCHAs and scanning books. The latter is the subject of an iconic photo that shows the brown hand of a Google Books scanner—a low-wage subcontractor, index finger wrapped in plastic to avoid cuts from a day of page-turning—caught on a scanned page. Second, for Zuboff part of the frightening novelty of Google’s data extraction regime is its “formal indifference” to individual users, as well as to existing legal regimes that might impede the extraction of population-scale data. This, she argues, stands in marked contrast to the midcentury capitalist regimes which embraced a degree of democracy in order to prop up both political legitimacy and effective demand. But this was a democratic compromise limited in time and space. Extractive capitalist regimes of the past and present, including those producing the conflict minerals so necessary for hardware running Google services, have been marked by, at best, formal indifference in the North to conditions in the South.
An analysis of surveillance capitalism’s struggle for hegemony would be greatly enriched by a consideration of how industrial capitalism legitimated itself in the metropole at the expense of the colony. Nor is this racial-economic dynamic and its political legitimation purely a cross-continental concern. US prisons have long extracted value from the incarcerated, racialized as second-class citizens. Today this practice continues, but surveillance technologies like ankle bracelets extend this extraction beyond prison walls, often at parolees’ expense.

    A Google Books scanner’s hand, caught working on W.E.B. Du Bois’ The Souls of Black Folk. Via The Art of Google Books.

    Capitalism has always, as Browne’s notes on plantation surveillance make clear, been racial capitalism. Capital enters the world arrayed in the blood of primitive accumulation, and reproduces itself in part through the violent differentiation of labor powers. While the accumulation imperative has long been accepted as a value shaping media’s design and use, it is unfortunate that race has largely entered the frame of digital studies, and particularly, as Jessie Daniels argues, internet studies, through a study of either racial variables (e.g., “race” inheres in the body of the nonwhite person and causes other social phenomena) or racial identities (e.g., race is represented through minority cultural production, racism is produced through individual prejudice). There are perhaps good institutional reasons for this framing, owing to disciplinary training and the like, beyond the colorblind political ethic of much contemporary liberalism. But it has left us without digital stories of race on par with our digital stories of capitalism, much less digital stories of racial capitalism; race remains perceived as a niche concern (although there are certainly exceptions, particularly in the work of writers like Lisa Nakamura and her collaborators).

    Browne provides a path forward for a study of race and technology more attuned to institutions and structures, to the long shadows old violence casts on our daily, digital lives. This slim, rich book is ultimately a reflection on method, on learning new ways to see. “Technology is made of people!” is where so many of our critiques end, discovering, once again, the values we build into machines. This is where Dark Matters begins. And it proceeds through slave ships, databases, branding irons, iris scanners, airports, and fingerprints to map the built project of racism and the work it takes to pass unnoticed in those halls or steal the map and draw something else entirely.

    _____

    Daniel Greene holds a PhD in American Studies from the University of Maryland. He is currently a Postdoctoral Researcher with the Social Media Collective at Microsoft Research, studying the future of work and the future of unemployment. He lives online at dmgreene.net.
