b2o: boundary 2 online

The b2o Review is a non-peer-reviewed publication, published and edited by the boundary 2 editorial collective and specific topic editors, featuring book reviews, interventions, videos, and collaborative projects.

  • Andrew Martino – Exhuming the Text: Alice Kaplan’s “Looking for the Stranger: Albert Camus and the Life of a Literary Classic”

    Alice Kaplan’s Looking for the Stranger: Albert Camus and the Life of a Literary Classic

    Reviewed by Andrew Martino

Albert Camus never considered himself an existentialist. In fact, Camus never exclusively believed in any school of thought. He was the consummate outsider, the one who stood apart from those who subscribed to views that forced them into a narrow ideology, especially when that ideology mixed with violence, something Camus steadfastly resisted. If we had to place Camus into any category, it would be that of the humanist caught in the absurd. Camus believed in life over death (without believing in an afterlife), yet this belief did not keep him from contemplating the question of suicide, the only serious philosophical problem confronting us, as he writes in The Myth of Sisyphus. Camus’ humble beginnings, amid extreme poverty and illiteracy in his native Algeria, testify to the power of the human spirit in the face of an indifferent world. When he was awarded the Nobel Prize for Literature in 1957, he expressed reservations and claimed that the prize should have gone to André Malraux, an early influence on his writing. Camus also realized that the Nobel would bring a certain celebrity that would complicate his life, perhaps even sabotage his art. Add to this his “silence” on the Algerian problem and his very public and acrimonious break with Sartre, and Camus becomes a figure trapped in a world where he is increasingly unable to control his own image. Camus is a problematic figure who is claimed by both the Right and the Left, leaving the man and his writing caught in a political vortex. Focusing on the postcolonial aspect of The Stranger, Edward W. Said writes that Camus “is a moral man in an immoral situation.”[i] When Camus died at the age of 46 in a car accident in 1960, he left the world with the image of the charismatic young man, Bogart-like in his coolness, still with the promise of great things to come. But a saint he was not. His numerous affairs and constant womanizing, his reluctance to act or speak out against French imperialism in Algeria, his disillusionment with and expulsion from the Communist Party, render him more human than academics might be comfortable with. Camus’ life was full of contradictions, full of silences. Yet it was precisely from these contradictions and silences that Camus produced one of the most important and widely read books of the twentieth century.

Looking back over the seven decades since the publication of The Stranger, and despite Camus’ reluctance to situate (in the Sartrean sense of the term) himself in the bubble of existentialism, a bubble in which The Stranger and his relationship with Sartre placed him, we can see that the novel blazed a path that opened up fields where the absurd might be articulated, contemplated, and confronted from the inside (the modernist bent) rather than from above and beyond, as the canonical novels of the nineteenth century may have done. In her essay “French Existentialism,” Hannah Arendt briefly examines Sartre and Camus’ influence on the “new” movement in which novels carry the weight of philosophy. Throughout that essay she also comments on Camus’ reluctance to be labeled an existentialist. “Camus has probably protested against being called an Existentialist because for him the absurdity does not lie in man as such or in the world as such but only in their being thrown together.”[ii] Here we have what is perhaps the most concise and articulate formulation of absurdist philosophy to date. Camus’ definition of absurdity, painstakingly mapped out in Caligula, The Stranger, and The Myth of Sisyphus, is not quite existentialism, but it does contain existentialist DNA, especially Kierkegaardian and Dostoevskian DNA (Kierkegaard and Dostoevsky being two of Camus’ patron saints). As Camus remarks in The Myth of Sisyphus: “I can therefore say that the Absurd is not in man (if such a metaphor could have a meaning) nor in the world, but in their presence together.”[iii] Camus’ definition of the absurd is also the epistemological curve in the road separating him from Sartre’s thinking. If Sartre’s philosophy can be distilled into his phrase “Hell is other people,” a philosophy grounded in relationships among people, Camus’ articulation of the absurd, as we’ve seen above, resides instead in the relationship of humans with their world.

Together, Sartre and Camus blazed a path where philosophy and art, in this case literature, met, thereby ushering in a new form of the novel, one that would examine existence from a philosophical perspective while making use of a literary form in which to mold those philosophical insights. What emerges from this is a hybrid. According to Randall Collins, “What was identified was a tradition of literary-philosophical hybrids. Sartre and Camus were key formulators of the canon, and themselves archetypes of the career overlap between academic networks and the writers’ market. The phenomenon of existentialism in the 1940s and 1950s added another layer to this overlap.”[iv] But this hybridization was more than a heady cerebral new movement in fiction; it constituted a new way of thinking about the world, one that emerged primarily from a particular network of intellectuals at a particular time in Paris. Sartre and Camus rode the crest of this wave of existentialism, and their thinking would go on to change the world.

Alice Kaplan’s extraordinary new book, Looking for the Stranger: Albert Camus and the Life of a Literary Classic, is a careful and meticulously researched examination of Camus’ 1942 novel. Kaplan is one of the leading scholars of twentieth-century French culture and history. She is currently the John M. Musser Professor of French at Yale University, where she also received her Ph.D. in French in 1981. She has published seven books, including French Lessons: A Memoir (1993), The Collaborator: The Trial and Execution of Robert Brasillach (2000), and Dreaming in French: The Paris Years of Jacqueline Bouvier Kennedy, Susan Sontag, and Angela Davis (2012). In 2013 Kaplan edited and provided the introduction to Algerian Chronicles, a collection of articles and essays Camus wrote from 1939 to 1958. Kaplan’s edition marks the first time these writings have appeared in English, so she is no stranger to Camus and his place in twentieth-century French culture.

Early on, Kaplan claims that Looking for the Stranger is actually a biography of Camus’ best-known work, one of the most famous and widely read texts of the twentieth century. This does not mean, however, that Kaplan forgoes a glimpse into Camus’ life, thus resurrecting the Barthesian “death of the author” debate. Instead, Kaplan goes looking for The Stranger in the author instead of the author in The Stranger; the difference is subtly stunning. In other words, her investigation is more preoccupied with the creative process and its cultural and social context than with reaching the author as a god-like figure. Camus always claimed that The Stranger was the second in a three-part series exploring the absurd from three different perspectives: a novel (The Stranger), a dramatization (Caligula), and a philosophical work (The Myth of Sisyphus). But The Stranger is hardly a book that needs rescuing from obscurity, nor does Kaplan claim that it does. To date the novel has sold over ten million copies and is still read in over forty languages. It remains on high school and college syllabi, making it required reading for young men and women; in fact, a student’s first encounter with existentialism and the absurd is likely to come from a reading of The Stranger. Rather than a rescue, Kaplan offers us a more comprehensive look into the text, running down every lead and exploring every avenue that might expand our understanding of what makes The Stranger the text that it is.

Kaplan begins by acknowledging the spectacular success of The Stranger, a success that has made it one of the most popular and important texts of the twentieth century. She briefly surveys the critical reaction to The Stranger, pointing out that readings of the novel map some of the most important theoretical lenses that have influenced twentieth-century thought. “In fact, you can construct a pretty accurate history of twentieth-century literary criticism by following the successive waves of analysis of The Stranger: existentialism, new criticism, deconstruction, feminism, postcolonial studies” (2). The Stranger, she claims, has influenced the thinking of a diverse population spanning generations. Indeed, the novel’s remarkable ability to remain relevant, perhaps even more relevant now than when it was published, is a feat that neither its author nor its critics at the time could have foreseen. I am not sure that students continue to read The Stranger with the commitment that they once did, but it is undeniable that the novel still matters, that it still provokes us into thinking, especially in a time when fundamentalism and terrorism are on the rise, and Europe and the United States are flirting with a new form of fascism in the guise of a renewed interest in rigid nationalism. But Kaplan is not necessarily interested in the public and academic reception of The Stranger. Instead, she claims that since its publication the novel’s readers and commentators have overlooked something essential: a biography of the novel. “Yet something essential is lacking in our understanding of the author and the book. By concentrating on themes and theories—esthetic, moral, political—critics have taken the very existence of The Stranger for granted” (2-3). She takes the unprecedented, and academically unpopular, path of looking into the life of the author and the circumstances that allowed him, at a particular place and time, to write one of the most powerful works of world literature. It is important to point out, however, that Kaplan sets out to write a biography of the novel, not of the author. In fact, Camus’ life becomes a part of the puzzle that is The Stranger.

Kaplan is not the first to comment on the unlikely success of The Stranger and its problematic birth. She is, however, the first to devote an entire book to an investigation, almost documentary-like in its approach, of the novel from conception to publication and beyond. And she accomplishes this brilliantly. Told in twenty-six short chapters bookended by a prologue and an epilogue, her book leads us into the depths of the novel in a highly engaging and thought-provoking fashion. In fact, the structure of the book presents its readers with the “life” of the novel, a life that has continued long after the death of its creator. Drawing on a reservoir of sources, including Camus’ notebooks and her own trips to Algeria, Kaplan has written a scholarly adventure story. As she notes in her acknowledgements: “I looked for The Stranger in libraries, in archives, in neighborhoods on three continents” (219). Of course, the idea of The Stranger was with her all of the time, but what makes Kaplan’s book so provocative is precisely the lengths she goes to in search of the novel. Kaplan explores The Stranger in three parts: before, during, and after its publication.

In the first chapter Kaplan gives us the image of a young man in front of a bonfire, burning various papers that link him to a past that could be dangerous to him and to those who know him. But as Kaplan tells it, the young Camus could not bring himself to burn all of his letters and writings. What he saved would act as a cache of material, both physical and remembered, which he would later extract and rework into a slim, simply told tale of a man who fails to cry at his mother’s funeral and, through a series of circumstances, ends up shooting an unnamed Arab on a beach, only to be arrested, tried, convicted, and sentenced to death. Yet the reader is never quite sure whether the protagonist is convicted and sentenced to death because of the murder or because of his refusal to conform to the rules of a society that demands that one cry at one’s mother’s funeral. The image of the bonfire given to us by Kaplan is a powerful one. As we travel with her deeper into her investigation, we learn that the bonfire was a kind of rite Camus needed to perform in order to purge his mind and soul so that he could go on to write what he felt needed to be written—unimpeded by ghosts, but still attentive to their silences, which spoke to and through him.

Throughout the spring of 1940, six years after the bonfire, Camus worked furiously on The Stranger in almost total isolation, holed up in a miserable hotel room in Montmartre and interrupted only by his five hours a day of work at Paris-Soir. The twenty-six-year-old was as cut off from the world as he had ever been. Alone in a foreign city, with German bombs exploding all over France, Camus fought his loneliness and misery by throwing himself into his writing. Still not divorced from his first wife, Simone Hié, Camus traveled alone; his fiancée Francine Faure refused to accompany him to Paris. All he brought with him were the first chapter of The Stranger and a few of his press clippings. Kaplan: “His sense of separation from everyone he loved put him in a state of mind that was both painful and enabling” (71). Like Camus’ biographer Olivier Todd, Kaplan highlights the importance of Camus’ isolation when he first arrived in Paris. Camus believed that the failure of A Happy Death, his abandoned first novel, was due to his inability to write without interruption. His isolation in Paris enabled him, out of necessity, to devote all of his attention to The Stranger. Kaplan’s research offers us a marvelous glimpse into the creative process Camus used, or, perhaps more accurately, was host to, during his writing of the novel. Kaplan claims that Camus wrote The Stranger almost line for line, as if he were dictating a story he was watching play out before his eyes. Where he struggled with the writing of A Happy Death, The Stranger seems to have emerged almost fully formed, complete.

The apparent ease of its composition, however, does not mean that The Stranger was without its problems. In fact, the birth of The Stranger was long and fraught with difficulties both internal and external. Until his arrival in Paris, Camus struggled to get into the narrative, to create a new story, and to rework material from A Happy Death. Interestingly, most reviewers of Kaplan’s book, Robert Zaretsky (himself an accomplished Camus scholar) and John Williams in particular,[v] have devoted the majority of their reviews to the shortage of paper in France as the novel was set to go to press. “To say that the very existence of The Stranger was threatened by the material conditions of the war is no exaggeration, since paper supplies were becoming more and more precious. It looked at one point as if Camus would have to supply his own paper stock!” (136). Camus was in Oran with his family at the time and was happy to help Gallimard locate paper. The novel came very close to not being published, but paper stock was found at the last minute and Camus was not obliged to supply his own.

Once the novel was published it was met with immediate success. But perhaps its success was not so unusual after all. From the beginning Camus wanted the French publishing world, centered in Paris, to represent him. In the chapter “A Jealous Teacher and a Generous Comrade,” Kaplan tells the story of Camus’ almost frantic correspondence with Jean Grenier and Pascal Pia, the teacher and the comrade, respectively, and their influence on The Stranger in its early stages. More importantly, if Camus was to move from being a provincial author to reaching a wider audience, one that would include the whole of Europe and possibly America, he would have to seek publication outside of Algeria. As Kaplan notes: “Yet Paris was still the center of book publishing in France, and if Camus wanted to publish outside Algeria, he’d eventually have to find a way to get his manuscript to the capital” (107). This, it seems to me, provides the necessary evidence that Camus was thinking bigger than his native land. He desired a world stage, a stage that would allow his work to be read by the widest possible public, and Gallimard was the publisher that could provide him with that opportunity. In his book The Existentialist Moment: The Rise of Sartre as a Public Intellectual, Patrick Baert illustrates the importance of publishing, especially the publishing houses of Paris, in providing the necessary outlet for ideas. “Intellectual ideas spread mainly through publications. Whether through books, magazines, or articles, publishing is central to the rise of intellectual movements. For such movements to be successful, authors have to be well connected to the main publishers and need to have sufficient freedom and power to be able to write what they want to write.”[vi] The network Gallimard could provide would plug Camus into some of the most resonant writers and thinkers of the time. As mentioned above, The Stranger was not just a novel but also an important piece of a longer meditation on the absurd. Camus’ relationship with Gallimard, as Kaplan points out, is therefore a key component of his rise to international prominence. Quite frankly, without Gallimard, The Stranger might not have met with its tremendous success.

Camus’ association with Gallimard was not the only key to his success, however. Gallimard’s star and existentialism’s major voice, Jean-Paul Sartre, also had a lot to do with the success of The Stranger. In his celebrated review of The Stranger, originally published in 1943, Sartre almost single-handedly anoints Camus, ushering him into the French intellectual network and solidifying his reputation as a resonant French intellectual. Still, early in his review Sartre points out that, like its author, The Stranger is a book from “across the sea,” highlighting Camus’ Algerian heritage. Sartre’s generous and insightful review lends a certain intellectual legitimacy to the novel. Sartre: “The Stranger is a classical work, a work of order, written about the absurd and against the absurd.”[vii] This Apollonian form of the novel, in the Nietzschean sense, further reinforces the boundary lines that mark the absurd context, a context that we might fold into the Dionysian, again in the Nietzschean sense.

But it would be a mistake to consider The Stranger a French novel; it is, in almost every sense, an Algerian novel, a novel obsessed with the sun and the sea. Closer to the novel’s intention is, at least in part, a Mediterranean world in a colonial context: the pieds-noirs enjoy French citizenship and the protection it offers, while the Arabs remain colonial subjects. The treatment of Arab subjectivity is one of the chief criticisms postcolonial scholars hurl at The Stranger and its author. Yet a purely postcolonial reading of The Stranger severely limits our understanding of the novel. As David Carroll points out, “I would even say that to judge and indict Camus [as Edward Said does] for his “colonialist ideology” is not to read him; it is not to treat his literary texts in terms of the specific questions they actually raise, the contradictions they confront, and the uncertainties and dilemmas they express. It is not to read them in terms of their narrative strategies and complexity. It is to bring everything back to the same political point and ignore or underplay everything that might complicate or refute such a judgment.”[viii] The postcolonial lens that has dominated readings of The Stranger has also relegated it and its creator to a graveyard for Eurocentric authors. Kaplan’s attention to detail, however, locates the nameless murdered Arab in The Stranger in a central, one might even say privileged, position. Almost from the beginning, Kaplan admits to being nearly obsessed with the figure of the nameless Arab. Indeed, the namelessness of this character is one of the pivotal points in her book. As Kaplan discovers, there was a nameless Arab in Camus’ life, one that would lead him straight to the central scene in The Stranger.

In 2015 Other Press published the English translation of Kamel Daoud’s The Meursault Investigation, a retelling of The Stranger from the point of view of the brother of the Arab killed on the beach by Meursault. Daoud, an Algerian journalist living in Oran, writes for Le Quotidien d’Oran, a French-language newspaper in Algeria. The Meursault Investigation is an interesting book that reads more in the style of Camus’ The Fall than of The Stranger. The protagonist, speaking to us in the first person from a bar in Oran, informs us that there are other facts in the case that we did not hear, chief among these the name of his brother, Meursault’s victim, Musa: “Who was Musa? He was my brother. That’s what I’m getting at. I want to tell you the story Musa was never able to tell. When you opened the door of this bar, you opened a grave, my young friend” (4). Daoud’s text comes dangerously close to being fan fiction. However, there is something profoundly relevant in the novel. The Meursault Investigation demonstrates a deeper understanding of The Stranger and of Camus’ style. In writing this book, Daoud proves that he knows The Stranger intimately, and his contribution to the story is, indeed, worthy of consideration. The Meursault Investigation demands to be read, digested, and then read again in the context of the cultural as well as the literary conditions of Algeria before, during, and after its independence.

Kaplan devotes nearly an entire chapter (chapter 26) to Daoud’s novel and the figure of the unnamed Arab, who appears in nearly spectral form in The Stranger. She tells us that she met Daoud in Oran in 2014, where he claimed “we don’t read The Stranger the same way as Americans, French, Algerians” (210). Kaplan’s reading of Daoud’s novel is a revelatory experience for her and, by association, for us. She strategically situates The Meursault Investigation both within and beyond the lens of postcolonial theory.

Kaplan’s research into the source of The Stranger’s central scene, the killing of the Arab, is a remarkable piece of journalism. Her investigation led her through the towns and alleyways of Oran, into dusty archives, and down populated streets, all despite an Algerian travel advisory for those holding a United States passport. “For two years, I had traveled to places in France and Algeria connected to The Stranger: I had walked down the former rue de Lyon in Algiers, past Camus’s childhood home. With photographer Kays Djilali, I climbed the steep Chemin Sidi Brahim, knocking on doors until we found the House Above the World, now the home of three generations of Kabyle women who speak neither French nor Arabic. With Father Guillaume Michel from Glycines Study Center in Algiers, I drove out to gold and blue vistas of Tipasa. In Paris, I stood in the dreary spot on the hill of Montmartre where Camus wrote in solitude” (211). At the end of the trail is a name, Kaddour Touil, and a story.

    Kaplan’s research demonstrates that it is not really Camus the author who haunts The Stranger, but rather it is the specter of Meursault who haunts Camus, both in life and after death. Meursault, as Olivier Todd informs us, is a combination of several people Camus knew. “The character of Meursault was inspired by Camus, Pascal Pia, Pierre Galindo, the Bensoussan brothers, Sauveur Galliero, and Yvonne herself. Marie was not Francine. Camus the writer mastered his novel in a way that Camus the man did not control in his life. Meursault never asked himself any questions, whereas Camus was always examining his actions and motivations.”[ix] Authors routinely use what and who they know for characters and their actions in books, but Camus’ relationship with Meursault seems to be as complicated as that character’s relationship with the reader. Kaplan’s book sheds a new light on the complexities of those relationships.

The Stranger is truly a work of world literature, in the sense that David Damrosch defines the concept.[x] With The Stranger we have an Algerian author who wrote in French but was influenced by Danish, Russian, and German thinking, and who was stylistically influenced by American authors like Hemingway and James M. Cain. Alice Kaplan gives us a view of The Stranger that joins a growing chorus of scholarship on the controversial book and its author, and she provides keen insight that opens up other avenues of thinking about both. Camus’ influence seems to be growing, not diminishing, as we move deeper into the twenty-first century, and this is needed, especially given the growing resurgence of nationalism and isolationist policies, e.g., Brexit and Trump. Perhaps it’s only literature, and international fiction in particular, that can save us from ourselves. In this age of social media epitomized by the egotistical selfie, international fiction has become more important than ever. Kaplan’s book reminds us that nothing exists in a vacuum, that great works of art come about contextually and pan-culturally. The Stranger might never have been a success without the French existentialist network of the time.

    Andrew Martino is Professor of English at Southern New Hampshire University where he also directs the University Honors Program. He has published on contemporary literature and is currently finishing a manuscript on the concept of security in the work of Paul Bowles.

    Notes

[i] Edward W. Said, Culture and Imperialism (New York: Vintage Books, 1994), 174.

[ii] Hannah Arendt, “French Existentialism,” in Essays in Understanding: 1930-1954 (New York: Schocken Books, 1994), 192.

[iii] Albert Camus, The Myth of Sisyphus, trans. Justin O’Brien (New York: Vintage Books, 1991), 30.

[iv] Randall Collins, The Sociology of Philosophies: A Global Theory of Intellectual Change (Cambridge, Massachusetts: The Belknap Press of Harvard University Press, 2002), 764.

    [v] See Zaretsky’s review in Los Angeles Review of Books (https://lareviewofbooks.org/article/biography-zaretsky-kaplan-camus/) and Williams’ review in the New York Times (Sept. 15, 2016).

[vi] Patrick Baert, The Existentialist Moment: The Rise of Sartre as a Public Intellectual (Cambridge, England: Polity Press, 2015), 138-139.

[vii] Jean-Paul Sartre, “The Stranger Explained,” in We Have Only This Life to Live: The Selected Essays of Jean-Paul Sartre 1939-1975, ed. Ronald Aronson and Adrian Van Den Hoven (New York: New York Review Books, 2013), 43.

[viii] David Carroll, Albert Camus the Algerian: Colonialism, Terrorism, Justice (New York: Columbia University Press, 2007), 15.

[ix] Olivier Todd, Albert Camus: A Life (New York: Alfred A. Knopf, 1997), 107.

[x] Here I am thinking specifically of Damrosch’s theory of circulation. See David Damrosch, What Is World Literature? (Princeton: Princeton University Press, 2003), for a full definition of the concept.

  • Nathan Brown — The Logic of Disintegration: On the Art Practice of Alexi Kukuljevic

    by Nathan Brown

    The body is the inscribed surface of events (traced by language and dissolved by ideas), the locus of a dissociated Self (adopting the illusion of a substantial unity), and a volume in perpetual disintegration.

                            – Michel Foucault, “Nietzsche, Genealogy, History”[i]

A troubling and enabling fact about the body is that it is never exactly “here” or “there.” The existence of the body evades its coincidence with language, with thought, with the I, such that it can be described as “the locus of a dissociated Self.” The body is the self, but it is the self as dissociated. Its existence is an index of the dissociation the self is, of the self’s non-identity with itself, with language, and with thought.

    Writing on Nietzsche’s physiological attunement to philosophical thinking, Foucault offers three determinations of the body: 1) the inscribed surface of events; 2) the locus of a dissociated Self; 3) a volume in perpetual disintegration. These determinations abjure the apparent self-evidence of the body’s organic integrity (“the illusion of substantial unity”) in order to consider it as the site of certain operations (inscription, dissociation, disintegration) and as a spatially extended object (surface, locus, volume). The body records events and it instantiates the self’s dissociation. It holds together the dissociated self with those events that traverse it, but the very site of this holding together, its volume, is at the same time coming apart, disintegrating. Language traces events inscribed on the body; ideas dissolve them. Language and ideas separate events from the body, from the surface upon which they are inscribed, exteriorizing their inscription (tracing) or absorbing them into thought (dissolution). The perpetual disintegration of the body is the process by which the surface of its volume ceases to make available such exteriorization or absorption. The disintegration of the body is the gradual coming undone of language and of thought, of the registration of events.

    How might we situate art with respect to these determinations of the body? As a practice, art takes place at the boundaries of language and thought: it is involved with language and thought, yet not (only) linguistic or ideational. To describe the body as “a volume in perpetual disintegration” is to consider it formally: disintegration implies a measure of integration, and this measure, considered as volume, is form. Can this disintegration of form be exteriorized? As the body disintegrates, can it produce a double of its disintegration? Or, if not a double, at least a counterpart, a semblable? If philosophy takes place as the conjunction of language and thinking, how can art, at the boundaries of philosophy, disjoin these by doubling the perpetual disintegration of the volume that the body is, by displacing the locus of a dissociated self?

    These are the questions that will guide my approach to the art practice of Alexi Kukuljevic,[ii] through which I hope to limn a certain science of the logic of disintegration.

    CAPUT MORTUUM

At Caput Mortuum, Kukuljevic’s solo show in 2012, a plaster cast of the artist’s teeth, his bite, sits atop the highest plinth in the room, spray-painted gold and titled The Subject-Object (Fig. 1). The cast displays a pronounced overbite, the upper incisors caving in at the center and thus protruding out diagonally at irregular angles. On a plinth behind and to the left, another cast of the same teeth is presented at the opening of the show, this time in frozen black ink resting on a neon green edition of Hegel’s Phänomenologie des Geistes. The piece is titled The Object-Subject (Fig. 2 & 3). As the opening unfolds, the frozen cast melts into a liquid black pool, soaking the book beneath and forming a minimally differentiated volume of black ink against the background of the black plinth below. Watching over this process of disintegration from the wall above is a silkscreen print of Hegel’s portrait, his forehead exhibiting an unseemly goiter of spray foam with a nail driven through its center, neon green paint seeping from the wounded brow of the great thinker, running over the left eye and down the philosopher’s face across the surface of the print.

    Figure 2, The Object-Subject (2012)
Figure 3, The Object-Subject (melted)

    This configuration establishes a basic dialectic of the artist’s practice. A singular or signature trait of the artist’s embodied subjectivity—his irregular bite—is cast as a sculptural object and presented to the viewer’s eye coated in the color of value, gold. A frozen double of this object, cast in the color of negation and the medium of inscription—black ink—displays the impermanence of its objecthood, the temporal finitude of its form, by melting into an indistinct pool. The subject becomes object on the condition that the object becomes subject, yet the doubling of the object (molded in plaster as well as black ink) enables it to sustain its form even as it melts into fluidity. The formal and fluid excess of this doubling is suggested by the seepage of paint from the pierced surface of Hegel’s printed portrait, as if the provocation of the thinker’s absolute judgment—that “the being of Spirit is a bone” (Hegel 1977 [1807]: 208)—called for a trepanation, by way of verification. Can we find the substance-subject in the skull? In the Phenomenology’s chapter on “Observing Reason,” philosophy reaches the point at which thought thinks its unthinking substrate and thus sublates that substrate as thought. It then becomes the vocation of art to render the residue of this sublation—the persistence of thought’s unthinking body—as the obdurate, curiously inconceivable, condition of its possibility.

    Art thus inhabits the disjunction between the highest and the lowest, the spiritual fulfillment of self-comprehending life and the physical function, as Hegel puts it, of taking a piss (210).[iii] From the point of view of philosophy, it “must be regarded as a complete denial of Reason to pass off a bone as the actual existence of consciousness” (205). From the point of view of art, the materials in which consciousness is inscribed are the ineliminable ground of formal specificity. “The body” is a relay between subject and object, but one that cannot simply be “lived.” Thinking itself as a thing that thinks, the thinking thing finds its particularity in the material substrate and remainder of this operation: not just any skull, but this skull; this skull which is, impossibly, “mine.” “My body” is that which is not (quite) either mine or me, yet which is I. The being of Spirit is not just any bone. What dissolves into fluidity through the becoming subject of the object, or resolves into solidity through the becoming object of the subject, is the specificity of these teeth, the irregular contours of this bite, and it is on the condition of encountering resolutely material form that universality can include particularity.

    Within the cut between the Subject-Object and the Object-Subject, art tarries with this relay between the specificity of the material particular and its insistence, as specific, within the genericity of the universal. This is one of the rifts that art inhabits.

    RIFT 

Absolute knowledge requires the reconciliation of subject and object. This is not an option for art. If art knows anything (this is unclear), it is that the subject cannot even be reconciled with itself, let alone with the object. The art object is an unreconciled remainder of the rift between the I and the Me. “Me” is the object form of the pronoun “I.” When I say “I,” the “me” is the unfortunate residue of my enunciation. “I = I” enunciates the genesis of the subject, but, for better or for worse (for worse), the subject has a body that remains unequal to the equals sign, that is unreconciled with the I to which it supposedly belongs. There “I” am (me), just when I hoped to be “here.” The golden egg of self-equivalence is held aloft (Fig. 4), supported by the doubled singularity of the irregular bite, by the mold of the split jaw that is the ground of articulation, the structure of the mouth, the condition of enunciation, or “The Limits of Grammar” (as another title has it).

    I = I splits into the dissociation of the I and the Me, held together as the body, exteriorized as the art object — the residue of such dissociation. In A Little Game Played Between the I and the Me, Kukuljevic’s contribution to the Nouvelles Vagues show at Palais de Tokyo in 2013 (Fig. 5), the central piece titled The I and the Me consists of two formally similar but morphologically discrepant sculptural masses, one of which is placed solidly upon a pedestal while the other hangs precariously from its edge, as if having just climbed up on stage or about to fall off.

    Figure 4, I = I (2012)

From a speaker within these asymmetrically relational forms, not-quite mirror images, a slow, dry, tired voice emanates into the gallery space:

    I say: I, I, I. You say: me. Me say, you. You say: I, I, I. I say, me.

    You answer to human. You grind your teeth. You point with the jaundiced nub of a finger. Your jaw drops on its hinge. Your thumbs bend at the joint. You toss word upon word. Live in abstraction. Skip stones. Sip whiskey. Polish silver. Lay claim to the luxury of fine cotton. Vomit champagne. And know how to sharpen the blade.

    ….

    There is an unease in your cadence. Your pace is hobbled. Your bones lack alignment. Your stare a milky grey. That hole in your head oozes something unrefined. Something is making you reach for your nail file. Adjust your posture. (Kukuljevic 2013)

    Figure 5, A Little Game Played Between the I and the Me (2013), Installation View

Art is a pastime, a distraction, an indulgence, or a chore, like skipping stones, sipping whiskey, or polishing silver. It is a luxury, a guilty pleasure, like fine cotton or champagne, yet also something of an impediment, a burden, a limp, or perhaps the cane a limp requires. The hangover after the champagne. It is at once a decadent practice and the tic of the uneasy, the correlate of both the hobbled pace and the easy profligacy of the dissolute aristocrat. A goiter. A gouty toe. An overgrowth. Something that makes you reach for your nail file. It can hardly keep its balance on the pedestal upon which it is placed. The I stands firm, but the Me falters. Or the Me pretends to solidity, as the I wavers. Art is the imbalance of their mutual reckoning, their teeter-totter, the milky grey substance of their self-regarding stare, the hinge upon which the jaw issues abstractions, the sharpened blade with which one arm stabs the other.

    The rift between the I and the Me is the rift within the I = I, and consciousness of this rift demands its object, “the locus of a dissociated Self,” in order to convey its dissociation. The recognition of this dissociation solicits its displacement. Yet the object into which this locus is displaced must itself be doubled if it is not merely to suggest an exteriorization of the self, but rather the exteriorization of the self’s dissociation. The doubling of the object is the double of the dissociated (rather than unified) subject. Art which knows the riven conditions of its own possibility duplicates the singular form of its object, breeds its replication, demands reiteration, refuses the originality of the origin. Art repeats.

    (C8H8)n

    Figure 6, The Subject’s Alchemical Residuals (2012), detail

In his sculptural work, Kukuljevic’s preferred material is styrofoam (expanded polystyrene).[iv] The I and the Me, for example, is composed of rectangular polystyrene panels stacked unevenly or clumped together vertically, coated with cement, and globbed with spray foam. Kukuljevic shapes the material by cutting it, burning it with a blowtorch, or melting it away with acetone, a substance with roots in alchemical practices (see Gorman and Doering 1959). Thus, one of The Subject’s Alchemical Residuals (Fig. 6) is a curved wedge of styrofoam with a conical hole melted through its center, the pocked surface around the base of the conical hollow marking the damage done by splashes of the corrosive substance. Like The Subject-Object and The Object-Subject, this might be read as something of a demonstration piece, a formal synecdoche or concentrated reduction of the artist’s concerns and methods, a minimal unit of his practice.

    The subject makes a hole in the object, which thus becomes an art object. The hole is not made by digging, by a practice of removal that would merely shift its material off to the side. It is made by dissolution, dissipation, dispersion: the hole itself, not the material subtracted from it, is the visible remainder of its production. What is produced is not a pile but an absence, a negation. The material bears the trace of this negation without remainder; that which remains is spirited away. Thus the art object becomes the residue, the residual, of an act of negation, its damaged remnant. It is an alchemical residual insofar as, qua art object, it has acquired value. Value is acquired by the material remnant of the negation of matter; it is its immaterial companion, inscribed as an absence within the object that makes it art.

    If the production of acetone has its roots in premodern alchemical practices, the production of polystyrene (beginning in the 1930s at IG Farben and 1941 at Dow Chemical) can be traced to the emergence of aromatic polymer chemistry, predicated upon Kekulé’s modeling of the benzene ring in 1865, and thus coeval with Marx’s theory of the commodity. The coincidence is merely suggestive, yet the chemical fabrication of organic compounds (“synthesis”) shadows the history of real subsumption and the attendant rise of mass consumption like an uncanny double (see Leslie 2005). Not only industrially produced objects but the molecules of which they are composed become artificial. Marx tells us that

If we subtract the total amount of useful labor of different kinds which is contained in the coat, the linen, etc., a material substratum is always left. This substratum is furnished by nature without human intervention. When man engages in production, he can only proceed as nature does herself, i.e. he can only change the form of the materials. (Marx 1990 [1867]: 133)

This remains the case, but with the rise of chemical synthesis the production of the material substratum itself becomes a matter of labor, such that the only remaining substratum “furnished by nature without human intervention” consists of atoms of carbon and hydrogen — not even the molecular forms in which these are combined. As if in uncanny response to the metaphorical provocations of Marx’s chemical analogies in the first volume of Capital, the commodity becomes artificial in its very substance. The abstraction of socially necessary labor time saturates not only the object produced from natural materials, but also the molecular structure of the materials themselves, such that even the latter are soaked in the immaterial substance of value.[v] “It is absolutely clear,” writes Marx,

    that, by his activity, man changes the forms of materials of nature in such a way as to make them useful to him. The form of wood, for instance, is altered if a table is made out of it. Nevertheless, the table continues to be wood, an ordinary, sensuous thing. But as soon as it emerges as a commodity, it changes into a thing which transcends sensuousness. (163)

    Synthetically produced organic compounds, such as polymers, are in this sense not “ordinary sensuous things” (“materials of nature”) but rather materials that already “transcend sensuousness,” materials that are never not already commodities. Not only the process of production but also the materials upon which it works are fully subsumed.

Thus a styrofoam cup is a commodity made of a material that has no “natural” existence outside of the commodity form, as is a polyester dress. So is a rectangular panel or a molded form of expanded polystyrene packaging material, but in this case the relation of the commodity to its consumption is rather curious. Here we are dealing with a commodity whose use value is to protect commodities as they circulate. A consumer buys something else, and some styrofoam comes with it, a necessary if unwanted accompaniment. Indeed, styrofoam packaging is in a particularly abject position insofar as it does not even carry out the other functional purpose of packaging, that of advertising the product within, in the manner of the all-important box. Styrofoam is a mere intermediary between the alluring surface of the disposable exterior and the desirable utility of the interior object. A material byproduct of circulation, expanded polystyrene packaging is both invisible at the point of sale and already waste at the point of consumption. Even the consumer’s cat, who loves to sleep in cardboard boxes, wants nothing to do with molded styrofoam once it has been cast aside. Artificial even in its molecular constitution, unwanted by the consumer to whom it is destined, expanded polystyrene packaging is the paradigmatically unnatural detritus of the capitalist transformation of nature.

    The rendering of this destitute material as art is its salvation, or one more indignity to which it is subjected. At last, in any case, it is put on display, forming the curious substance of something someone might even buy.

    PERSONAE

    In the hospital rooms on either side, objects—vases, ashtrays, beds—had looked wet and scary, hardly bothering to cover up their true meanings. They ran a few syringesful into me, and I felt like I’d turned from a light, Styrofoam thing into a person. I held up my hands before my eyes. The hands were as still as a sculpture’s.

                            – Denis Johnson, Jesus’ Son

    “From a light, Styrofoam thing into a person”: Kukuljevic’s art practice reverses this conversion. The movement from person to styrofoam thing is productive not only of sculpture, but also of personae: those artificial figures of personhood through which one presents oneself to the public.

Spending some time at his 2013 solo show, Don’t Be a Dreamer, Mr. Me, one comes to feel an odd sense of consolation among its major pieces: An Orgy of Stupidity (Fig. 7); Idiot (Fig. 8); A Gangrenous Fop (Fig. 9). The titles suggest a shared lack of intelligence, foregrounding a common trait of cognitive degeneracy. Indeed, not much can be expected by way of sparkling conversation from chunks of burned and painted styrofoam. “Everything about the show appears to be unhealthy — mentally and physically,” found one reviewer (Schwartz 2013). It is true, a sojourn among these initially unattractive, mildly poisonous forms seems not to promise the edification of a trip to the gym or the library. Yet one nevertheless develops a certain fondness for them, this cast of characters; an improbable affection gradually accrues in their mere presence.

Deleuze recognized that stupidity is both the enemy of and the condition for philosophical thinking. One thinks in order to combat stupidity, yet in order to begin thinking at all, one has to be stupid. There must be an interruption of the order of the given, of the already known, of what Deleuze called “the image of thought” in order for thought to encounter its own ungroundedness: in order for thought to know that it does not know, and thus begin to think. In order not to be stupid, one has to be stupid: this is a contradiction with which philosophy has been embroiled since Socrates. What Konrad Bayer called “the sixth sense,” says Kukuljevic, involves “knowing when to risk being a dummy” (Kukuljevic 2013-2014). But Deleuze goes beyond merely knowing when to take this risk, claiming that “Stupidity (not error) constitutes the greatest weakness of thought, but also the source of its highest power in that which forces it to think” (Deleuze 2004 [1968]: 345). Just as “the mechanism of nonsense is the highest finality of sense,” he argues, “the mechanism of stupidity (bêtise) is the highest finality of thought” (193). Do these styrofoam forms impart some of their stupidity to the viewer? Do they thus solicit thinking?

    One notes their vaguely anthropomorphic aspect. An Orgy of Stupidity looks like an enormous malformed grey skull accosted by pink spray foam, brooding dull-wittedly upon its table. Holes are melted into the “front” of the piece, resembling hollow eyes, while deeper crevices puncture it behind and below, visible when viewed in the round. Deceptively simple, the formal construction of the piece is in fact carefully articulated. Positioned at the back of the room, the bulk of this sculpture anchors the space, at once drawing the gaze and looking on, surveying the assembled art without having much to say about it. The show seems to turn upon this piece, a dead-head like a humanoid boulder measuring the depth or frivolity of our contemplation, of our chatter, against the taciturn obduracy of its inorganic impassivity.

    Figure 7, An Orgy of Stupidity (2012)

    While An Orgy of Stupidity rests solidly upon its base, Idiot is propped against a load-bearing column, while the large, roughly rectangular form of A Gangrenous Fop balances upon a single dowel anchored in a styrofoam base resting on a plinth. The fragile support of the latter piece drives home the lightness of what seem to be massive forms, the interior airiness of imposing exteriors, often sealed with a layer of concrete. This counter-intuitive play between the heaviness of surface and the lightness of depth is mediated by the technique and motif of perforation running through Kukuljevic’s practice. It is enacted by his melting away of surfaces in order to bore into sculptural forms and also thematized in wall pieces involving concrete and chair caning (Fig. 10), a material he values for the concomitant complicity and cancellation of surface and depth suggested by its woven form.

    Figure 8, Idiot (2009)
    Figure 9, A Gangrenous Fop (2012)

    The surface is more weighty than the interior — that is the sort of judgment one might venture looking at a piece like A Gangrenous Fop, with its lightly balanced heft. Yet the concrete surface itself is punctuated by holes that confuse or undo this distinction, leading us into the form along its surface in pursuit of depth, which thus becomes surface. Likewise, the use of spray foam to combine sculptural masses and to fill in crevices between them suggests an eruption — or at least a slow, coagulating leakage — of the interior. Meanwhile, color mediates this formal dialectic. Synthetic, superficial fluorescent shades seep from interiors or coat their exposed crevices, highlighting absences opened by corrosion, or the white sublation of color constitutes a pure yet perforated surface through which solid grey concrete seeps.

    Figure 10, Concrete IV (2012)

    If the somewhat familiar sculptural forms (one of them is titled A Human-Like Creature) exhibited at Don’t Be a Dreamer, Mr. Me come to seem sympathetic, perhaps it is because they have been through so much. Punctured, corroded, seeping foam and stained with garish colors, carefully poised or precariously propped up, they have an air of weary endurance about them, as if about to collapse or retire yet in for the long haul by virtue of their molecular inertia and their improbable value as art. They seem fated to be tired for a long time, with no choice but to make a display of themselves. This wry anthropomorphism solicits transferential self-pity, such that a title like Idiot may come to feel like a way of insulting the audience — a rhetorical inclination to which Kukuljevic is happily prone. In the end one takes it well. There is something like a communal self-loathing to be gleaned from such a show, the circulation of self-recognition as the concession of its weary stupidity, its dissolution (Fig. 11).

Given the dissociation of the self, its perpetual disintegration, perhaps an encounter with the stupidity of self-recognition is among the most precious objects art has to offer — or at least its most sincere gift. It snaps one out of a bland tête-à-tête with oneself, or with another, such that one begins to think. We come to feel affection for the forms that gift takes.

    Figure 11, Even Misanthropes Grow Weary (2014)

    SMOKE

    Figure 12, One or Two Things I Know About A.K. (2012-2013)

    Having started with bone, why not end with breath? Both have been said to be spirit. Yet even as Hegel could subsume the materiality of the skull within the ideality of the concept, breath is a materialization of the ineffable. This is a recognition readily available amid a cloud of cigar smoke, which constitutes for Kukuljevic not only a medium in its own right but a method of attunement, a dissociated Stimmung:

    Trapped between index and middle finger, a cigar traces a delicate line, its stump more unseemly. However, if held with poise, a cigar is a simple and elegant machine, much like a crowbar, that provides the mind with the material impetus for prying off an impression of the soul, as one peels off a latex mold.

    “Each cigar is a snapshot,” he writes, “of the soul’s decomposition” (Kukuljevic 2014). The cigar is a prop, like a sculpture. Yet it is a prop whose substance becomes interchangeable with that of the subject who wields it, to the detriment of both the subject and the object. The cigar is the temporary site of a chiasmus whereby both the subject and the object burn down to a material remainder, the former more slowly than the latter but no less surely. The billowing form of the cloud of smoke “focuses the mind on life’s dissipative march” (Fig. 12).

    Marcel Duchamp understood the pitfalls of relating to art primarily through the figure of the object, or “the art object.” For if something is an object, how can it be art? And if it is art, how can it be an object? Implicit in these questions is the immaterial surplus exhaled by any object that comes to be called “art,” the ineffable imprimatur invisibly stamped upon that which the term designates, an imprimatur that converts it into something other than what it is. Duchamp thus focused his attention upon what he called the infra-thin: “when the tobacco smoke smells also of the mouth which exhales it, the two odors marry by infra-thin.” The two odors, he says. Yet this figure of the infra-thin involves not only a marriage of two odors, but also of the object, the subject, and the fumes it exhales, mediated by the corporeal hollow of the mouth. Here the infra-thin is a complex of the subject-object, or the object-subject, which entails not only the ephemeralization of the corporeal but the corporealization of the ephemeral, a physics of the metaphysical and a materialization of the ideal, like “prying off an impression of the soul, as one peels off a latex mold.”

    Figure 13, The Physiology of the Cigar, Photogram (2014)

    If the smoking of cigars is properly considered part of Kukuljevic’s art practice (evident in his habit of filling the gallery with cigar smoke before openings), the photogram is its saleable analog (Fig. 13 & 14). Like his silkscreen prints of coral, or his wall sculptures with chair caning, his photograms tarry with the perforations constitutive of surface and with the permeability of the object. Just as the cigar burns into ash, a fragile record of its temporal dispersion, the retentional action of the photogram gives us to see the legible transparency of material structure, the ghost of the incorporeal that haunts all bodies.

    Figure 14, Torn Vitola, Photogram (2014)

    Yet the record of the cigar’s dispersion, its ash, is also its material residue — like styrofoam packaging that arrives alongside the consumer’s commodity. It needs somewhere to end up, to repose, and thus calls not only for the light touch of the photogram but also the hospitable embrace of the ashtray (Fig. 15). The propped up body of the sculpture would then support the papery corpse of the cigar, leaving the viewer to contemplate the degree to which form follows function in the case of so fleshly a friend of the infra-thin. This is the highest form of practicality we will encounter in Kukuljevic’s practice: the making of a place, barely contained within itself, to put the leavings of disintegration. Perhaps “the object” is better understood as such a place — and this is the sort of place, indelicately distended and on the verge of collapse, that the artist might call art.

Figure 15, Ashtray #3 (2015)

Figure 16, Trading Places (2015)

It is in this sense that I view Trading Places (Fig. 16) as a particularly notable piece in Kukuljevic’s oeuvre. Whereas most of the sculptural works are either tenuously propped or heavily settled, this one rests upon a stable base, yet one that is mobile. Its form is again vaguely anthropomorphic, but in this case diminutive — a sidekick of sorts, like Lear’s clever fool or an R2D2 suffering the fate of Tithonus. The figure is burned out, carved away, its interior exposed and its surface rough-hewn, yet its dominant shade is a light azure that lends it a certain celestial freshness amid the charred remains it barely holds together. At the center of the piece, the same thin wood stick that bends under the burden of supporting some of the other sculptures here holds aloft its own offering, cradled in a bright yellow latex glove, as if in supplication to the viewer. Here, the piece seems to intimate, this is what I have for you.

    What is thus presented is a bit of ash, the stump of a cigar, cupped within an indeterminate grey residue. Perhaps this is a present, maybe a presentiment. Sculpture, trading places, offers up a volume in perpetual disintegration as if posing its own question to the viewer, to the body of the subject who is not allowed to touch it: what do you have to offer me?

    BIBLIOGRAPHY

Deleuze, Gilles. Difference and Repetition. (1968). Translated by Paul Patton. London: Continuum, 2004.

    Gorman, Mel and Charles Doering. “History of the Structure of Acetone.” Chymia. 5 (1959): 202-208.

Hegel, G.W.F. Phenomenology of Spirit. (1807). Translated by A.V. Miller. Oxford: Oxford University Press, 1977.

    Kukuljevic, Alexi. Audio Track, A Little Game Played Between the I and the Me. Nouvelles Vagues, Palais de Tokyo, 2013.

    Kukuljevic, Alexi. Exhibition Text for Don’t Be a Dreamer, Mr. Me. (December 6, 2013 – January 19, 2014). http://www.marginalutility.org/exhibitions/2013/alexi-kukuljevic-dont-be-a-dreamer-mr-me/.

Kukuljevic, Alexi. “More or Less Art, More or Less a Commodity, More or Less an Object, More or Less a Subject: The Readymade and the Artist” in The Art of the Concept. Edited by Nathan Brown and Petar Milat. Frakcija 64/65 (2013): 62-70.

Kukuljevic, Alexi. Exhibition Text for You Can’t Rely on the Joke as the Only Mode of Social Relation…. (March 14 – April 30, 2014). http://www.kunsthalle-leipzig.com/kukuljevic.html.

    Leslie, Esther. Synthetic Worlds: Nature, Art, and the Chemical Industry. London: Reaktion Books, 2005.

    Marx, Karl. Capital: Volume 1. (1867). Translated by Ben Fowkes. New York: Penguin, 1990.

Schwartz, C. “Alexi Kukuljevic Dares Not to Dream at Marginal Utility.” Knight Blog (December 10, 2013). http://www.knightfoundation.org/blogs/knightblog/2013/12/10/alexi-kukuljevic-marginal-utility/.

    NOTES

    [i] Thanks to Petar Milat for drawing my attention to this passage.

[ii] Kukuljevic’s work has been included in exhibitions at Tanya Leighton Gallery (Berlin, 2016), Kavi Gupta (Chicago, 2015), Palais de Tokyo (Paris, 2013), and De Appel (Amsterdam, 2012), and has been shown in solo exhibitions at Å+ Gallery (Berlin, upcoming 2016), Kunsthalle Leipzig (2014), ICA Philadelphia (2013), the Jan van Eyck Academie (Maastricht, 2013), and SIZ Gallery (Rijeka, 2012). He holds a Ph.D. in Philosophy from Villanova University, where he wrote a dissertation titled “The Renaissance of Ontology: Kant, Heidegger, Deleuze” (2009). He was a researcher at the Jan van Eyck Academie (2012-2013). His book Liquidation World: On Forms of Dissolute Subjectivity is forthcoming with MIT Press. He is the author of an artist’s book, Cracked Fillings, available at alexikukuljevic.com.

    [iii] Hegel writes, “The infinite judgement, qua infinite, would be the fulfilment of life that comprehends itself; the consciousness of the infinite judgment that remains at the level of picture-thinking behaves as urination [verhält sich als Pissen]” (210).

    [iv] Strictly speaking, “Styrofoam” is the brand name of extruded polystyrene produced exclusively by Dow Chemical, which is used in craft and insulation applications and is usually blue or green. The term is more loosely and commonly applied to expanded polystyrene in general, such as that used for foam cups or molded packaging. Following this common usage, I will refer to expanded polystyrene and styrofoam interchangeably.

    [v] Kukuljevic has published an essay on the relationship between the commodity form, the readymade, and the figure of the artist. See Kukuljevic 2013.

     

  • Pieter Lemmens and Yuk Hui — Apocalypse, Now! Peter Sloterdijk and Bernard Stiegler on the Anthropocene

    Pieter Lemmens and Yuk Hui — Apocalypse, Now! Peter Sloterdijk and Bernard Stiegler on the Anthropocene

    by Pieter Lemmens and Yuk Hui

    ‘You really take no account of what happens to us. When I talk to young people of my generation, who are about two or three years older or younger than me, they all say the same: we no longer have the dream to found a family, to have children, or a profession, or ideals, like you did when you were teenagers. That’s all over, because we are sure that we will be the last generation, or one of the last, before the end’

    The Shock of the Anthropocene

In the above quote from the novel L’Effondrement du temps by the anonymous writer collective L’impansable, the fifteen-year-old Florian addresses the current generation of politicians and, more generally, of adults responsible for our world and its future (L’impansable 2006). The French philosopher Bernard Stiegler has recently quoted this statement in many of his talks, and it also features prominently in his new book Dans la disruption. Comment ne pas devenir fou? (In the disruption: how not to become mad?; Stiegler 2016). Florian’s remark reveals a strong sense of melancholia about the arrival of the end. For Stiegler, this is not simply rhetoric. In an interview with the French newspaper Le Monde on 19 November 2015, shortly after the Paris attacks, Stiegler confessed: “I can no longer sleep during the night, not because of the terrorists but because of worries that my children will no longer have any future” (Stiegler 2015a). What makes Stiegler so sad, and even so pessimistic, about the current situation?

As we see it, Stiegler is not exaggerating, but rather telling the truth. It is true that he has been accused of being a pessimist because of his statements on the future of work, automation, editorialisation, etc. The general excitement about technological developments may give the impression that the world is moving towards a brighter posthumanist or transhumanist future. Many scholars working on technology tend to be easily satisfied with the phenomena emerging out of the new digital infrastructures and hence dismiss any fierce critique of technology as a Neo-Frankfurt School gesture. Stiegler calls this attitude dénégation (denial). In his new book, Stiegler puts Florian’s accusation on a par with the shocking revelations of global “whistleblowers” like Edward Snowden, Chelsea (formerly Bradley) Manning and Julian Assange, and he characterizes it as parrhesia in the sense made famous by the French philosopher Michel Foucault: a “frank and free” saying of things as they are, a frank and courageous speaking of the truth. In this case, the truth of our time is a truth to which, according to Stiegler, virtually everyone prefers to close their eyes, since it is too traumatic, inconceivable and appalling. It speaks not just of the possible but of the rather likely and imminent end of humanity, or at least of human civilization as we know it.

What is this truth of our time? Perhaps one can start with its causes, which are multiple: the global climate and ecological crisis, resource depletion, military development, digital industrialization and a runaway consumerism accelerating daily through the intense exploitation of people’s attention and desires – a whole range of phenomena that seem to lead inevitably towards an apocalyptic end. If we are not able to reverse these destructive trends, humanity may soon confront its own extinction. The principal task and first duty of philosophy today, according to Stiegler, is to give a response to the parrhesia of Florian. Let’s start by introducing the subject of the Anthropocene and the scientific debates related to it. Many climate scientists[1] warn of an imminent large-scale shift in the Earth’s biosphere whose consequences will be unpredictable but in all likelihood catastrophic, especially if nations do not get together quickly to steer the “anthropogenic impacts” on the biosphere in a more beneficial direction. This mega- or ultra-wicked problem (as it is called in policy circles) is arguably the essence as well as the urgency of what has recently become known as the “Anthropocene”. This term was introduced in 2000 by the Dutch climate scientist and atmospheric chemist Paul Crutzen to identify the new geological era that, in his view, we have entered at least since the Industrial Revolution of the late 18th century (Crutzen 2002). According to his now widely accepted hypothesis, “the human” (anthropos in Greek), or at least a certain part of humanity, has become the most important geological (f)actor, having more impact on the state of the biosphere than all natural factors together. The human has thereby become de facto and willy-nilly responsible for the biosphere and, by implication, for its own future fate.

The so-called “great acceleration” that started after World War II is considered to be responsible for finally bringing about what the French historians Christophe Bonneuil and Jean-Baptiste Fressoz have called “the shock of the Anthropocene” (Bonneuil and Fressoz 2016): the world-wide dawning of humanity’s largely destructive impact on its own planetary life-support system. Predictions of the consequences of this for humanity in the short and long run vary, but even the Intergovernmental Panel on Climate Change (IPCC), known to represent the rather cautious mainstream view, has been forced to continually adjust its forecasts toward gloomier outcomes. The most extreme predictions, like those of the American ecologist Guy McPherson, foresee a near-term human extinction event within three decades (McPherson 2013).

We would like to address the Anthropocene from both a philosophical and a political perspective. The former concerns the existence and responsibility of humans; the latter the political struggle that we must amplify. The term Anthropocene is ambivalent. On the one hand, it leads to the illusion that man is back in the center, as one researcher remarked during a recent conference entitled “How to think the Anthropocene?”[2] – proudly stating that, for the first time since the Copernican revolution, “man” has rediscovered her/his centrality. On the other hand, the very development that put humans back in the center is responsible for global warming, the widespread destruction of ecosystems and the alarming loss of biodiversity that some authors (like Elizabeth Kolbert) have called the “sixth mass extinction”, caused this time by human beings themselves (Kolbert 2014). In other words, if the Anthropocene puts “man” back in the center, it might also lead to her/his destruction.

But what does this “geological event” of the Anthropocene really mean? Some geologists, or authors aligned with the thinking of “deep time”, see the Anthropocene as an insignificant event in comparison to the hundreds of millions of years of geological history. The earth is in a constant process of destruction and reconstruction; the extinction of a species is one of those contingent events that carry no significance for the life of the earth. We may want to call this attitude, exemplified for instance in the work of the Dutch geophysiologist Peter Westbroek, geo-centrism or geo-reductionism (Westbroek 1992). The problem is not that such authors are wrong concerning earth science, but rather that they are right; in fact, they are so correct about it that they don’t see the problem.

Marxist authors like Jason Moore, Maurizio Lazzarato and Christian Parenti argue that we should talk about the Capitalocene instead of the Anthropocene, since it is not so much “the human” as the capitalist mode of production that is to be held responsible for the current devastation and exhaustion of the Earth’s biosphere (Moore 2016). Like Slavoj Žižek, they promote a more class-oriented view. Moore re-interprets the Anthropocene as the result of capitalism’s way of organizing nature, situating its beginning not in the 18th century but in the long 16th century of primitive accumulation and the large-scale land-grabbing by budding capitalists known as the “enclosure of the commons” (Moore 2015). McKenzie Wark, another Marxist author, who is nonetheless critical of the notion of the Capitalocene (Wark 2015a), develops a “labor perspective” on the epic challenge of the Anthropocene, one inspired by the work of the early Soviet authors Alexander Bogdanov and Andrey Platonov, the feminist theorist Donna Haraway, and the Californian writer Kim Stanley Robinson (Wark 2015b).

Many authors also contest the term Anthropocene because it suggests the existence of one unitary subject, “the human” or “humanity”, which would be responsible for the current crisis. However, as the German philosopher Peter Sloterdijk jokingly remarked in a recent public debate with Stiegler in Nijmegen in the Netherlands on 27 June 2016 (Sloterdijk and Stiegler 2016), sending an e-mail to humanity@planet.earth will inevitably yield a delivery failure message: “the human” or “humanity” does not exist. It is also obvious that some parts of humanity, like those belonging to the rich and affluent societies of the West, are much more “guilty” than, say, those who live in the so-called developing world – the cruel fact being that the latter are generally much more affected by the devastating consequences of climate change than the former (in India, for instance, temperatures have been rising to a sweltering 51 degrees Celsius, and many people are expected to die due to extreme heat and drought) (Wyke 2016).

In his 1979 book The Imperative of Responsibility (Das Prinzip Verantwortung), the German philosopher Hans Jonas already warned of the danger of humanity’s self-destruction due to its immense technological power and ability to destroy the planet (Jonas 1985). Jonas called for a new ecological ethic of responsibility and thereby proved himself to be an Anthropocenic thinker avant la lettre. His book was published at the onset of the so-called neoliberal revolution, which swept away virtually every environmental policy that had gradually gained support in the seventies and unleashed a global economic world war in the context of which we are all forced to compete against each other – a war that is on a fatal collision course with the earthly ecosystem. The big question is whether, and how, we can reverse this process: how can we transform our hugely destructive impact on the earth into a more constructive and responsible one, in order to avert the global catastrophe of which the current global crisis is only the prelude? As geobiologist Peter Ward put it in his book The Medea Hypothesis: ‘We are in a box. Ultimately it is a lethal box, a gas chamber or fryer, depending on how things work out. If we are to survive as a species, we will have to do a Houdini act’ (Ward 2015: 141).

    Two Proposals for a Reversal: Neganthropocene and Co-immunization

What could be the response to the Anthropocene besides emphasizing responsibility? Or is there a more primary question still: who is responsible for what? Let us look at the diagnoses of two already mentioned thinkers who have both thought extensively about the human-technology relationship in recent decades: Peter Sloterdijk and Bernard Stiegler. Both offer insights not only into the technological but also into the historical, political and even anthropological problem of the “shock of the Anthropocene”, which can be fundamentally understood as a consequence of the neoliberal globalization of technology and capital.

Sloterdijk, who calls himself a “leftist conservative”, is gaining increased attention in the Anglophone world yet is still a relatively marginal figure in it (unlike many of his continental colleagues of the same age and stature). His philosophical perspective is decidedly Nietzschean yet he is also very much influenced by Heidegger, Foucault, Deleuze, and Lacan, as well as the German tradition of philosophical anthropology (for example, Arnold Gehlen, Max Scheler and Helmuth Plessner). He became instantly famous in Europe in 1983 with his explosive debut Critique of Cynical Reason in which he diagnosed the current Zeitgeist as one of “enlightened unhappy consciousness” (with obvious allusions to Hegel) and a systemic hyper-cynicism that he hoped to counter with a new form of non-intellectual, bodily, popular-plebeian, humorous-grotesque, dadaesque and explicitly low-brow “critique”, inspired mainly by the brilliantly shameless performances of Diogenes of Sinope. His was a “critique beyond critique” that he called “kynicism” (with a k) (Sloterdijk 1988).

While in this huge two-volume treatise Sloterdijk still presented himself as an heir of the tradition of critical theory of his principal teachers from the Frankfurt School, notably Adorno, Horkheimer and Bloch, he was clearly a very recalcitrant and ultimately rather unfaithful one. In his 1989 book Eurotaoismus. Eine Kritik der politischen Kinetik, a thesis on the postmodern condition and its discontents, he largely exchanged the Frankfurt School for the “Freiburg School” and developed a Heidegger-inspired critique of modernity’s “total mobilization” in terms of a kinetic reinterpretation of the latter’s notion of releasement [Gelassenheit]. In the later chapters of this book he too proved himself to be an Anthropocenic thinker avant la lettre by pointing toward the fragility and finitude of the Earth as the base upon which human cultural-historical projects unfold. He proclaimed that human culture would have to become increasingly responsible for the Earth’s maintenance in the future, calling for a global ecological turn of the whole human endeavor (Sloterdijk 1989).

Yet it is only in his monumental Spheres trilogy from 1998-2004 (Sloterdijk 2004) – a grand sphero-immunological reinterpretation of the evolution and history of humankind and all the religious and metaphysical systems it produced, in other words a history told from the perspective of humans as self-immunizing creatures who are sphere-building, sphere-abiding and sphere-borne beings – that Sloterdijk develops a philosophical anthropology able to fully account for the anthropocenic condition we are inescapably entering. In particular, the post-holistic, plural spherology or polyspherology of co-isolationist co-existence developed in the third volume of Spheres, titled Foams, is eminently suited for considering the human condition in the age of the Anthropocene (Sloterdijk 2016a), as Sloterdijk’s friend Bruno Latour has justly remarked (Latour 2008).

    *

Bernard Stiegler started his academic career as a commentator on Martin Heidegger, more specifically on the question of technology in Heidegger’s thought. Unlike Sloterdijk, who takes the question of space and topology in Heidegger’s thought further and has suggested “Being and Space” as an alternative title for his Spheres project, Stiegler’s work centers on the question of time and time’s relation to technology through what he calls tertiary retention, a notion that completes the circle of Husserl’s theory of retentions and protentions (Stiegler 1998). Tertiary retention is the technically captured trace as well as the support of both primary retention (e.g. the melody that is retained in our mind) and secondary retention (e.g. the melody that we can recall tomorrow). For Stiegler, tertiary retention is a supplement as well as an “exteriorization” of memory (in the words of the French paleoanthropologist André Leroi-Gourhan), through which he attempts to re-read the history of European philosophy as a history of the suppression of the question of technics – a response to Heidegger’s critique of the forgetting of the question of Being in Western metaphysics. The history of technology for Stiegler can be described as a history of grammatization, a term coined by the French historian and linguist Sylvain Auroux, in which organic and inorganic organs are configured and reconfigured according to the progress of technological invention (e.g. alphabetic writing, analog writing, digital writing).

Stiegler, who became a philosopher while incarcerated in Toulouse for committing several armed bank robberies, is currently director of the Institute of Research and Innovation (IRI), an institute that he established in 2006 at the Centre Georges Pompidou in Paris, and president of the lobby group Ars Industrialis. Best known for his magnum opus Technics and Time, he has more recently dedicated himself to research on digital technologies as our new technical condition, and he has developed what he calls a “general organology” (more on this below) to understand the effects on that condition of today’s consumerist capitalism (Stiegler 2010a). He has also been a member of the national council for the digital in France. Stiegler’s politics consists in what he (following Plato and Derrida) calls the pharmacology of technology, namely the fact that technology is at the same time good and bad, remedy and poison. The politics of technology is to inhibit the toxicity in favor of the remedy. This also reveals his hope for a positive use of the pharmakon as resistance against an industrialization based on the exploitation of psycho-power, neuro-plasticity and the capacity to take care of oneself and of others (Stiegler 2010b).

Of course, the immediate decarbonization of our economies and a transition to renewable energy sources should be our first imperative. It could also be the case, as some geologists suggest, that geo-engineering will solve some of the problems that those changes would also address (Steffen et al. 2011). Others propose so-called “third way technologies” for carbon capture to reduce the atmospheric burden of CO2 during the time needed for the transition to a carbon-free economy (Flannery 2016). However, what we are now facing is much more than a geo-chemical problem; indeed, it would be naïve to believe that it is only a geological question. We are facing, rather, what Stiegler calls the “entropocene”: the becoming-entropic (in the sense of a world-wide exhaustion and ruination) of the biosphere due to what he calls a generalized toxification of all the systems that make up the human habitat on this planet: economic, social, technical, psychological, financial, juridical, educational, etc. (Stiegler 2017). In his view, those systems are all conditioned by a technical milieu which has been massively annexed and exploited by capitalist industry to promote an ever more nihilistic process of production and consumption that exclusively serves the goal of profit accumulation. Since the technical milieu also encompasses the Earth’s biosphere, this leads to a massive accumulation of entropy that has reached such a scale as to profoundly disrupt the geochemical processes of the earth.

For Stiegler, humanity is an originally technical phenomenon that is made up of three different organ systems: the psychosomatic organs of human individuals, social organizations, and all kinds of technical organs (Stiegler 2014). Those three organ systems are intimately intertwined and evolve on the basis of changes in the technical organs. And these technical organs must be understood as compensations for an original lack of natural properties. Stiegler has developed the latter point with reference to the story told by the sophist in Plato’s Protagoras, in which the fire stolen by Prometheus is a compensation for the fault of Epimetheus, who forgot to give the human being any skill or property. Stiegler takes a critical view of this compensation, or what he also calls the supplement. Taking up the concept of the pharmakon from Plato’s Phaedrus and Jacques Derrida’s “Plato’s Pharmacy” (Derrida 1981), he has further developed what he calls a “pharmacology” of technology (Stiegler 2011). Technics are understood as pharmaka, i.e., both medicine and poison. New technologies – and one can think of the internet as a digital pharmakon – are initially always toxic, and that is why they are in need of “therapies” which can turn the poison into a remedy. Politics, law, education, skill-based labor and the professions are for Stiegler domains where such therapies can be developed (Stiegler 2013). Since technological innovation has been delegated entirely to the market by neoliberalism and turned into a permanently accelerating process of “innovation for profit”, this therapeutic adoption of technology has become almost impossible, leaving only constant, frenetic, and increasingly blind adaptation. And this is for Stiegler the principal process behind the aggravation of the Anthropocene as entropocene.

An example that may allow readers to imagine how such entropy is produced is the use of technical organs (e.g. social networks, smart phones, automation, drones, etc.) for marketing and consumerism, which consequently destroys the psychosomatic organs, since it produces only a drive toward perpetual consumption and no longer cultivates desire and therapeutic investment in skills and objects – one can think of addiction to video games or the internet and how it leads to the collapse of established social organizations. That situation systematically diverts attention away from confronting our real situation on this planet. The restructuring of the economy as exo-somatisation, oriented around the digital attention economy, big data and what is called “algorithmic governance”, is taking us ever further into the abyss of nihilism. And yet the internet is potentially also the best instrument at hand for a collective care-taking of the Earth and its inhabitants on a global scale. In Stiegler’s For a New Critique of Political Economy (Stiegler 2010b), one of the alternatives put forward is an “economy of contribution”, which proposes developing technologies that serve the initiation of a new economy of real investment of desire and the fight against the drive-based economy of consumerism. If the drive-based economy ultimately leads to addiction, the economy of contribution hopes to turn libido into investment. That conversion is fundamentally a question of care: taking care of oneself and of others.

The entropocene marks the inability to construct such an economy of care and of libido. Instead, it leads, and will continue to lead, to the further spread of entropy. The anthropocene presents a global symptom, which cannot and must not be ignored as if it were simply a geological or a merely economic question. In 2015, the summer school of the Pharmakon academy – the philosophy school Stiegler started in 2010 in Epineuil in France – was dedicated to the “affirmation of a neganthropocene”. The neganthropocene calls for a new form of technological development that allows a so-called “bifurcation” – a radical change of direction in the thermodynamic sense – and seeks to produce qualitative differences for individuals as well as social groups. Recently, Stiegler has started a project with Plaine Commune in Saint-Denis, near Paris, to create what he calls a “truly smart city”: the realization of his philosophy for a new economy.

    *

Sloterdijk already provided a perceptive and prescient sketch of the global situation of humanity in the epoch of what is now called the Anthropocene in his 1989 treatise Eurotaoismus. Until the dawning of the planetary “limits to growth”, as the famous 1972 report on the discrepancy between global economic expansion and planetary resources issued by the Club of Rome was entitled, the Earth was conceived (and accordingly treated) by a modernizing and industrializing humanity exclusively as the backdrop and unlimited resource fund for its cultural-historical projects. The metaphysical and “antisymbiotic” logic that characterizes the historical drama of mobilization that is modernity is indifferent if not blind to the stage upon which it is enacted. For a humanity that aims to become “master and possessor of nature”, as Descartes’ famous phrase had it, the Earth is reduced to a servant and supplier of material and energetic resources (and it is today still overwhelmingly considered as such by politicians and economists in terms of the “ecosystem services” it provides). It is only when the play starts to ruin the stage, Sloterdijk wrote in Eurotaoismus, that the actors are forced to take another view both of the stage and of themselves. What was once called “nature” and conceived of as an ever-reliant, productive, abundant and robust backdrop has been fatally implicated in the maelstrom of human productivism and consumerism – “enframed” by it, as Heidegger would have it – with the destruction of its habitability impending if humanity does not start taking care of it and making it an integral if not central part of its cultural concerns. Referring to a phrase of the late Heidegger, Sloterdijk writes that the Earth can for us no longer be the endlessly patient “building-carrying” one that she was for all of humanity before us. The continued existence of so-called “nature”, which we have now uncovered as being just a small and fragile “film” covering a planetary body, can no longer be entrusted to her own autarky (since she has been scientifically exposed and technologically exploited), but will become dependent on us humans (Sloterdijk 1989). That realization also means the definitive end of any peace of mind in the cosmos, on which all human cultures until now have rested (Davis and Turpin 2015).

In the apocalyptic last chapter of his 2009 book You Must Change Your Life, Sloterdijk claims that the awareness that we cannot continue our current care-less lifestyles any longer but need to “change our lives” and start “taking care of the whole” is nowadays almost universally shared, forming even the quintessence of today’s Zeitgeist. Arguing that the global crisis shares many characteristics with the ancient God of monotheism, he speculates that this crisis will inevitably initiate, and will have to initiate, nothing less than a global immunological turn, i.e., a revolutionary transformation in the way humans construct and organize their immuno-spheric residence on the planet: “a new world-forming gesture” in terms of a new global project of sphere-construction, understood first of all as a transformation from local to global immunization strategies, from local protectionisms to a “protectionism of the whole” (Sloterdijk 2014a). This will require a “social tipping point” in the awareness, willingness and ability to act collectively as Earthlings.

A viable future for humanity on this planet can therefore only be conceived for Sloterdijk on the basis of constructing a “global co-immunity structure” or a “global immune-design”, infused by a spirit of “co-immunism”, based on the awareness of a shared ecological and immunological situation and the realization that this new situation, which is actually that of the Anthropocene, cannot be dealt with on the basis of the existing local techno-cultural resources only but needs a planet-wide “logic of cooperation” (Sloterdijk 2014a). The technological reversion suggested by Sloterdijk is one that he calls a homeotechnological turn, i.e., a turn from the traditional, largely contra-natural, dominating, Earth-ignoring and Earth-ignorant allotechnological paradigm to a co-natural, non-dominating and Earth-caring homeotechnological paradigm. That also means the reconstruction of the global technosphere from a machine of exploitation and violation of the planetary oikos into an engine that co-operates and co-produces with the Earth’s bio- and atmosphere, an idea that resonates strongly with Stiegler’s negentropic turn (Sloterdijk 2015). Like Stiegler, who sometimes tends to identify the anthropocene with Heidegger’s Gestell, i.e., re-interpreted as the Ereignis of the Industrial Revolution as the deployment of the thermodynamic machine (the entropic character of which was not perceived by Heidegger, any more than he took account of the notion of entropy in his thinking of the physis), Sloterdijk also thinks of the homeotechnological revolution as a benign turn of the Gestell towards a global-ecological “housing” project (Gehäuse) (Sloterdijk 2001).

In a lecture given at the climate conference in Copenhagen in December 2009, Sloterdijk suggests that a homeotechnological conversion of the human noosphere and technosphere around the Earth, and thus the institution of a co-operative and co-productive relation of both anthropospheres with the biosphere, might eventually lead to the explication or unconcealing – here meant in the quasi-Heideggerian sense of the term – of a “hybrid-Earth” that is capable of much more than we can now imagine from our still allotechnologically programmed perspective, i.e., a homeotechnologized Earth whose capacities might very well be multiplied to an unimaginable extent (Sloterdijk 2015).

Applying Spinoza’s famous dictum (from his Ethica) that “Nobody knows what a body can do” to the body of the Earth, Sloterdijk makes the wager that a homeotechnological turn of our immuno-spheropoietic being-on-the-planet forms our best and most hopeful answer to the challenge of the anthropocene, thereby referring to the bold ideas of the famous American architect Richard Buckminster Fuller, whose notion of Spaceship Earth as expounded in his 1968 book Operating Manual for Spaceship Earth has had a decisive influence on Sloterdijk’s sphero-immunological perception of the global ecological crisis and the anthropocene (Sloterdijk 2015, 108-9).

    As Sloterdijk already emphasizes in the final section of his 1993 book Weltfremdheit, such a global co-immunization project could very well prove to be a challenge that is too big for the anthropos, that is to say: as it currently exists (Sloterdijk 1993). Yet if there is one over-arching insight that runs through all of Sloterdijk’s onto-anthropological reflections, it is that humans are those beings that are always confronted with problems that are far too big for them but that they nevertheless cannot avoid dealing with. This structural burdening with what the tragic Greeks called ta megala, the “big things”, which puts human beings under permanent “growth stress” and/or “format stress” – today unfolding as “planetarization stress” (Sloterdijk 1995) – is what anthropogenesis as hominization and coming-into-the-world through sphero-poietic expansion is all about. And philosophy’s inaugural task is to be the birth-helper of this process of uncanny coming-into-the world (Sloterdijk 1993).

    If the human matures by increasing his awareness and responsibility through confrontations with the “big things”, the anthropocenic challenge of creating a global, i.e., planetary co-immunity structure will probably make clear for the very first time, and to all those involved, what “growing up” in its most general sense truly means for humanity (Sloterdijk 1993). Although the anthropos charged with responsibility is still “below the age of maturity” today (Sloterdijk 2015), the challenge of the anthropocene forces him, and provides him with the chance, to assume and acquire the proper maturity.

Although he never gets very specific about the details, Sloterdijk claims that the anthropocene in this sense requires an entirely new, still to be invented mode of “big politics”, one that he designated as “hyperpolitics” in a book entitled Im selben Boot. Versuch über die Hyperpolitik (In the Same Boat. An Essay on Hyperpolitics) from 1995 that is, like many other books from that period, a preliminary sketch for the Spheres project (Sloterdijk 1995). After the “paleopolitics” of pre-sedentary, pre-agricultural societies – the “miracle of the repetition of humans by humans” – and the “classic politics” of agriculture-based cities and nation-states as the perpetuation of that miracle in larger formats, today’s expansion of humans’ spheropoiesis toward the global, forcing them to live together in even larger formats, calls for a hyperpolitics, i.e., a global “state-athletics” for which there are no traditional examples at hand and for which the existing modes of “national-egoism” politics in fact only act as blockades. As in 1995, we can still observe a huge disproportion between the forces that are necessary and the weaknesses that are available, and it still seems all too obvious that “creating jobs on the Titanic” continues to represent the pinnacle of current political intelligence (although piling up debts to continue unbridled consumption is today’s preferred policy). And Sloterdijk’s spot-on remark after the failed Copenhagen climate summit of 2009, that citizens all over the globe should safeguard themselves from their own governments, seems still all too valid after the 2015 Paris summit.

    The Herculean, currently impossible task for a coming hyperpolitics is to transform today’s “monster-international of end-users” or the hypermass of “last men with no return” into a global solidarity collective that takes care again of itself and the world and understands itself as a link between its ancestors and its offspring and not egoistically as the exclusive end-user of itself and its own life chances, an important theme Sloterdijk extensively elaborated upon in his 2014 book Die schrecklichen Kinder der Neuzeit. Über das anti-genealogische Experiment der Moderne (The Terrible Children of Modernity. On the Anti-Genealogical Experiment of the Modern Age; Sloterdijk 2014b). As such, hyperpolitics is the first politics of last men and should be understood as the continuation of paleopolitics with other means and on a global scale.

Since human spheropoiesis has gone global and claims to encompass the entire biosphere, the situation of humanity vis-à-vis the planet has reversed, as the Swedish earth system scientist Johan Rockström proclaims, from a “small world, big planet” situation into a “big world, small planet” one (Rockström and Klum 2015). To preserve what he calls a “safe operating space for humanity” within the planetary boundaries, he argues that we are in need of a global governance of the earth system in order to reconnect human techno-cultural systems with the biosphere in a co-constructive fashion. There already exists a “Global Earth Observation System of Systems” (GEOSS), which tracks many key planetary boundary processes. Intelligent and democratic use of such a system might indeed usher in a “good anthropocene” beneficial to all inhabitants of the earth system. It could be one of the supports of the global immune system that is necessary for our collective survival, as Sloterdijk claims. Yet it is also important to make sure that life in the anthropocene is not just about sur-vival. It should also be a “good life”, a “life worth living” in Stiegler’s expression.

But how can Sloterdijk’s polyspherology, which takes its visual image from bubbles, be prevented from becoming the soil for fascism? The current refugee problem seems to be the touchstone of the foam theory. In an interview with the German magazine Cicero in early 2016, Sloterdijk claims that “we haven’t learned the praise of the border”, and that “The Europeans will sooner or later develop an efficient common border policy. In the long run the territorial imperative prevails. Finally, there is no moral obligation to self-destruction” (Sloterdijk 2016b). For sure, borders define the interiority and exteriority of bubbles, and hence realize such a polycosmology; however, they thereby also blur the line between fascism and co-existence. In what sense can we further interpret the concept of co-existence, which has recently appeared in many other works dealing with the anthropocene and the ecological crisis? Co-existence implies first of all communication and coalition – a positive concept of the immune system under the current pharmacological condition, and one that stands as the opposite of Brexit. We will come back to the politics of co-existence later, when we address the concept of the “internation” as an alternative political imaginary.

    Dealing With the Apocalypse. A New Kind of Politics for the Anthropocene

Let us try to conclude by restating the classic question: “what is to be done?” Recently there has been a lot of discussion about the question of scale, and the Anthropocene is a scale problem of the highest order. The well-known Belarusian-born writer Evgeny Morozov has stated in almost all of his recent speeches that there is in fact NO alternative to the current neoliberal model of Silicon Valley – you are “free to use and free to give your data” – because the “Silicon Valley ideology” is so powerful that no individual effort will ever be able to challenge it; only the intervention of a body like the European Union could have a substantial effect. However, he does not expect this to happen. On the other hand, British accelerationists like Nick Srnicek and Alex Williams have argued that after Occupy Wall Street, the resistances or “micropolitics” that continue to spring up everywhere (such as urban gardening or dumpster diving) are not able to “scale up” to really challenge capitalism (Srnicek and Williams 2013). They criticize the individualist morality of the anarchists as a self-limitation of revolutionary force, one that falls prey to appropriation by capitalism (Srnicek and Williams 2016: 29-37). This leaves us in a situation of helplessness, in which micropolitics becomes self-consolation par excellence. The authors propose what they call an accelerationist politics inspired by the Cybersyn project in the socialist Chile of the early seventies, namely a socialist appropriation of technology in order to construct what they call a “post-work” economy, which includes 1) full automation, 2) the reduction of the working week, 3) universal basic income and 4) the diminishment of the work ethic (2016: 127). Except for the last point, which is very close to the anarchists, their vision can be superimposed on the agenda of the Chinese Communist Party, which is unfortunately built upon a rather simple if not naïve understanding of technology.

First of all, it remains to be debated whether previous forms of resistance are futile, especially when such claims are no more than pure intellectual activities. Indeed, such claims seem like a revival of cynicism, allowing intellectuals to stay in front of the computer and renounce direct action on the street; and it sometimes seems even grotesque when some respond to this “impasse” by “fully appropriating” Facebook or Google, as if “high technology” had necessarily led to the illusory “post-capitalism” in the sense of Paul Mason (Mason 2016). A more critical attitude towards technological acceleration should be taken, one that goes beyond the opposition between optimism and pessimism. Both proposals – for the neganthropocene and for co-immunization – should be taken further as concrete political acts. Their realization can only be attained by going back to the question of the local. Locality is central for both Stiegler and Sloterdijk in terms of resistance against global capitalism, and locality can only be achieved through personal contacts and concrete projects, which may seem further and further removed from grand intellectual revolutionary plots. We don’t pretend to know what is to be done. However, for effectively confronting the Anthropocene, and responding to it in a systematic and scalable way, we would like to propose two points concerning the role of the state and the form of resistance.

If states want to avoid being liquidated by the neoliberal economy, they will have to assume responsibility. We all know that nation-states had no problem whatsoever with intervening after the financial crisis of 2008, when the European banks ran into trouble. It was a moment when European governments undeniably showed that they are still capable of doing things on a global scale – though in the wrong way – in stark contrast to Hardt and Negri’s thesis of the power of Empire and the withering-away of nation-states (Hardt and Negri 2000). It seems that the nation-state should be obliged to take the problem of the Anthropocene seriously and act upon it – not just by “going green”, but also by seriously addressing what Stiegler diagnoses as the entropic becoming of our world. However, it is also undeniably true that national governments have become pawns in the hands of global oligarchies and that national sovereignty has de facto been eliminated and replaced by the dictates of the financial markets, with the recent fate of Greece being the most pitiable example. How much hope can we still place in our governments? One should indeed be skeptical about them; at the moment, however, they are the only institutions, besides transnational enterprises, that can effectively mobilize resources for large-scale projects.

The anti-globalization movement of the late 20th century and the first decade of the new millennium popularized the concept of the multitude, yet the silence of the anti-globalization movement in recent years suggests that the form of micropolitics or artistic gesture it proposed is no longer effective for dealing with the Anthropocene. By the same token, we already know about the failure of the “third sector” of NGOs, which since the anti-globalization movement has not cast any new light on the future. We also know that the post-World War II institution of the United Nations, despite its innumerable programs, doesn’t have any real executive power. Surely one can imagine, as many have done, that in order to form a federal body more powerful than the United Nations, a third world war would have to break out – and if the Anthropocenic situation worsens, such a scenario is not at all unrealistic.

By way of conclusion, we want to gesture toward the possibility of establishing an “internation”, a concept developed by Marcel Mauss in 1920 and recently taken up by Stiegler to propose the constitution of a new form of public power that might be able to defy the forces of capital and guide humanity into another future than the barbaric and intolerable “no future” prescribed by neoliberalism’s TINA (“There is no alternative”) mantra (Stiegler 2015b). Mauss delivered the paper “La nation et l’internationalisme” at the colloquium “The Problem of Nationality” organized by the Aristotelian Society in London, in which he expressed the urgency for philosophers to take an avant-garde approach to the question of the nation and the internation (Mauss 1920). The increasing economic interdependence after the First World War constituted for Mauss a “défaut” on the basis of which he also proposed a “moral interdependence” of mutual aid, as well as a reduction of sovereignty in order to reduce war. Stiegler took up Mauss’s notion of the internation recently in States of Shock (Stiegler 2015b) and interpreted it through the lens of Simondon’s concept of individuation.

    Bernard Stiegler. Courtesy of Alchetron

A nation, for Stiegler, is a project of “collective individuation” through the establishment of a res publica. The internation is a project that takes this process further, re-institutionalizing the production and dissemination of knowledge so as to re-create the circuits of transindividuation in the sciences, which are now dominated by the marketisation and commercialization of knowledge. Stiegler imagines this internation first of all as a project for academics, and scholars more generally, all over the world (what he calls “interscience”) to unite in resolutely refusing their recruitment into the global economic world war unleashed by neoliberalism, and instead to sign a global peace treaty, backed up by a new legislative body (Stiegler 2015b).

This should start the re-forging of the digital networks into tools for cooperation and care, and for the elevation of collective intelligence. De facto, this internation already exists (and has existed for a long time) in the form of collaborations among research institutes, schools and universities worldwide. However, the research funding strategies of the past decades in Europe (if not worldwide) have rigidified these collaborations and turned them into zombie-like dogmas. The political visions of researchers are always subordinated to the hidden agenda of the market and commercial value (what is called the “valorization agenda”). There is no lack of awareness of this among academics, but at the moment there is no effective strategy to act against the market hegemony. The formation of an internation could foster such a strategy. Yet it will have to become explicitly politicized in order to function as a catalyst for the construction of new forms of global socialization and cooperation that could usher in the neganthropocene and bring about a large-scale homeotechnological revolution in the sense of Sloterdijk. The only alternative would be to surrender to the brutal dictates of a consumerist capitalist innovation that will only produce more entropy, impotence and stupidity. In the words of Stiegler, we need to mobilize the internation against disindividuation.

The creation of an internation has a meaning for our epoch, and indeed there is an urgency to it, in view of the destructive nature of the anthropocene and the entropic becoming of the technological world. It is certainly not only the responsibility of intellectuals and universities, and a larger-scale association with sectors and groups outside the university is certainly necessary; but it is also important to reflect on these matters at the level of locality and localization, according to different orders of magnitude. To pass into act is not only a question of perception and action but also, and probably even more profoundly, a process of psychic and collective individuation, which doesn’t come naturally. It takes courage to create such a condition and such a quantum leap. Retrospectively, Mauss’s remark on intellectual courage can therefore still serve as a Mahnruf – a call of admonition – to contemporary intellectuals:

Why didn’t the philosophers take an avant-garde position on this? They understood it well when it was a matter of founding the doctrine of democracy and of nationalities. The British and the French were ahead of their time, and one shouldn’t forget Kant and Fichte. Why did they choose to stay at the back, and serve vested interests? (Mauss 1920)

We would finally like to ask here, most likely in deviation from Stiegler’s own intentions, whether it would be possible to conceive of such an internation as an enabling strategy for what Antonio Negri and Judith Revel have called “the invention of the common” (Negri and Revel 2008), i.e. as an intermediate step toward the establishment of a “global commons” of knowledge and capabilities, and ultimately a common global authority not only beyond the private but also beyond the public. This return to Negri does not mean that we are proposing to undermine the role of the state, which we have invoked earlier. On the contrary, if the global economy in the past decades has been running on the principle of privatization and marketization, as Slavoj Žižek has rightly argued (Žižek 2009), and if the recent triumph of Donald Trump as well as Brexit signal a return to a conservative revolution founded on the strengthening of sovereignty and border control, “communization” will be a counter-process against the struggling self-preservation of capitalism. In that case, the economy of the commons inscribed in the project of the internation could become a vehicle for the creation of a truly global co-immunity structure, and a truly global engine of neganthropy. But for this to be possible, there must first be a re-orientation of strategies in teaching, research and funding within universities.

    References

Barnosky, Anthony D. et al. 2012. “Approaching a state shift in Earth’s biosphere”, Nature, No. 486, 7 June 2012: 52-58.

Bonneuil, Christophe and Fressoz, Jean-Baptiste. 2016. The Shock of the Anthropocene. The Earth, History and Us. London: Verso.

Crutzen, Paul. 2002. “Geology of Mankind”, Nature, No. 415, 3 January 2002: 23.

    Davis, Heather and Turpin, Etienne. 2015. Art in the Anthropocene: Encounters Among Aesthetics, Politics, Environments and Epistemologies. London: Open Humanities Press.

    Derrida, Jacques. 1981. “Plato’s Pharmacy” in Dissemination. Translated by Barbara Johnson. Chicago: University of Chicago Press.

    Flannery, Tim. 2016. Atmosphere of Hope. Solutions to the Climate Crisis. London: Penguin.

    Hardt, Michael and Negri, Antonio. 2000. Empire. Cambridge: Harvard University Press.

    L’impansable. 2006. L’Effondrement du temps. Tome 1, Pénétration. Paris: Le Grand Souffle Editions.

    Jonas, Hans. 1985. The Imperative of Responsibility. In Search of an Ethics for the Technological Age. Chicago: University of Chicago Press.

    Kolbert, Elizabeth. 2014. The Sixth Extinction: An Unnatural History. London: Bloomsbury.

Latour, Bruno. 2008. “A Cautious Prometheus? A Few Steps Toward a Philosophy of Design (with Special Attention to Peter Sloterdijk)”. In Fiona Hackney, Jonathan Glynne and Viv Minton (editors), Proceedings of the 2008 Annual International Conference of the Design History Society, Falmouth, 3-6 September 2008. E-books, Universal Publishers, 2-10.

    Mason, Paul. 2016. PostCapitalism: A Guide to Our Future. London: Penguin.

Mauss, Marcel. 1920. “La nation et l’internationalisme.” Paper delivered in French at the colloquium “The Problem of Nationality”, Proceedings of the Aristotelian Society, London, 20, 1920: 242-251.

McPherson, Guy. 2013. Going Dark. Baltimore: PublishAmerica.

    Moore, Jason. 2015. Capitalism in the Web of Life: Ecology and the Accumulation of Capital. London: Verso.

    Moore, Jason. 2016. Anthropocene or Capitalocene? Nature, History, and the Crisis of Capitalism. Oakland: PM Press.

    Negri, Antonio and Revel, Judith. 2008. “Inventing the Common”. Multitudes, 13 May 2008: http://www.generation-online.org/p/fp_revel5.htm

    Rockström, Johan and Klum, Mattias. 2015. Big World, Small Planet. Abundance Within Planetary Boundaries. Stockholm: Bokförlaget Max Ström.

    Sloterdijk, Peter. 1988. Critique of Cynical Reason. Minneapolis: University of Minnesota Press.

    Sloterdijk, Peter. 1989. Eurotaoismus. Eine Kritik der politischen Kinetik. Frankfurt am Main: Suhrkamp.

    Sloterdijk, Peter. 1993. Weltfremdheit. Frankfurt am Main: Suhrkamp.

    Sloterdijk, Peter. 1995. Im selben Boot. Versuch über die Hyperpolitik. Frankfurt am Main: Suhrkamp.

    Sloterdijk, Peter. 2001. Nicht gerettet. Versuche nach Heidegger. Frankfurt am Main: Suhrkamp.

Sloterdijk, Peter. 2004. Sphären. Frankfurt am Main: Suhrkamp.

Sloterdijk, Peter. 2009. “Das 21. Jahrhundert beginnt mit dem Debakel vom 19. Dezember 2009”, Süddeutsche Zeitung, 19 December 2009.

    Sloterdijk, Peter. 2014a. You Must Change Your Life. Cambridge-Malden: Polity.

    Sloterdijk, Peter. 2014b. Die schrecklichen Kinder der Neuzeit. Über das anti-genealogische Experiment der Moderne. Frankfurt am Main: Suhrkamp.

    Sloterdijk, Peter. 2015. Was geschah im 20. Jahrhundert? Frankfurt am Main: Suhrkamp.

    Sloterdijk, Peter. 2016a. Foams: Spheres Volume III: Plural Spherology. Los Angeles: Semiotext(e).

    Sloterdijk, Peter. 2016b. “Es gibt keine moralische Pflicht zur Selbstzerstörung”. Cicero Magazin für politische Kultur, 28 January 2016: http://www.cicero.de/berliner-republik/peter-sloterdijk-ueber-merkel-und-die-fluechtlingskrise-es-gibt-keine-moralische

    Sloterdijk, Peter and Stiegler, Bernard. 2016. “Welcome to the Anthropocene. A Public Debate”, Nijmegen, 27 June 2016: https://www.youtube.com/watch?v=HoxPk4VBbOk

Srnicek, Nick and Williams, Alex. 2013. “#ACCELERATE MANIFESTO for an Accelerationist Politics”: www.criticallegalthinking.com/2013/05/14/accelerate-manifesto-for-an-accelerationist-politics/

Srnicek, Nick and Williams, Alex. 2016. Inventing the Future. London: Verso.

    Steffen, Will et al. 2011. “The Anthropocene: From Global Change to Planetary Stewardship”, AMBIO Vol. 40: 739-761.

    Stiegler, Bernard. 1998. Technics and Time Vol. 1. The Fault of Epimetheus. Stanford: Stanford University Press.

    Stiegler, Bernard. 2010a. Taking Care of Youth and the Generations. Stanford: Stanford University Press.

    Stiegler, Bernard. 2010b. For a New Critique of Political Economy. Cambridge-Malden: Polity.

    Stiegler, Bernard. 2013. What Makes Life Worth Living. On Pharmacology. Cambridge-Malden: Polity.

    Stiegler, Bernard. 2014. Symbolic Misery Vol. 1. The Hyperindustrial Epoch. Cambridge-Malden: Polity.

Stiegler, Bernard. 2015a. “Ce n’est qu’en projetant un véritable avenir qu’on pourra combattre Daech”. Interview with Le Monde, 19 November 2015: http://www.lemonde.fr/emploi/article/2015/11/19/bernard-stiegler-ce-n-est-qu-en-projetant-un-veritable-avenir-qu-on-pourra-combattre-daech_4813660_1698637.html

    Stiegler, Bernard. 2015b. States of Shock: Stupidity and Knowledge in the 21st Century. Cambridge-Malden: Polity.

    Stiegler, Bernard. 2016. Dans la disruption. Comment ne pas devenir fou? Paris: Les Liens qui Libèrent.

    Stiegler, Bernard. 2017. Automatic Society: Volume 1: The Future of Work. Cambridge-Malden: Polity.

    Vial, Stéphane. 2016. "Bernard Stiegler: la fin d'un philosophe autrefois inspirant." Medium: https://medium.com/@svial/bernard-stiegler-la-fin-dun-philosophe-autrefois-inspirant-ff59c1ac4c8#.d7sqsaa6s

    Ward, Peter. 2015. The Medea Hypothesis: Is Life on Earth Ultimately Self-Destructive? Princeton: Princeton University Press.

    Wark, McKenzie. 2015a. "The Capitalocene." Public Seminar: http://www.publicseminar.org/2015/10/the-capitalocene/

    Wark, McKenzie. 2015b. Molecular Red: Theory for the Anthropocene. London: Verso.

    Westbroek, Peter. 1992. Life as a Geological Force: Dynamics of the Earth. New York: W.W. Norton & Co.

    Wyke, Tom. 2016. “Killer heatwave alert after temperature hits a record 51C: More deaths feared as hundreds die and India sees its hottest day on record”, Daily Mail, 20 May 2016.

    Žižek, Slavoj. 2009. First As Tragedy, Then As Farce. London: Verso.

    [1] See Barnosky et al. 2012.

    [2] The event took place in Paris just before the COP 21 in November 2015 and was organized by Philippe Descola and Catherine Larrère.

  • Daniel Greene – Digital Dark Matters

    Daniel Greene – Digital Dark Matters

    a review of Simone Browne, Dark Matters: On the Surveillance of Blackness (Duke University Press, 2015)

    by Daniel Greene

    ~

    The Book of Negroes was the first census of black residents of North America. In it, the British military took down the names of some three thousand ex-slaves between April and November of 1783, alongside details of appearance and personality, destination and, if applicable, previous owner. The self-emancipated—some free, some indentured to English or German soldiers—were seeking passage to Canada or Europe, and lobbied the defeated British Loyalists fleeing New York City for their place in the Book. The Book of Negroes thus functioned as “the first government-issued document for state-regulated migration between the United States and Canada that explicitly linked corporeal markers to the right to travel” (67). An index of slave society in turmoil, its data fields were populated with careful gradations of labor power, denoting the value of black life within slave capitalism: “nearly worn out,” “healthy negress,” “stout labourer.”  Much of the data in The Book of Negroes was absorbed from so-called Birch Certificates, issued by a British Brigadier General of that name, which acted as passports certifying the freedom of ex-slaves and their right to travel abroad. The Certificates became evidence submitted by ex-slaves arguing for their inclusion in the Book of Negroes, and became sites of contention for those slave-owners looking to reclaim people they saw as property.

    If, as Simone Browne argues in Dark Matters: On the Surveillance of Blackness, “the Book of Negroes [was] a searchable database for the future tracking of those listed in it” (83), the details of preparing, editing, monitoring, sorting and circulating these data become direct matters of (black) life and death. Ex-slaves would fight for their legibility within the system through their use of Birch Certificates and the like; but they had often arrived in New York in the first place through a series of fights to remain illegible to the “many start-ups in slave-catching” that arose to do the work of distant slavers. Aliases, costumes, forged documents and the like were on the one hand used to remain invisible to the surveillance mechanisms geared towards capture, and on the other hand used to become visible to the surveillance mechanisms—like the Book—that could potentially offer freedom. Those ex-slaves who failed to appear as the right sort of data were effectively “put on a no-sail list” (68), and either held in New York City or re-rendered into property and delivered back to the slave-owner.

    Start-ups, passports, no-sail lists, databases: These may appear anachronistic at first, modern technological thinking out of sync with colonial America. But Browne deploys these labels with care and precision, like much else in this remarkable book. Dark Matters reframes our contemporary thinking about surveillance, and digital media more broadly, through a simple question with challenging answers: What if our mental map of the global surveillance apparatus began not with 9/11 but with the slave ship? Surveillance is considered here not as a specific technological development but as a practice of tracking people and putting them into place. Browne demonstrates how certain people have long been imagined as out of place and that technologies of control and order were developed in order to diagnose, map, and correct these conditions: “Surveillance is nothing new to black folks. It is a fact of antiblackness” (10). That this “fact” is often invisible even in our studies of surveillance and digital media more broadly speaks, perversely, to the power of white supremacy to structure our vision of the world. Browne’s apparent anachronisms make stranger the techniques of surveillance with which we are familiar, revealing the dark matter that has structured their use and development this whole time. Though this dark matter is difficult to visualize, Browne shows us how to trace it through its effects: the ordering of people into place, and the escape from that order through “freedom acts” of obfuscation, sabotage, and trickery.

    This then is a book about new (and very old) methods of research in surveillance studies in particular, and digital studies in general, centered in black studies—particularly the work of critical theorists of race such as Saidiya Hartman and Sylvia Wynter, who find in chattel slavery a prototypical modernity. More broadly, it is a book about new ways of engaging with our technocultural present, centered in the black diasporic experience of slavery and its afterlife. Frantz Fanon is a key figure throughout. Browne introduces us to her own approach through an early reflection on the revolutionary philosopher’s dying days in Washington, DC, when he was overcome with paranoia over the very real surveillance to which he suspected he was subjected. Browne’s FOIA requests to the CIA regarding their tracking of Fanon during his time at the National Institutes of Health Clinical Center returned only a newspaper clipping, a book review, and a heavily redacted FBI memo reporting on Fanon’s travels. So she digs further into the archive, finding in Fanon’s lectures at the University of Tunis, delivered in the late 1950s after French colonial authorities expelled him from Algeria, a critical exploration of policing and surveillance. Fanon’s psychiatric imagination, which grants such visceral connection between white supremacist institutions and lived black experience in The Wretched of the Earth, here addresses the new techniques of ‘control by quantification’—punch clocks, time sheets, phone taps, and CCTV—in factories and department stores, and the alienation engendered in the surveilled.

    Browne’s recovery of this work grounds a creative extension of Fanon’s thinking into surveillance practices and surveillance studies. From his concept of “epidermalization”—“the imposition of race on the body” (7)—Browne builds a theory of racializing surveillance. Like many other key terms in Dark Matters, this names an empirical phenomenon—the crafting of racial boundaries through tracking and monitoring—and critiques the “absented presence” (13) of race in surveillance studies. Its counterpoint is dark sousveillance, a revision of Steve Mann’s term for watching the watchers that, again, describes both the freedom acts of black folks against a visual field saturated with racism and an epistemology capable of perceiving, studying, and deconstructing apparatuses of racial surveillance.

    Each chapter of Dark Matters presents a different archive of racializing surveillance paired with reflections on black cultural production Browne reads as dark sousveillance. At each turn, Browne encourages us to see in slavery and its afterlife new modes of control, old ways of studying them, and potential paths of resistance. Her most direct critique of surveillance studies comes in Chapter 1’s precise exegesis of the key ideas that emerge from reading Jeremy Bentham’s plans for the Panopticon and Foucault’s study of it—the signal archive and theory of the field—against the plans for the slave ship Brookes. It turns out Bentham travelled on a ship transporting slaves on the very journey during which he sketched out the Panopticon, a model penitentiary wherein, through the clever use of lights, mirrors, and partitions, prisoners are totally isolated from one another and never sure whether they are being monitored or not. The archetype for modern power as self-discipline is thus nurtured, counter to its own telling, alongside sovereign violence. Browne’s reading of archives from the slave ship, the auction block, and the plantation reveals the careful biopolitics that created “blackness as a saleable commodity in the Western Hemisphere” (42). She asks how “the view from ‘under the hatches’” of Bentham’s Turkish ship, transporting, in his words, “18 young negresses (slaves),” might change our narrative about the emergence of disciplinary power and the modern management of life as a resource. It becomes clear that the power to instill self-governance through surveillance did not subordinate but rather partnered with the brutal spectacle of sovereign power that was intended to educate enslaved people on the limits of their humanity. This correction to the Foucauldian narrative is sorely necessary in a field, and in a general political conversation about surveillance, that too often focuses on the technical novelty of drones, to give one example, without a connection to a generation learning to fear the skies.

    “Stowage of the British slave ship Brookes under the regulated slave trade act of 1788.” Illustration. 1788. Library of Congress Rare Book and Special Collections Division, Washington, D.C.

    These sorts of theoretical course corrections are among the most valuable lessons in Dark Matters. There is fastidious empirical work here, particularly in Chapter 2’s exploration of the Book of Negroes and colonial New York’s lantern laws requiring all black and indigenous people to bear lights after dark. But this empirical work is not the book’s focus, nor its main promise. That promise comes in prompting new empirical and political questions about how we see surveillance and what it means, and for whom, through an archaeology of black life under surveillance (indeed, Chapter 4, on airport surveillance, is the one I find weakest largely because it abandons this archaeological technique and focuses wholly on the present). Chapter 1’s reading of Charles William Tait’s prescriptions for slave management, for example, is part of a broader turn in the study of the history of capitalism where the roots of modern business practices like data-driven human resource management are traced to the supposedly pre-modern slave economy. Chapter 3’s assertion that slave branding “was a biometric technology…a measure of slavery’s making, marking, and marketing of the black subject as commodity” (91) does similar work, making strange the contemporary security technologies that purport to reveal racial truths which unwilling subjects do not give up. Facial recognition technologies and other biometrics are calibrated based on what Browne calls a “prototypical whiteness…privileged in enrollment, measurement, and recognition processes…reliant upon dark matter for its own meaning” (162). Particularly in the context of border control, these default settings reveal the calculations built into our security technologies regarding who “counts” enough to be recognized, calculations grounded in an unceasing desire for new means with which to draw clear-cut racial boundaries.

    The point here is not that a direct line of technological development can be drawn from brands to facial recognition or from lanterns to ankle bracelets. Rather, if racism, as Ruth Wilson Gilmore argues, is “the state-sanctioned or extralegal production and exploitation of group-differentiated vulnerability to premature death,” then what Browne points to are methods of group differentiation, the means by which the value of black lives is calculated and by which those calculations are stored, transmitted, and concretized in institutional life. If Browne’s cultural studies approach neglects a sustained empirical engagement with a particular mode of racializing surveillance—say, the uneven geography produced by the Fugitive Slave Act, mentioned in passing in relation to “start-ups in slave catching”—it is because she has taken on the unenviable task of shifting the focus of whole fields to dark matter previously ignored, opening a series of doors through which readers can glimpse the technologies that make race.

    Here then is a space cleared for surveillance studies, and digital studies more broadly, in an historical moment when so many are loudly proclaiming that Black Lives Matter, when the dark sousveillance of smartphone recordings has made the violence of institutional racism impossible to ignore. Work in digital studies has readily and repeatedly unearthed the capitalist imperatives built into our phones, feeds, and friends lists. Shoshana Zuboff’s recent work on “surveillance capitalism” is perhaps a bellwether here: a rich theorization of the data accumulation imperative that transforms intra-capitalist competition, the nature of the contract, and the paths of everyday life. But her account of the growth of an extractive data economy that leads to a Big Other of behavior modification does not so far have a place for race.

    This is not a call on my part to sprinkle a missing ingredient atop a shoddy analysis in order to check a box. Zuboff is critiqued here precisely because she is among our most thoughtful, careful critics of contemporary capitalism. Rather, Browne’s account of surveillance capitalism—though she does not call it that—shows that race does not need to be introduced to the critical frame from outside. That dark matter has always been present, shaping what is visible even if it goes unseen itself. This manifests in at least two ways in Zuboff’s critique of the Big Other. First, her critique of Google’s accumulation of “data exhaust” is framed primarily as a ‘pull’ of ever more sites and sensors into Google’s maw, passively given up by users. But there is a great deal of “push” here as well. The accumulation of consumable data also occurs through the very human work of solving CAPTCHAs and scanning books. The latter is the subject of an iconic photo that shows the brown hand of a Google Books scanner—a low-wage subcontractor, index finger wrapped in plastic to avoid cuts from a day of page-turning—caught on a scanned page. Second, for Zuboff part of the frightening novelty of Google’s data extraction regime is its “formal indifference” to individual users, as well as to existing legal regimes that might impede the extraction of population-scale data. This, she argues, stands in marked contrast to the midcentury capitalist regimes which embraced a degree of democracy in order to prop up both political legitimacy and effective demand. But this was a democratic compromise limited in time and space. Extractive capitalist regimes of the past and present, including those producing the conflict minerals so necessary for hardware running Google services, have been marked by, at best, formal indifference in the North to conditions in the South. An analysis of surveillance capitalism’s struggle for hegemony would be greatly enriched by a consideration of how industrial capitalism legitimated itself in the metropole at the expense of the colony. Nor is this racial-economic dynamic and its political legitimation purely a cross-continental concern. US prisons have long extracted value from the incarcerated, racialized as second-class citizens. Today this practice continues, but surveillance technologies like ankle bracelets extend this extraction beyond prison walls, often at parolees’ expense.

    A Google Books scanner’s hand, caught working on W.E.B. Du Bois’ The Souls of Black Folk. Via The Art of Google Books.

    Capitalism has always, as Browne’s notes on plantation surveillance make clear, been racial capitalism. Capital enters the world arrayed in the blood of primitive accumulation, and reproduces itself in part through the violent differentiation of labor powers. While the accumulation imperative has long been accepted as a value shaping media’s design and use, it is unfortunate that race has largely entered the frame of digital studies, and particularly, as Jessie Daniels argues, internet studies, through a study of either racial variables (e.g., “race” inheres in the body of the nonwhite person and causes other social phenomena) or racial identities (e.g., race is represented through minority cultural production, racism is produced through individual prejudice). There are perhaps good institutional reasons for this framing, owing to disciplinary training and the like, beyond the colorblind political ethic of much contemporary liberalism. But it has left us without digital stories of race, still perceived to be a niche concern, on par with our digital stories of capitalism—much less digital stories of racial capitalism (although there are certainly exceptions, particularly in the work of writers like Lisa Nakamura and her collaborators).

    Browne provides a path forward for a study of race and technology more attuned to institutions and structures, to the long shadows old violence casts on our daily, digital lives. This slim, rich book is ultimately a reflection on method, on learning new ways to see. “Technology is made of people!” is where so many of our critiques end, discovering, once again, the values we build into machines. This is where Dark Matters begins. And it proceeds through slave ships, databases, branding irons, iris scanners, airports, and fingerprints to map the built project of racism and the work it takes to pass unnoticed in those halls or steal the map and draw something else entirely.

    _____

    Daniel Greene holds a PhD in American Studies from the University of Maryland. He is currently a Postdoctoral Researcher with the Social Media Collective at Microsoft Research, studying the future of work and the future of unemployment. He lives online at dmgreene.net.

    Back to the essay

  • Travis Alexander – Deregulating Grief: A Review of Dagmawi Woubshet’s “The Calendar of Loss: Race, Sexuality, and Mourning in the Early Era of AIDS”

    Travis Alexander – Deregulating Grief: A Review of Dagmawi Woubshet’s “The Calendar of Loss: Race, Sexuality, and Mourning in the Early Era of AIDS”

    a review of Dagmawi Woubshet’s The Calendar of Loss: Race, Sexuality, and Mourning in the Early Era of AIDS (Baltimore: The Johns Hopkins University Press, 2015)

    by Travis Alexander

    ~

    Not long after someone dies in Ethiopia, the edir—friend, relative, or neighbor—takes to the streets to blow a horn and call out the deceased’s name. Thus begins the process of mourning. After this announcement, the edir pitches a tent in front of the bereaved’s home. Over the next three days, mourners congregate in the tent and grieve. By the seventh day, public grieving has largely subsided. Still more of grief’s urgency has passed by the fortieth and eightieth days, and by the seventh year. Dagmawi Woubshet opens The Calendar of Loss with a lyrical description of this practice, according to which the temporality of the living attunes itself to the claim of the dead. It’s a fitting introduction, as The Calendar casts Woubshet himself as no less edir than scholar. His particular charge is the AIDS dead from the “early years” of the epidemic—1981 to 1996, when highly active antiretroviral treatment became widely available. It was in 1996 that AIDS, according to certain political constituencies, was rendered nonlethal; according to others, it was even cured.

    The ambition of The Calendar, though, exceeds mourning the AIDS dead in the form of either memoir or uncritical memorialization. To be sure, there exists a prolific tradition of just this kind of memoirish text, epitomized by writers like Sarah Schulman. Woubshet looks instead to efforts made by AIDS mourners to simultaneously grieve their dead, process the historical contingency of these deaths, and reckon with the probability that their own deaths were on the horizon. As such, these works are “steeped in a ‘poetics of compounding loss’” (3). This idiosyncratic form of mourning not only registers a novel structure of feeling, but, in “confound[ing] and travers[ing] the limits of mourning,” renders extant literary and cultural elegiac genres inadequate (3). Evincing his interdisciplinary sensibility, Woubshet trains his analysis on genres ranging from obituaries, funerals, graffiti art, photography, film, epistolaries, choreography, and installations to, of course, the poetic elegy itself. The resulting critical work is a dialogue at the intersection of trauma studies, psychoanalysis, queer theory, and African Diaspora studies.

    Woubshet organizes the book’s chapters according to the various ways that queer loss was reinserted into a public discourse that had attempted first to conceal it, and then to efface its embodied specificities. To take only one of his most powerful examples, Woubshet addresses how in its traditional form the obituary had functioned as a disciplinary genre of (hetero-) reproductive futurism. In its foregrounding of birth-family kinship networks, the obituary not only omitted mention of gay partners, but reified the futurism (those, especially children, who live on) that sublimates and mediates such reproductivism. Moreover, these pieces never mentioned AIDS, coyly alluding instead to a “long disease” the deceased had suffered, thereby interring the dead in one last closet. In response to the mainstream news outlets running these posthumously disciplinary remembrances, gay newspapers “arrogated to themselves the authority of the obituary,” emphasizing the cause of death and the queer networks left in the wake of the decedent’s passing, thus both constituting queer counterpublics and protecting the “rights of the queer dead from the normative rites of the living” (59, 61, 67, 84). Woubshet’s ability to demonstrate how works of mourning exhumed the queer body interdicted from the scene of public grief is equally salient in his poetic analysis, centering on figures like Melvin Dixon and Paul Monette and informed by poetry and elegy scholars ranging from Peter Sacks to Max Cavitch to Jonathan Culler. He hastens to remind us that the explicitly fatal homophobia of the 1980s and ’90s has simply been sanitized into the gay liberalism of the present. In its triumphalist projection of gay normalcy and citizenship, gay liberalism (akin to what Jasbir Puar calls homonationalism) demands the erasure of AIDS, of the embodied queer past. “[B]y looking for the dead now, therefore,” The Calendar of Loss “challenge[s] gay liberalism’s present undertaking” (23).

    As such, the reformulation of central mourning genres such as the obituary, Woubshet notes, wasn’t demanded simply by the novel epidemiological and biocultural poetics of AIDS itself. It also responded to the unique forms of silence and erasure under which queer loss was placed in the 1980s and ’90s by civil and governmental institutions alike. It is this “regulation of the ‘sphere of appearances’” (to borrow Judith Butler’s phrase) that the activist group ACT UP (AIDS Coalition to Unleash Power) addressed in its motto “Silence = Death” (16). Woubshet argues that the protocols of silence in this era “disprized” mourners of queer loss, “shroud[ing]” their grief “in silence, shame, and disgrace” (4). The texts and performances collected in Calendar refuse this status, and collectively insist that “mourning = survival.”

    In its recuperation of a form of grief that is indeterminate and inconsolable, The Calendar of Loss is also a referendum on the approach to loss and trauma offered by Freudian psychoanalysis, which sets forth a pat binary between normative grief (mourning) and pathological grief (melancholia). Where the mourner eventually replaces his lost object, the melancholic cannot, and languishes. Amid the exigencies of AIDS, however, this binary falls short insofar as it fails to apprehend the fact that for these mourners, death is not a “singular” event, but part of an ever-expanding series of deaths, including—most likely—the mourner’s own (5). The forms of melancholic grief found in queer communities constituted by AIDS are certainly not “normal” according to Freud, but neither are they pathological, inasmuch as they “achieve cathexis in mourning itself and in its art and activism. However, […] as newly cathected objects, [these] cannot displace loss; on the contrary, they place loss center stage” (18). In worrying the normal/pathological binary, Woubshet delivers a theoretical instrument to those employing psychoanalysis, and a bracing intervention to a queer theory whose conceptualizations of trauma have unproblematically embraced this conspicuously unqueer binarism for too long.

    Drawing on work by Howard Thurman, Woubshet observes that this non-pathological melancholy finds clear historical expression in the genre of slave songs and black spirituals. In the spirituals as well as in black life generally, “[d]eath and dying are not just ‘unusual, untoward events’ or ‘inevitably end-of-lifespan events,’ but instead punctuate [it] routinely and proleptically” (19). This constant anticipation of loss is central to the conceptions of social death elaborated by scholars such as Orlando Patterson. Thus, the paradigm of black mourning (as in the slave songs) and black life generally “accommodates” and illuminates early AIDS mourning, particularly in its “insistence that death is ever present, that death is somehow always impending, and that survivors can confront all this death in the face of shame and stigma in eloquent ways that also often imply a fierce political sensibility and a longing for justice” (5). This comparative work confirms The Calendar of Loss as the first monograph in the humanities at the intersection of queer theory and African Diaspora studies and allows it to spark a true theoretical commerce between those fields (26).

    Already in this book, in fact, interdisciplinarity has sensitized Woubshet to a liability of queer theory over and above its internalization of Freud’s pathologization of melancholy. I’m speaking here of queer theory’s characterization of the child, derived heavily from Lee Edelman’s pathbreaking No Future: Queer Theory and the Death Drive (2004). In this latter account, the figure of the child is not only opposed to the queer subject, but is deployed—insofar as it represents the claims of futurity—to discipline and defer queer pleasure, which represents by contrast not only the present at the expense of the future, but also the very foreclosure of the future itself. In his final chapter, Woubshet details the Sudden Flowers collective, which provides the resources for Ethiopian orphans whose parents were lost to AIDS to create works of art and performances that help mediate their grief. Many of these orphans choose to write letters to their deceased parents in which they chronicle the stages and practices of their mourning, and the sensation of the absence, the lost object(s), they have not (yet) filled or replaced. These children “rely not on idealized figures of innocence and purity to characterize their own experiences, but instead on queer figures of abjection, disparagement, and fearlessness,” thereby “thwart[ing] the naturalized figure of the child as the very embodiment of futurity” (140). The experiences of these children, then, are a living rebuke to the cleanliness of queer theory’s characterization of the child. But Woubshet doesn’t simply gesture to the children of Sudden Flowers to append an asterisk to queer theory’s anti-natalism, to correctively bolster its critical acumen (though he certainly does accomplish this). While joining Edelman in his critique of hegemonic natalism, he breaks away in aiming to indicate what we might well call the white privilege of queer theory—the complacency of its archive, its evident indifference to the particularities of life in the submerged global south in favor of an aestheticized lumping-together of African people with AIDS under the signifier of unalterable tragedy.

    But more witheringly still, The Calendar of Loss reveals the extent to which queer theory becomes a vested defender, an unwitting academic strategist, in the process of universalizing whiteness. Drawing on Robin Bernstein’s Racial Innocence, Woubshet recounts how, unlike the image of the white child that gelled (under the auspices of nineteenth-century Romanticism) to figure innocence, purity, and futurity, the figure of the black child produced discursively at the same time (most canonically in the pickaninny) evoked repulsion, abjection, and social death (142). “Emptied of innocence and futurity,” he speculates, “the black child […] cannot be a marker against which queerness can be negatively defined” (142). Hidden behind the tact of Woubshet’s account is the indictment that positions like Edelman’s not only prefer the white child for its compatibility with a given theoretical imperative, but perpetuate a universalization according to which the white child, unburdened by racial marking, becomes the child as such, which iterates in turn the social death (in its rhetorical concealment) of the black child. This revelation represents just one of the fruits of Woubshet’s inflection of queer theory by the itinerary of African Diaspora studies.

    While we might fairly critique Woubshet’s failure to address the role of NGOs (like those that care for Ethiopian orphans) as the “mendicant orders” (cf. Hardt and Negri) of the very same biopolitical governmentality that allowed AIDS to become a pandemic in the first place, this oversight seems the exception rather than the rule. The Calendar’s more concerning oversight is instead its unintentional reification of vitalist, optimistic, and citizenship-oriented rubrics of affect in its moments of “recuperation.” Consider, for example, Woubshet’s description of the children in the Sudden Flowers art collective who become “political figure[s], publicly taking on one of the most urgent issues of our time, [while simultaneously] departing from the norm” (144). These children are revealed in turn as “powerful agents, as subjects capable of reflection on and articulation of their experiences” (140). Here these children become deserving of praise insofar as they embrace an active, vigorous relationship with their circumstances. Elsewhere Woubshet will attribute the same valorizing characteristics to the gay American subject of his book too. AIDS mourners “across the Atlantic […] embodied AIDS openly and fearlessly” (5). Here “openly and fearlessly” carries the same sense of vigor and interactivity he attributed to the “powerful,” “agent[ial]” children of Ethiopia.

    Not only do these forms of affect coincide neatly with the behavioral strictures demanded by a late liberalism that exercises itself in intellectual and emotional economies, but they also threaten to undo the depathologization of melancholy executed above. That is to say, where Woubshet had previously claimed to find melancholy non-pathological insofar as it generates a new cathexis (attention to compounding loss), here he seems to smuggle in—through “articulation of […] experiences”—the kind of object-replacement or work-completion characteristic of normative mourning. Indeed, he says so himself in expressing his desire to show that nonnormative mourning “can be ‘productive rather than pathological, abundant rather than lacking, social rather than solipsistic, militant rather than reactionary’” (22). Here Woubshet no longer desires simply a neutral opposition to the pathological (that is, the nonnormative), but—in the term “productive”—casts his lot with a term derived from the cathectic economy of capital. In turn “social” evokes liberal citizenship and pluralism, while “militant” continues in the valorization of vigorous and positive affect suggested earlier by “powerful,” “agent[ial],” “open,” and “fearless.” Inasmuch as “militancy,” “articulation,” “social[ity],” and “productiv[ity]” address themselves to futurity, they reiterate the natalism that Woubshet, in agreement with Edelman, deemed unsalvageable.

    Indeed, Edelman himself is perhaps most helpful in diagnosing the forms of complicity I’ve attributed to Woubshet. In a 2006 piece, he cautions us against the trap of “affirm[ing] an angry, uncivil ‘politics of negativity’” (“The Antisocial Thesis in Queer Theory” 821). Insofar as such negativity is “affirmed,” it becomes “little more than Oedipal kitsch,” performing the sentimental and “fundamentalist […] attachment to ‘sense, mastery, and meaning,’” and thereby striking “the pose of negativity while evacuating its force” (822). True negativity, meanwhile, refuses what Adorno calls the “all subjugating identity principle” (Negative Dialectics 320). In his attempt to depathologize queer melancholy, Woubshet pays homage to negativity, spurning the identification between melancholy and pathology. But in framing that melancholy as “militant,” “productive,” “social,” “articulate,” “open,” “fearless,” and certainly “agent[ial],” his negativity is outed as an identity principle in drag. This complicity also lends support to Jasbir Puar’s recent critique of affect theory (“Prognosis Time: Toward a Geopolitics of Affect, Debility, and Capacity”). For her, affect theory, in attempting to conceptualize a register of energies and forces uncapturable by a form of governmentality dependent on the capitalization of intellectual and emotional labor, unwittingly finds itself attributing to affect a set of optimistic, buoyant characteristics that are themselves of a piece with the imperatives of productivity and ablement central to late capital in the first place (“Prognosis Time”). While Woubshet’s methodology has no stake in affect, the optimism inherent in his characterizations of melancholic grief and its creative expression—even his exclusionary attention to only those who have taken it upon themselves to create—instantiates the ideological double-bind of Puar’s affect theorists.

    Of course, a productivity that is cyclical and endlessly iterative would be recuperable where one that is teleological would not. And his investment in the trope of the calendar, which evokes a form of articulation that repeats—despite its “militan[cy]”—in stasis, suggests that this is the version of productivity Woubshet has in mind. So his flirtation with productivity is potentially aesthetic rather than ideological. Whatever the case may be, The Calendar of Loss remains a rich and urgently needed contribution. When the legacy of AIDS is being submerged, not only by the rhetoric of gay liberalism, but by a generation of queer theorists who have turned their attentions elsewhere, efforts like Woubshet’s to “speak again” its history and “reanimate lives that demand remembering” cannot go unnoticed (xi).


    _____

    Travis Alexander is a Mellon Graduate Fellow at The University of North Carolina, Chapel Hill. Though he is broadly interested in Post-45 literature and visual art, his specific interests cluster around portrayals of the HIV/AIDS epidemic in film, literature, television, and cultural theory of the 1980s and 1990s. Website: http://englishcomplit.unc.edu/people/travis-alexander.

    Back to the essay
    _____

    Works Cited

    • Adorno, Theodor. Negative Dialectics. Trans. E.B. Ashton. New York: Continuum, 1994.
    • Edelman, Lee, with Robert L. Caserio, Judith Halberstam, José Esteban Muñoz, and Tim Dean. “The Antisocial Thesis in Queer Theory.” PMLA 121.3 (2006): 819-828.
    • Puar, Jasbir. “Prognosis Time: Toward a Geopolitics of Affect, Debility, and Capacity.” Women & Performance: A Journal of Feminist Theory 19.2 (2009): 161-172.
    • Woubshet, Dagmawi. The Calendar of Loss: Race, Sexuality, and Mourning in the Early Era of AIDS. Baltimore: The Johns Hopkins University Press, 2015.
  • Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release

    Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release

    By Audrey Watters

    ~

    This talk was delivered at Virginia Commonwealth University today as part of a seminar co-sponsored by the Departments of English and Sociology and the Media, Art, and Text PhD Program. The slides are also available here.

    Thank you very much for inviting me here to speak today. I’m particularly pleased to be speaking to those from Sociology and those from the English and those from the Media, Art, and Text departments, and I hope my talk can walk the line between and among disciplines and methods – or piss everyone off in equal measure. Either way.

    This is the last public talk I’ll deliver in 2016, and I confess I am relieved (I am exhausted!) as well as honored to be here. But when I finish this talk, my work for the year isn’t done. No rest for the wicked – ever, but particularly in the freelance economy.

    As I have done for the past six years, I will spend the rest of November and December publishing my review of what I deem the “Top Ed-Tech Trends” of the year. It’s an intense research project that usually tops out at about 75,000 words, written over the course of four to six weeks. I pick ten trends and themes in order to look closely at the recent past, the near-term history of education technology. Because of the amount of information that is published about ed-tech – the amount of information, its irrelevance, its incoherence, its lack of context – it can be quite challenging to keep up with what is really happening in ed-tech. And just as importantly, what is not happening.

    So that’s what I try to do. And I’ll boast right here – no shame in that – no one else does as in-depth or thorough a job as I do, certainly no one who is entirely independent from venture capital, corporate or institutional backing, or philanthropic funding. (Of course, if you look for those education technology writers who are independent from venture capital, corporate or institutional backing, or philanthropic funding, there is pretty much only me.)

    The stories that I write about the “Top Ed-Tech Trends” are the antithesis of most articles you’ll see about education technology that invoke “top” and “trends.” For me, still framing my work that way – “top trends” – is a purposeful rhetorical move to shed light, to subvert, to offer a sly commentary of sorts on the shallowness of what passes as journalism, criticism, analysis. I’m not interested in making quickly thrown-together lists and bullet points. I’m not interested in publishing clickbait. I am interested nevertheless in the stories – shallow or sweeping – that we tell and spread about technology and education technology, about the future of education technology, about our technological future.

    Let me be clear, I am not a futurist – even though I’m often described as “ed-tech’s Cassandra.” The tagline of my website is “the history of the future of education,” and I’m much more interested in chronicling the predictions that others make, have made about the future of education than I am writing predictions of my own.

    One of my favorites: “Books will soon be obsolete in schools,” Thomas Edison said in 1913. Any day now. Any day now.

    Here are a couple of more recent predictions:

    “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.” – that’s Sebastian Thrun, best known perhaps for his work at Google on the self-driving car and as a co-founder of the MOOC (massive open online course) startup Udacity. The quotation is from 2012.

    And from 2013, by Harvard Business School professor, author of the book The Innovator’s Dilemma, and popularizer of the phrase “disruptive innovation,” Clayton Christensen: “In fifteen years from now, half of US universities may be in bankruptcy. In the end I’m excited to see that happen. So pray for Harvard Business School if you wouldn’t mind.”

    Pray for Harvard Business School. No. I don’t think so.

    Both of these predictions are fantasy. Nightmarish, yes. But fantasy. Fantasy about a future of education. It’s a powerful story, but not a prediction made based on data or modeling or quantitative research into the growing (or shrinking) higher education sector. Indeed, according to the latest statistics from the Department of Education – now granted, this is from the 2012–2013 academic year – there are 4,726 degree-granting postsecondary institutions in the United States – a 46% increase since 1980. There are, according to another source (non-governmental and less reliable, I think), over 25,000 universities in the world. This number is increasing year-over-year as well. So to predict that the vast vast majority of these schools (save Harvard, of course) will go away in the next decade or so or that they’ll be bankrupt or replaced by Silicon Valley’s version of online training is simply wishful thinking – dangerous wishful thinking from two prominent figures who will benefit greatly if this particular fantasy comes true (and not just because they’ll get to claim that they predicted this future).

    Here’s my “take home” point: if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

    Fantasy. Fortune-telling. Or, as capitalism prefers to call it, “market research.”

    “Market research” involves fantastic stories of future markets. These predictions are often accompanied by a press release touting the size that this or that market will soon grow to – how many billions of dollars schools will spend on computers by 2020, how many billions of dollars of virtual reality gear schools will buy by 2025, how many billions of dollars schools will spend on robot tutors by 2030, how many billions of dollars companies will spend on online training by 2035, how big the coding bootcamp market will be by 2040, and so on. The markets, according to the press releases, are always growing. Fantasy.

    In 2011, the analyst firm Gartner predicted that annual tablet shipments would exceed 300 million units by 2015. Half of those, the firm said, would be iPads. IDC estimates that the total number of shipments in 2015 was actually around 207 million units. Apple sold just 50 million iPads. That’s not even the best worst Gartner prediction. In October of 2006, Gartner said that Apple’s “best bet for long-term success is to quit the hardware business and license the Mac to Dell.” Less than three months later, Apple introduced the iPhone. The very next day, Apple shares hit $97.80, an all-time high for the company. By 2012 – yes, thanks to its hardware business – Apple’s stock had risen to the point that the company was worth a record-breaking $624 billion.

    But somehow, folks – including many, many in education and education technology – still pay attention to Gartner. They still pay Gartner a lot of money for consulting and forecasting services.

    People find comfort in these predictions, in these fantasies. Why?

    Gartner is perhaps best known for its “Hype Cycle,” a proprietary graphic presentation that claims to show how emerging technologies will be adopted.

    According to Gartner, technologies go through five stages: first, there is a “technology trigger.” As the new technology emerges, a lot of attention is paid to it in the press. Eventually it reaches the second stage: the “peak of inflated expectations.” So many promises have been made about this technological breakthrough. Then, the third stage: the “trough of disillusionment.” Interest wanes. Experiments fail. Promises are broken. As the technology matures, the hype picks up again, more slowly – this is the “slope of enlightenment.” Eventually the new technology becomes mainstream – the “plateau of productivity.”

    It’s not that hard to identify significant problems with the Hype Cycle, not least of which is that it’s not a cycle. It’s a curve. It’s not a particularly scientific model. It demands that technologies always move forward along it.

    Gartner says its methodology is proprietary – which is code for “hidden from scrutiny.” Gartner says, rather vaguely, that it relies on scenarios and surveys and pattern recognition to place technologies on the line. But most of the time when Gartner uses the word “methodology,” it is trying to signify “science,” and what it really means is “expensive reports you should buy to help you make better business decisions.”

    Can it really help you make better business decisions? It’s just a curve with some technologies plotted along it. The Hype Cycle doesn’t help explain why technologies move from one stage to another. It doesn’t account for technological precursors – new technologies rarely appear out of nowhere – or political or social changes that might prompt or preclude adoption. And at the end it is simply too optimistic, unreasonably so, I’d argue. No matter how dumb or useless a new technology is, according to the Hype Cycle at least, it will eventually become widely adopted. Where would you plot the Segway, for example? (In 2008, ever hopeful, Gartner insisted that “This thing certainly isn’t dead and maybe it will yet blossom.” Maybe it will, Gartner. Maybe it will.)

    And maybe this gets to the heart as to why I’m not a futurist. I don’t share this belief in an increasingly technological future; I don’t believe that more technology means the world gets “more better.” I don’t believe that more technology means that education gets “more better.”

    Every year since 2004, the New Media Consortium, a non-profit organization that advocates for new media and new technologies in education, has issued its own forecasting report, the Horizon Report, naming a handful of technologies that, as the name suggests, it contends are “on the horizon.”

    Unlike Gartner, the New Media Consortium is fairly transparent about how this process works. The organization invites various “experts” to participate in the advisory board that, throughout the course of each year, works on assembling its list of emerging technologies. The process relies on the Delphi method, whittling down a long list of trends and technologies by a process of ranking and voting until six key trends, six emerging technologies remain.

    Disclosure/disclaimer: I am a folklorist by training. The last time I took a class on “methods” was, like, 1998. And admittedly I never learned about the Delphi method – what the New Media Consortium uses for this research project – until I became a scholar of education technology looking into the Horizon Report. As a folklorist, of course, I did catch the reference to the Oracle of Delphi.

    Like so much of computer technology, the roots of the Delphi method are in the military, developed during the Cold War to forecast technological developments that the military might use and that the military might have to respond to. The military wanted better predictive capabilities. But – and here’s the catch – it wanted to identify technology trends without being caught up in theory. It wanted to identify technology trends without developing models. How do you do that? You gather experts. You get those experts to consensus.

    So here is the consensus from the past twelve years of the Horizon Report for higher education. These are the technologies it has identified that are between one and five years from mainstream adoption:

    It’s pretty easy, as with the Gartner Hype Cycle, to look at these predictions and note that they are almost all wrong in some way or another.

    Some are wrong because, say, the timeline is a bit off. The Horizon Report said in 2010 that “open content” was less than a year away from widespread adoption. I think we’re still inching towards that goal – admittedly “open textbooks” have seen a big push at the federal and at some state levels in the last year or so.

    Some of these predictions are just plain wrong. Virtual worlds in 2007, for example.

    And some are wrong because, to borrow a phrase from the theoretical physicist Wolfgang Pauli, they’re “not even wrong.” Take “collaborative learning,” for example, which this year’s K–12 report posits as a mid-term trend. Like, how would you argue against “collaborative learning” as occurring – now or some day – in classrooms? As a prediction about the future, it is not even wrong.

    But wrong or right – that’s not really the problem. Or rather, it’s not the only problem even if it is the easiest critique to make. I’m not terribly concerned about the accuracy of the predictions about the future of education technology that the Horizon Report has made over the last decade. But I do wonder how these stories influence decision-making across campuses.

    What might these predictions – this history of the future – tell us about the wishful thinking surrounding education technology and about the direction that the people the New Media Consortium views as “experts” want the future to take? What can we learn about the future by looking at the history of our imaginings of education’s future? What role does powerful ed-tech storytelling (also known as marketing) play in shaping that future? Because remember: to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.

    It’s both convenient and troubling, then, that these forward-looking reports act as though they have no history of their own; they purposefully minimize or erase their own past. Each year – and I think this is what irks me most – the NMC fails to look back at what it had predicted just the year before. It never revisits older predictions. It never mentions that they even exist. Gartner too removes technologies from the Hype Cycle each year with no explanation for what happened, no explanation as to why trends suddenly appear and disappear and reappear. These reports only look forward, with no history to ground their direction in.

    I understand why these sorts of reports exist, I do. I recognize that they are rhetorically useful to certain people in certain positions making certain claims about “what to do” in the future. You can write in a proposal that, “According to Gartner… blah blah blah.” Or “The Horizon Reports indicates that this is one of the most important trends in coming years, and that is why we need to commit significant resources – money and staff – to this initiative.” But then, let’s be honest, these reports aren’t about forecasting a future. They’re about justifying expenditures.

    “The best way to predict the future is to invent it,” computer scientist Alan Kay once famously said. I’d wager that the easiest way is just to make stuff up and issue a press release. I mean, really. You don’t even need the pretense of a methodology. Nobody is going to remember what you predicted. Nobody is going to remember if your prediction was right or wrong. Nobody – certainly not the technology press, which is often painfully unaware of any history, near-term or long ago – is going to call you to task. This is particularly true if you make your prediction vague – like “within our lifetime” – or set your target date just far enough in the future – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Let’s consider: is there something about the field of computer science in particular – and its ideological underpinnings – that makes it more prone to encourage, embrace, espouse these sorts of predictions? Is there something about Americans’ faith in science and technology, about our belief in technological progress as a signal of socio-economic or political progress, that makes us more susceptible to take these predictions at face value? Is there something about our fears and uncertainties – and not just now, days before this Presidential Election where we are obsessed with polls, refreshing Nate Silver’s website obsessively – that makes us prone to seek comfort, reassurance, certainty from those who can claim that they know what the future will hold?

    “Software is eating the world,” investor Marc Andreessen pronounced in a Wall Street Journal op-ed in 2011. “Over the next 10 years,” he wrote, “I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.” “Buy stock in technology companies” was really the underlying message of Andreessen’s op-ed; this isn’t another tech bubble, he wanted to reassure investors. But many in Silicon Valley have interpreted this pronouncement – “software is eating the world” – as an affirmation and an inevitability. I hear it repeated all the time – “software is eating the world” – as though, once again, repeating things makes them true or makes them profound.

    If we believe that, indeed, “software is eating the world,” that we are living in a moment of extraordinary technological change, that we must – according to Gartner or the Horizon Report – be ever-vigilant about emerging technologies, that these technologies are contributing to uncertainty, to disruption, then it seems likely that we will demand a change in turn to our educational institutions (to lots of institutions, but let’s just focus on education). This is why this sort of forecasting is so important for us to scrutinize – to do so quantitatively and qualitatively, to look at methods and at theory, to ask who’s telling the story and who’s spreading the story, to listen for counter-narratives.

    This technological change, according to some of the most popular stories, is happening faster than ever before. It is creating an unprecedented explosion in the production of information. New information technologies, so we’re told, must therefore change how we learn – change what we need to know, how we know, how we create and share knowledge. Because of the pace of change and the scale of change and the locus of change (that is, “Silicon Valley” not “The Ivory Tower”) – again, so we’re told – our institutions, our public institutions can no longer keep up. These institutions will soon be outmoded, irrelevant. Again – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    These forecasting reports, these predictions about the future make themselves necessary through this powerful refrain, insisting that technological change is creating so much uncertainty that decision-makers need to be ever vigilant, ever attentive to new products.

    As Neil Postman and others have cautioned us, technologies tend to become mythic – unassailable, God-given, natural, irrefutable, absolute. So it is predicted. So it is written. Techno-scripture, to which we hand over a certain level of control – to the technologies themselves, sure, but just as importantly to the industries and the ideologies behind them. Take, for example, the founding editor of the technology trade magazine Wired, Kevin Kelly. His 2010 book was called What Technology Wants, as though technology is a living being with desires and drives; the title of his 2016 book, The Inevitable. We humans, in this framework, have no choice. The future – a certain flavor of technological future – is pre-ordained. Inevitable.

    I’ll repeat: I am not a futurist. I don’t make predictions. But I can look at the past and at the present in order to dissect stories about the future.

    So is the pace of technological change accelerating? Is society adopting technologies faster than it’s ever done before? Perhaps it feels like it. It certainly makes for a good headline, a good stump speech, a good keynote, a good marketing claim, a good myth. But the claim starts to fall apart under scrutiny.

    This graph comes from an article in the online publication Vox that includes a couple of those darling made-to-go-viral videos of young children using “old” technologies like rotary phones and portable cassette players – highly clickable, highly sharable stuff. The visual argument in the graph: the number of years it takes for one quarter of the US population to adopt a new technology has been shrinking with each new innovation.

    But the data is flawed. Some of the dates given for these inventions are questionable at best, if not outright inaccurate. If nothing else, it’s not so easy to pinpoint the exact moment, the exact year when a new technology came into being. There often are competing claims as to who invented a technology and when, for example, and there are early prototypes that may or may not “count.” James Clerk Maxwell did publish A Treatise on Electricity and Magnetism in 1873. Alexander Graham Bell made his famous telephone call to his assistant in 1876. Guglielmo Marconi did file his patent for radio in 1897. John Logie Baird demonstrated a working television system in 1926. The MITS Altair 8800, an early personal computer that came as a kit you had to assemble, was released in 1975. But Martin Cooper, a Motorola exec, made the first mobile telephone call in 1973, not 1983. And the Internet? The first ARPANET link was established between UCLA and the Stanford Research Institute in 1969. The Internet was not invented in 1991.

    So we can reorganize the bar graph. But it’s still got problems.

    The Internet did become more privatized, more commercialized around that date – 1991 – and thanks to companies like AOL, a version of it became more accessible to more people. But if you’re looking at when technologies became accessible to people, you can’t use 1873 as your date for electricity, you can’t use 1876 as your year for the telephone, and you can’t use 1926 as your year for the television. It took years for the infrastructure of electricity and telephony to be built, for access to become widespread; and subsequent technologies, let’s remember, have simply piggy-backed on these existing networks. Our Internet service providers today are likely telephone and TV companies; our houses are already wired for new WiFi-enabled products and predictions.

    Economic historians who are interested in these sorts of comparisons of technologies and their effects typically set the threshold at 50% – that is, how long does it take after a technology is commercialized (not simply “invented”) for half the population to adopt it. This way, you’re not only looking at the economic behaviors of the wealthy, the early-adopters, the city-dwellers, and so on (though to be clear, you are still looking at a particular demographic – the privileged half).

    And that changes the graph again:

    How many years do you think it’ll be before half of US households have a smart watch? A drone? A 3D printer? Virtual reality goggles? A self-driving car? Will they? Will it be fewer than nine years? I mean, it would have to be if, indeed, “technology” is speeding up and we are adopting new technologies faster than ever before.

    Some of us might adopt technology products quickly, to be sure. Some of us might eagerly buy every new Apple gadget that’s released. But we can’t claim that the pace of technological change is speeding up just because we personally go out and buy a new iPhone every time Apple tells us the old model is obsolete. Removing the headphone jack from the latest iPhone does not mean “technology changing faster than ever,” nor does showing how headphones have changed since the 1970s. None of this is really a reflection of the pace of change; it’s a reflection of our disposable income and an ideology of obsolescence.

    Some economic historians like Robert J. Gordon actually contend that we’re not in a period of great technological innovation at all; instead, we find ourselves in a period of technological stagnation. The changes brought about by the development of information technologies in the last 40 years or so pale in comparison, Gordon argues (and this is from his recent book The Rise and Fall of American Growth: The US Standard of Living Since the Civil War), to those “great inventions” that powered massive economic growth and tremendous social change in the period from 1870 to 1970 – namely electricity, sanitation, chemicals and pharmaceuticals, the internal combustion engine, and mass communication. But that doesn’t jibe with “software is eating the world,” does it?

    Let’s return briefly to those Horizon Report predictions again. They certainly reflect this belief that technology must be speeding up. Every year, there’s something new. There has to be. That’s the purpose of the report. The horizon is always “out there,” off in the distance.

    But if you squint, you can see each year’s report also reflects a decided lack of technological change. Every year, something is repeated – perhaps rephrased. And look at the predictions about mobile computing:

    • 2006 – the phones in their pockets
    • 2007 – the phones in their pockets
    • 2008 – oh crap, we don’t have enough bandwidth for the phones in their pockets
    • 2009 – the phones in their pockets
    • 2010 – the phones in their pockets
    • 2011 – the phones in their pockets
    • 2012 – the phones too big for their pockets
    • 2013 – the apps on the phones too big for their pockets
    • 2015 – the phones in their pockets
    • 2016 – the phones in their pockets

    This hardly makes the case for technological speeding up, for technology changing faster than it’s ever changed before. But that’s the story that people tell nevertheless. Why?

    I pay attention to this story, as someone who studies education and education technology, because I think these sorts of predictions, these assessments about the present and the future, frequently serve to define, disrupt, and destabilize our institutions. This is particularly pertinent to our schools, which are already caught between a boundedness to the past – replicating scholarship and cultural capital, for example – and the demand that they bend to the future – preparing students for civic, economic, and social relations yet to be determined.

    But I also pay attention to these sorts of stories because there’s that part of me that is horrified at the stuff – predictions – that people pass off as true or as inevitable.

    “65% of today’s students will be employed in jobs that don’t exist yet.” I hear this statistic cited all the time. And it’s important, rhetorically, that it’s a statistic – that gives the appearance of being scientific. Why 65%? Why not 72% or 53%? How could we even know such a thing? Some people cite this as a figure from the Department of Labor. It is not. I can’t find its origin – but it must be true: a futurist said it in a keynote, and the video was posted to the Internet.

    The statistic is particularly amusing when quoted alongside one of the many predictions we’ve been inundated with lately about the coming automation of work. In 2014, The Economist asserted that “nearly half of American jobs could be automated in a decade or two.” “Before the end of this century,” Wired Magazine’s Kevin Kelly announced earlier this year, “70 percent of today’s occupations will be replaced by automation.”

    Therefore the task for schools – and I hope you can start to see where these different predictions start to converge – is to prepare students for a highly technological future, a future that has been almost entirely severed from the systems and processes and practices and institutions of the past. And if schools cannot conform to this particular future, then “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Now, I don’t believe that there’s anything inevitable about the future. I don’t believe that Moore’s Law – that the number of transistors on an integrated circuit doubles every two years and therefore computers are always exponentially smaller and faster – is actually a law. I don’t believe that robots will take, let alone need take, all our jobs. I don’t believe that YouTube has rendered school irrevocably out-of-date. I don’t believe that technologies are changing so quickly that we should hand over our institutions to entrepreneurs, privatize our public sphere for techno-plutocrats.

    I don’t believe that we should cheer Elon Musk’s plans to abandon this planet and colonize Mars – he’s predicted he’ll do so by 2026. I believe we stay and we fight. I believe we need to recognize this as an ego-driven escapist evangelism.

    I believe we need to recognize that predicting the future is a form of evangelism as well. Sure, it gets couched in terms of science, but it is underwritten by global capitalism. It’s a story – a story that then takes on these mythic proportions, insisting that it is unassailable, unverifiable, but true.

    The best way to invent the future is to issue a press release. The best way to resist this future is to recognize that, once you poke at the methodology and the ideology that underpins it, a press release is all that it is.

    A special thanks to Tressie McMillan Cottom and David Golumbia for organizing this talk. And to Mike Caulfield for always helping me hash out these ideas.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely read Hack Education blog, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.


  • Vassilis Lambropoulos – A Review of Aamir Mufti’s “Forget English!”

    Vassilis Lambropoulos – A Review of Aamir Mufti’s “Forget English!”

    Aamir R. Mufti:  Forget English!  Orientalisms and World Literatures (Harvard University Press, 2016)

    Reviewed by Vassilis Lambropoulos

    This essay was peer-reviewed by the editorial board of b2o: an online journal

    Aamir Mufti’s Forget English! first exposes the regulatory operations of presumably borderless world literature.  Second, it questions the cultural control of presumably egalitarian global English.  Third, it traces the Orientalist administration of presumably universal colonial knowledge.  Readers may agree with all this despite the repeated warnings that these three systems remain closely implicated not only in the objects of study but also in epistemological critique.  Mufti’s most radical proposition comes last:  The basis of the modern national and global cultural field is the institution of literature, that is, the disciplinary literary regimen that includes the askeses of composition, the exercises of pleasure, the practices of interpretation, and the technologies of education.  Mufti’s critique of critique itself as an aesthetic ethics ought to be disturbing.  In what follows, I will repurpose his project, reshuffling its case studies, to foreground its ultimate target, literary ideology, namely, the constitutive antinomies of interpretive freedom, the self-imposed limits and controls of aesthetic understanding.  I will do that by narrating the institutional story of “literature” that underlies his anatomy of world literature.

    Mufti proposes that today, as a popular project of translation, circulation, criticism, and scholarship, “world literature” turns an opaque and unequal process of violent appropriation into a supposedly transparent and equal one of free communication.  Its inviting name occludes “the ways in which contemporary critical thinking unwittingly replicates logics of a longer provenance in the colonial and postcolonial eras” (248).  This is particularly evident in multicultural celebrations of the Global South.  Mufti warns against “the triumphalist ‘We are the World’ tone so clearly discernible in the self-staging of world literature in our times.  In many ways, the rubric ‘postcolonial literature’ as used in the Global North now serves as a means of domesticating those radical energies – and not just linguistic or cultural differences – [for example, the now defunct “Bandung” internationalism] into the space of (bourgeois) world literature as varieties of local practice – as Indian, African, or Middle Eastern literary practices, for instance” (92).  Instead of liberal appeals to “diversity” and its token-like selections, what is needed is “a concept of world literature (and practices of teaching it) that work to reveal the ways in which diversity itself (national, religious, civilizational, continental) is a colonial and Orientalist problematic, one that emerges precisely on the plane of equivalence that is literature” (250).  Sensitivity to diversity and respect for difference may express noble sentiments but do nothing to question the values dominating the literary and academic market.

    Studies by scholars of world literature often “are salutary in having emphasized inequality as the primary structural principle of world literary space rather than difference, which has been the dominant preoccupation in the discussion of world literature since the late eighteenth century, including in Goethe’s late-in-life elaboration of the idea of Weltliteratur.  But they give us no account whatsoever of the exact nature of these forms of inequality and the sociocultural logics through which they have historically been instituted, logics of the institution of inequality that incorporate notions and practices of ‘difference’ and proceed precisely through them” (33).  Whether they are describing a “world system” or a “republic of letters,” these scholars fail “to understand the mutual imbrication of inequality and difference” (33) in their operations, which is as shortsighted as studying autopoiesis in Niklas Luhmann but not Cornelius Castoriadis.  Mufti does not elaborate a new model of doing world literature.  Instead, he examines how this comprehensive approach to culture has been devised and institutionalized for some two hundred fifty years, starting with the observation that its current resurgence is “a post-1989 development, which has appeared against the background of the larger neoliberal attempt to monopolize all possibilities of the international into the global life of capital.  This mode of appearance of the literatures of the Global South in the literary sphere of the North is thus linked to the disappearance of those varieties of internationalism that had sought in various ways to bypass the circuits of interaction, transmission, and exchange of the emergent global bourgeois order in the postwar and early postcolonial decades in the interest of the decolonizing societies of the South” (91).  Mufti seeks “to unmask and to make available for criticism and analysis” (20) world literature in the twenty-first century as the main “field force” (199) of the project to subsume all centrifugal possibilities for an international literature under the monopoly of global cultural capital.  He treats it simultaneously as a “concept,” a “field of study,” and a set of “practices and institutional frameworks” (10), and uses a genealogical approach for a “critical-historical examination of a certain constellation of ideas and practices in its accretions and transformations over time” (19-20).  In what follows I largely pass over the numerous and wonderful cases in order to focus on the larger historical trajectory produced by this approach.

    The genealogy of world literature begins with the role that “literature as national institution” (3) played “in the emergence of the hierarchies that structure relations between societies in the modern world” (97).  An international literary space first formed in Europe as a structure of rivalries among the traditions (58) emerging in the “intra-European ‘competitive’ vernacularization,” which was later followed by its “colonial absorption and transformation” (76).  The standardization of the vernaculars was a central part of “a project of ethnonational or civilizational nationalism in linguistically diverse and multicultural societies” (148).  This made possible the formation of “literature” as a separate domain of writing and reading out of diverse guild, church, local, and other traditions.  “The nationalization of languages over the past two centuries all over the world . . . transformed former extensive and dispersed cultures of writing . . . into narrowly conceived ethnonational spheres” (146).  Through an extensive philological and interpretive operation “often-overlapping bodies of writing came to acquire, through a process of historicization, distinct personalities as ‘literature’ along national lines” (97).  This is how literature achieves centrality in all constellations of national arts.  “The (now universal) category of literature itself . . . marks this process of assimilation of diverse cultures of writing” (80).  New practices of reading claim existing textual regimes for new purposes and milieus while new elites are also trained to curate them.  “In this process of the acquisition of literary history, the textual corpus acquires, first of all, the attributes of literariness.  That is to say, . . . it enters the world literary system as one among many other literatures, being subject henceforth to the requirements and measures of literariness, replacing the models and modes of evaluation internal to the textual corpus itself.  Furthermore, in the moment of its historicization, it undergoes a shift of orientation within the larger social formation, being reinscribed within a discursive system for the attribution of a literature to a language, understood as the unique possession and mode of expression of a people” (141).

    A foundational act of historicization produced for the first time the terms of a distinct and independent literary history, anchoring a regional tradition in a national logic (143).  When a premodern corpus of undifferentiated writing acquired such a prestigious history, its newly self-regulating “works” entered literary modernity (38-9).   The admission of a corpus “into world literary space as a distinct literary tradition has characteristically taken place since the nineteenth century through its acquisition of a narrative of (‘national’) historical development” (131).  A literary history proper legitimized the literary modernity of a writing tradition by granting it national authority.

    Thus the word “literature” in the term world literature “marks the plane of equivalence and compatibility between historically distinct and particular practices of writing” (240).  The word “world” in “world literature” is a world of nations, the new regimes of sovereignty.  “’World’ and ‘nation’ are in a determinate relationship of mutual reinforcement here, rather than simply one of contradiction or negation” (77).  When world literature is invoked, it is important to keep in mind “the forms of nationalization of language, literature, and culture installed . . . precisely in and through the world-historical process that is the emergence of world literature” (130).  Literature and nation are mutually authenticating and reinforcing:  They confirm the antiquity and autonomy of one another. “The concept and practices of world literature, far from representing the superseding of national forms of identification of language, literature, and culture, emerged for the first time precisely along the forms of . . . nation-thinking” (97).  In addition, world literature played an important role in the orientation of national literatures toward the global space to which every nation could make its own “distinct national contribution” (112).  This role ought to be placed in an even broader global context since it is important to stress that “the emergence and modes of functioning of world literature, as the space of interaction between and articulation of the ‘national’ or regional literatures, are elements of the much-wider historical process of the emergence of the modern, bourgeois state and its dissemination worldwide, under colonial and semicolonial conditions, as the normative state-form of the modern era” (98).  Literature strengthened the claim of the national state against other state forms by giving voice to its organic character.

    It is in this broader context that Mufti introduces world literature as “the (bourgeois) understanding and experience of the world as an assemblage of ‘literary’ or expressive traditions, whose very ground of possibility was the Orientalist knowledge revolution” (90).  Tracing “the historical dialectic of Orientalism and/as world literature” (38) within literary studies since the late 18th century (99), he highlights the production of entirely new objects of study and insists on the central role “that philological Orientalism played in producing and establishing a method and a system for classifying and evaluating diverse forms of textuality, now all processed and codified uniformly as literature” (80).  If national literature was from the beginning world literature too, this was based on Orientalist assumptions.  Mufti’s strong thesis is that “a genealogy of world literature . . . leads to the classical phase of modern Orientalism in the late eighteenth and early nineteenth centuries, an enormous assemblage of projects and practices that was the ground for the emergence of the concept of world literature as for the literary and scholarly practices it originally referenced” (19).  The project of philological Orientalism, from the microscopic level of the text to the macroscopic one of the library, produces an entire hermeneutics, which “may be understood as a set of processes for the reorganization of language, literature, and culture on a planetary scale that effected the assimilation of heterogeneous and dispersed bodies of writing onto the plane of equivalence and evaluability that is (world) literature, fundamentally transforming in the process their internal distribution and coherence, their modes of authorization, and their relationship to the larger social order and social imaginaries in their place of origin” (145).  In a nutshell, this is how the colonial Orient was collected, archived, studied, and administered, and the regimes of the truth of the empire established and imposed.

    Orientalism should be understood not only as the apparatus that produced the Orient as a domain of interpretation and administration but additionally as “the cultural system that for the first time articulated a concept of the world as an assemblage of ‘nations’ with distinct expressive traditions, above all ‘literary’ ones.  Orientalism thus played a crucial role in the emergence of the cultural logics of the modern bourgeois world, an element of European self-making, first of all” (35).  In this respect, as in others, the author acknowledges his predecessor, Edward Said, whose  “entire effort in Orientalism was (at one level) to argue for the centrality of Orientalism, as cultural logic and enterprise, to the emergence of modern European culture, to Europe’s self-making” (75).  Mufti illustrates his argument with a fascinating example, proposing that the “lyricization of poetry in the West,” that is, the “gradual expansion of . . . ‘lyric’ norms of expression . . .  to encompass” all practices of reading and writing poetry, is “an intercultural and worldwide process” that can be traced back to the “Orientalist ‘discovery’ of the ‘ancient’ poetic traditions of the ‘Eastern nations’” (71).   By considering the Orient/Occident interplay, a genealogy of the early concepts and practices of world literature shows how a “’lyric’ sensibility emerged in Europe at the threshold of modernity in the encounter with ‘Oriental’ verse and, having taken over the universe of poetic expression in the West, became a benchmark and a test for ‘Oriental’ writing traditions themselves, erasing in the process all memory of its intercultural origins” (74).

    Together, philological Orientalism and philosophical historicism (to adopt a contrast of Erich Auerbach’s: Herder’s “Nordic” national historicism rather than Vico’s “Latinate civilizatory” kind) made the new concept of world literature possible.  The combined Orientalist and historicist thinking legitimized both the different manners of being human and “the same manner of being different” (77).  In addition to its contribution to European self-making, Orientalism contributed to world making as well and deserves to be studied “as an articulated and effective imperial system of cultural mapping, which produced for the first time a conception of the world as an assemblage of civilizational entities, each in possession of its own textual and/or expressive traditions” (20).  Oriental mapping structured “the cultural logic of the modern, bourgeois West in its outward orientation” (11) and facilitated the expansionist “transformation of societies on a world scale” (90).  In non-Western societies it fabricated “forms of cultural authority tied to the claim to authenticity of (religious, cultural, and national) ‘tradition’” (27).

    Orientalism was first activated in the production, periodization, and territorialization of India.  “What the early generation of Orientalists encountered on the subcontinent was not one single culture of writing but rather a loose articulation of different, often overlapping but also mutually exclusive, systems based variously on Persian, Sanskrit, and a large number of the vernacular registers, often more than one in a single language, properly speaking” (104-5).  To make sense of this variety and complexity, they re-structured it completely on the basis of the only model they knew and trusted, the historicist narrative of an evolutionary national history.  “The German and eventually pan-European discourse of world literature is thus fundamentally indebted to and predicated on” (104) the British colonial project of Indological philology, launched near the end of the 18th century.  “It is in this manner, by providing the materials and the practices of a new cosmopolitan (as well as indigenist or particularist) conception of the world as linguistic and cultural assemblage, that English began to supplant the neoclassical order on the continent in which above all others French and France had provided the norms for literary production” (109).  Non-Western textual traditions entered the literary space as “literature” through the philological knowledge revolution that included the “discovery” of classical languages in the East and the invention of their family tree (58).  Eastern writing practices were absorbed into “literature” when their ancient works were classicized, that is, established as the original tradition of a civilization and arranged as its core national canon.

    Mufti documents “that Orientalist theories of cultural difference are grounded in a notion of indigeneity as the condition of culture – a chronotope, properly speaking, of deep habitation in time – and that therefore nationalism is a fundamentally Orientalist cultural impulse” (37).  What he calls the “chronotope of the indigenous” (74) consists of “spatiotemporal figures of habitation” (74) deeply rooted in both place/territory and time/history (129).  Its territorially common ground validates “the authenticity of tradition” (112).  Consequently, the task of genealogical inquiry is “to give a historical account of the acquisition of literary history . . . by a vast, diffuse, and internally differentiated body of writing … a historical (and critical) account of the . . . ascription of historicality . . . structured around the chronotope of the indigenous” (143).  The Orientalist practice of indigenization standardized the pluralist logic of a pre-modern cultural space into a differentiated linguistic-literary field and ushered it into the colonial “world republic of letters.”

    The “dual process of indigenization” (116) of language, literature, and culture, which incorporates the intertwined strategies of historicism and Orientalism, consisted in classicizing (say, into Sanskrit) a civilization (say, the Indo-Persian one) and vernacularizing (say, into Urdu and Hindi) its cosmopolitanism (say, the subcontinental one).  Thus, through indigenization, Indian writing essentialized itself into a national literature in order to be admitted to the Orientalist canon of world literature and join the global system of different and unique cultures.  The overlapping colonial cultural projects of indigenization “in the name of return to the origin” (173) and vernacularization as recovery of “authenticity” (251) are inseparable from bourgeois modernization (119).  “It is thus in English as cultural system, broadly conceived – namely, in the new Indology and its wider reception in the Euro-American world – that the subcontinent was first conceived of in the modern era as a single cultural entity, a unique civilization with its roots on the Sanskritic and more particularly Vedic texts of the Aryans. . . .  The idea that India is a unique national civilization in possession of a ‘classical’ culture was first postulated on the terrain of literature, that is, in the very invention of the idea of Indian literature in the course of the philological revolution” (109).  The encounter between Oriental philology and Occidental literature produced a national literary model that inspired the Indian national sentiment and identity (115) and created the “institution of Indian literature” (37, 73).

    I have constructed here the chronological genealogy of world literature that drives Mufti’s argument, the linear story that is plotted in his book through complex discussions of practices, notions, and texts.  The “world” of world literature consists of indigenous cultures using vernaculars to sustain literature as their national institution.  Their heterogeneity is predicated on standardized difference, their cosmopolitanism is based on the nation-state, their unity guaranteed by unequal power relations, and they can all be traced to the Orientalist construction of the colonial archive, be it registry, collection, or museum.  Mufti puts into practice with great integrity and virtuosity his conviction that “the task of criticism today is at the very least the untangling and rearranging of the various elements presently congealed into seemingly distinct and autonomous objects of divergent literary histories.  The critical task of overcoming the colonial logics persistently at work in the formation of literary and linguistic identities today is thus indistinguishable from the task of pushing against the multiple identarian assumptions, colonial and Orientalist in nature, of Hindi and Urdu’s mutual and religiously marked distinctness and autonomy.  A post-colonial philology of this literary and linguistic complex can never adequately claim to be produced from a position uncontaminated by the language polemic that now constitutes it and can only proceed by working through its terms.  This secular-critical task, furthermore, corresponds not to the erection of some image of a heterogeneous past but to the elaboration of the contradictory contemporary situation of language and literature itself” (128-9).  Forgetting English is possible only in English.

    He advocates resistance both to the colonial gaze and to national authenticity, asking fellow scholars to “forget” (that is, learn to question by working with) not only English and the “world” in world literature but also the prefix in post-colonial.  “If, on the one hand, I urge world literature studies to take seriously the colonial origins of the very concept and practices they take as their object of study, on the other, I hope to question the more or less tacit nationalism of many contemporary attempts to champion the cultural products of the colonial and postcolonial world against the dominance of European and more broadly Western cultures and practices” (53).  This position exemplifies the notion of a contrapuntal criticism that takes into account intertwined perspectives and discourses. “No self-described attempt to ‘return’ to tradition, religious or secular, can sustain its claim to be autonomous of ‘the West’ as Other. . . . No attempt at self-definition and self-exploration can therefore bypass a historical critique of the West and its emergence into this particular position of dominance.  And, in this sense, the critique of the West and the logics of its imperial expansion from a postcolonial location is in fact a self-critique, since this location is at least partially a product of that historical process” (153-4).

    While both Orientalism and Occidentalism/Anglicism seek to capture a “one-world” reality, they are caught between the local and the cosmopolitan, the particular and the universal (3).  By consciously operating within these tensions without being at home in either of their poles, the exilic perspective introduced by Auerbach and later advocated by Said can avoid both cosmopolitan detachment and communal narcissism.  An “exilic rethinking of the philology of world literature” (41) would become the basis for a radicalized “philology as homeless practice” (200), for a “historically engaged and linguistically attuned” (241) secular criticism with a “missing homeland” (202).  Supporting neither transnational nor autochthonous social imaginaries, it can provide a dialectically alert account of concrete cultural circumstances “because it captures simultaneously the violent exclusions of the national frame, the material reality of its (physical as well as symbolic) borders, the dire need to overcome its destructive fixations, and its inescapability in the present moment” (194).

    In his conclusion, addressing the central case of the post-colonial subcontinent, Mufti supplements the exilic perspective with an additional one, also drawn from twentieth-century experience, which promises to offer intrinsic means of study by drawing explicitly on partition as condition and modality, since the “politics of linguistic and literary indigenization is a distinct element in the larger historical process that culminated in the religio-political partition of India in 1947 and is thus at the same time an important element in the history of the worldwide institution of world literature” (38).  In a manner reminiscent of the ways in which post-Heideggerian thought puts metaphysics “under erasure,” Mufti puts the subcontinent under partition.  “In light of the historical analysis of the cultural logic of Orientalism-Anglicism operating in the long, fitful, and ongoing process of bourgeois modernization in the subcontinent that I have attempted here, the task of criticism with respect to the field of culture and society in the region is therefore to adopt partition as method, to enter this field and inhabit the processes of its bifurcation, partition not merely as event, result, or outcome but rather as the very modality of culture, a political logic that inheres in the core concepts and practices of the state” (200).  Not a closed part of the past or even its living memory, partition is “the very condition of possibility of nation-statehood and therefore the ever-renewed condition of national experience in the subcontinent” (201).  The political logic of partition is inherent in the normative majoritarianism of the modern nation-state, which by definition entails the minoritarization of certain groups and practices, a crisis of legitimacy leading to the partition of society (200-1).  “To argue for partition as method is, therefore, to argue for extracting submerged modes of thinking and feeling from the ongoing historical experience that is partition” (202).

    Furthermore, in the twenty-first century this condition operates far beyond the subcontinent.  Ours is a time of proliferating boundaries where the traditional institution of the border of the nation-state is undergoing internal and external challenges and transformations, with some of its functions “redistributed throughout social space” (7) and others globalized, turning it into a “universalized institution” (201).  What is the meaning of world literature in a world where borders are traversing urban, regional, national, and transnational environments and literature often functions as a generalized cartography?  With this question I will proceed to indicate just a few of the many fields of inquiry where this book deserves to be studied and activated.

    Mufti’s notion of “partition as method,” which enriches the problematic of books like Asia as Method:  Toward Deimperialization (2010) by Kuan-Hsing Chen and Border as Method (2013) by Sandro Mezzadra and Brett Neilson, should be of obvious interest to Border Studies, an interdisciplinary field that since the 1980s has been examining geographical, political, economic, cultural, and other boundaries primarily in Asia, Africa, and Latin America and with an emphasis on matters of migration and gender.  The field started by looking at legal, political, and lexical definitions but it has been expanding to consider how borderscapes are narrated, performed, and de-legitimized in the Global South.  An anatomy of world literature would complement current studies of the ways in which, in addition to lands, borderings distribute languages, communities, stories, signs, and jurisdictions.  The order of literature since its national and Oriental origins shows borders working as epistemological devices and markers of relations rather than lines and locations.

    An adjacent and even more interdisciplinary field is the study of territories and their flux in the integrated post-industrial world.  Influenced by the work of Deleuze & Guattari (with their interests from “minor literature” to plateaus to nomadology), it has radically shifted emphasis from the structure to the flow of capital and the dominant econo-semiotic system, which Mufti too has done with literature.  The “assemblage of enunciation” might fit well with his notion of the writing corpus, and the “plane of immanence” with his “plane of equivalence.”  Most importantly, the Deleuzian “rhythm” of difference and repetition would resonate with the contrapuntal circulation of literature in the post-colonial milieu.

    The sociology of culture would benefit greatly from attention to the emergence of the literary sphere and its citizenry, whose members often belong to the national intellectual aristocracy.  Given its interest in the ways in which Bourdieu’s habitus operates according to a logic of practice, it would examine the subfield of literature within the objects, norms, and practices of the cultural field.  Mufti’s work on production and appropriation, and above all domination through symbolic power, provides numerous examples of the kind of capital gained and interest served by disinterested taste as competence and distinction as performance.

    The quest for cultural capital and symbolic power has been driven by the counter-political ideology of the aesthetic state, a milieu and habitus where aesthetic practices constitute the highest form of politics.  Mufti contributes greatly to an understanding of this regime, including the institutions it establishes and cherishes.  The bourgeois subject, who is the citizen of that ideal state, responds to the functional differentiation of society in distinct borderlands with the democratization of art and the sacralization of high culture. Through the proper literary education, fiction and poetry train readers to achieve a Kantian freedom of aesthetic autonomy by giving the interpretive law to themselves above the constraints of any internal or external partition.

    The path from the sociology of culture to its ideology may lead next to its ethics, namely, art as a spiritual ascesis.  Mufti has discussed the political rationality of the humanities and the aesthetically administered university.  His rigorous genealogical approach may be supplemented by Ian Hunter’s interest in humanism and the pre-national state of the sixteenth and seventeenth centuries as well as the aesthetic discipline of literary cultivation that emerged with Romantic literature and philosophy.  The origins of the philological skills that mobilized Orientalism to create world literature may also lie in a combination of artistic pleasure as worldly ethical competence with literary criticism as a moral practice of the self, that is, in the aesthetico-ethical training of the self in interpretive (self-)problematization which first produced the reader of literature.

    In addition to chronicling the emergence of world literature, Aamir Mufti’s Forget English! reflects on “just about the most encompassing cultural concept of our times, the notion of the systematic totality of the expressive productions of nothing less than humanity in its entirety” (252).  Through a genealogy of literary comparison it raises the question of doing comparative humanities on a global level.  That is why it ought to have a broad scholarly and pedagogical impact.  This is not a book that scholars can simply read with profit and then add to their bibliographies and syllabi.  It invites reflection on what it means to compare at a time of universal comparability, that is, when everything is comparable (and also appears contemporary) to everything else.  Rather than seeking to add unknown or neglected materials to our canons, it challenges us to reconfigure canon making itself as well as the way we put together panels, collective volumes, or institutes.  Ultimately, Mufti is proposing that, in addition to new critiques, World Humanities needs new ways of constituting the humanities as a common.

    Vassilis Lambropoulos is the C. P. Cavafy Professor of Modern Greek in the Departments of Classical Studies and Comparative Literature of the University of Michigan.  He is the author of Literature as National Institution (1988).

  • Elizabeth Losh — Hiding Inside the Magic Circle: Gamergate and the End of Safe Space

    Elizabeth Losh — Hiding Inside the Magic Circle: Gamergate and the End of Safe Space

    by Elizabeth Losh, The College of William and Mary

    The Gamergate controversy of recent years has brought renewed public attention to issues around online misogyny, as feminist game developers, critics, scholars, and fans of independent video gaming have been targeted by intense campaigns of digital harassment that seem to threaten their fundamental rights to personal privacy, bodily safety, and sexual agency. Feminists under attack by users of the hashtag #GamerGate complain of being silenced, as they report being disciplined for imagined infractions of supposed sexual, social, journalistic, and ludic norms in computational culture with punishing messages of censure, ridicule, exclusion, and violence. As noted by the mainstream news media, extremely aggressive tactics have been deployed, including leaking women’s sensitive private information – such as unlisted addresses and social security numbers – to the web (a practice known as “doxxing”), placing false reports with law enforcement or emergency first responders (a practice known as “swatting”), and highly personalized stalking with rapid escalations of threats of graphic violence that are often sexualized as rape or racialized as lynching. Although it may be important for the eloquent first-person testimony of the terrorized women themselves to be given priority as speech acts that command attention in resisting prevailing misogyny, the women’s antagonists often are allowed to remain invisible. Furthermore, allies presuming to advocate for the feminist victims of Gamergate may not adequately honor the stated wishes for peace, privacy, and closure that those experiencing online violence may express (Quinn 2015). This essay attempts to examine the larger discursive context of Gamergate and why hardcore gamers who were fans of AAA videogames – often with military storylines and first-person shooter game mechanics – constructed a seemingly illogical and paranoid explanatory theory about so-called “social justice warriors” (Bokhari et al. 2015) or “SJWs,” pursuing unfair advantage to sway the game industry.

    How do we understand how Gamergaters’ claims for noninterference and sovereignty in game worlds and online forums function alongside their claims for no-holds-barred investigations and public debates? Common rhetorical tactics deployed by Gamergaters include using rights-based language to further this specific variant of the men’s rights movement (Esmay 2014) and making appeals to the values of a supposedly rational public sphere (MSMPlan 2015). As these hardcore gaming fans deny the materiality, affect, embodiment, labor, and situatedness of new media, they also affirm positive notions about the exceptionalism of a realm defined – in Nicholas Negroponte’s terms – by bits rather than atoms. Gamergaters are particularly vehement in denying that “online violence” is a possibility with tweets such as “>violence >online pick one” and “will you please point me to the online killing fields where all the bodies from violence online are kept?” (Wernimont 2015). The Gamergate vision of digital culture is one of disembodied and immaterial interactions in which emotional harm is considered to be nonviolent.

    According to Gamergate accounts, the assumption that hardcore gamers representing masculine white privilege were under attack was also apparently buttressed by a number of online articles by game journalists suggesting that the species was endangered and soon to be extinct. Gamers were declared “over” (Alexander 2014), at their “end” (Golding 2014), or facing the “death” of their collective identity (Plunkett 2014). The arguments made for years by feminist game collectives for pursuing the large market share in lower-status “casual” games, often played by women, had finally seemed to create inroads for independent developers. At the same time Gamergaters described their defensive position as a response to what they often characterized as a feminist “incursion” or “invasion” of gaming that was conceptualized as a substantive attack or threat to gamers. So-called “men’s rights” proponents – who may characterize themselves as “Men’s Human Rights Activists” – differentiated themselves from the distributed and heterogeneous population of gamers but also proclaimed that “the same people attacking Gamergate have been attacking us for years, using exactly the same tactics” (Esmay 2014). According to Breitbart columnist Yiannopoulos (2014a), “cultural warriors” arrived on the scene of gaming like “genocidal, psychopathic aliens in Independence Day;” these “social justice warriors” allegedly attempted to colonize a diverse community, but their “killjoy” advances were repelled and defenders declared them “not welcome in the gaming community.” According to this columnist, supposedly “politeness and persistence” had guaranteed victory in “the culture wars against guilt-mongerers, nannies, authoritarians and far-Left agitators.” While Sara Ahmed (2010) has explicitly called for self-identified “feminist killjoys” to disrupt the perpetuation of patriarchal false consciousness and the enforcement of positive affect in society, the perceived opponents of Gamergate are often cast as the aggressors despite what may be deep desires to participate in the gaming communities that exclude them.

    Decades before Gamergate, the Dutch historian Johan Huizinga (2014) described what he called the “magic circle” of the temporary world constituted by a game, which appears to function as an isolated “consecrated spot” within which “special rules obtain” for performances apart from everyday concerns (10). Gamergaters often use similar terminology to discuss how game spaces are intended to serve as a refuge from real-world behavioral constraints and the restrictions of social roles, as in the case of one Breitbart blogger seeking to exclude “angry feminists” and “unethical journalists” from interference with game play.

    Gamers, as dozens of readers have told me in the relatively short time I have been covering the controversy now called #GamerGate, play games to escape the frustrations and absurdities of everyday life. That’s why they object so strongly to having those frustrations injected into their online worlds. The war in the gaming industry isn’t about right versus left, or tolerance versus bigotry: it’s between those who leverage video games to fight proxy wars about other things, introducing unwanted and unwarranted tension and misery, and those who simply want to enjoy themselves. (Yiannopoulos 2014a)

    Gamergate advocates claim that video games are expected to be arenas where gamers can assert their sovereignty and self-determination in spaces that can’t be “leveraged” or annexed to “fight proxy wars” by non-gamer outsiders.

    According to Huizinga (2014), the arena of game play is characterized by the freedom of voluntary participation, disinterested behavior, and an opposition to serious conduct. Similar criteria also often are presented as premises for action in the rhetoric of Gamergate enthusiasts in their comments on various sites for public debate. For example, feminist game developers and critics may be accused of coercing and manipulating potential allies who are journalists through sexual liaisons, romantic promises, or appeals to social justice that invoke guilt and shame. Feminist opponents of Gamergaters are also characterized on sites such as Breitbart as “self-promoters” and “opportunists” and labeled as “egotistical” people who “beg for sympathy and cash” (Yiannopoulos 2014b). Thus, according to the logic of free choice, feminist “social justice-oriented art” in digital culture is aimed at “robbing players of agency and individualism” in every possible kind of engagement (Yiannopoulos 2014b).

    Personal freedom and a separation from material interests or a profit motive are often cited as ethical values shared by Gamergate, although many of its tactics are not at all solemn or high-minded. Active Gamergaters on the Escapist and 8chan emphasize their own diverse and distributed structure, and these anarchic swarms of participants take action “for the lulz,” much as members of Anonymous and 4chan have engaged in outing and calling out campaigns (Coleman 2013). Images of feminist gamers are altered with editing software, phrases like “online violence” are mocked, and fake identities are manufactured with puns and inside jokes. For example, in a crowd-funding effort to promote women in games who disavowed feminist “SJWs,” Gamergate forum members created an elaborate green-eyed and hoodie-wearing fictional persona intended to represent a pro-Gamergate libertarian “everywoman.” The avatar dubbed “Vivian James” wears the four-leafed clover of 4chan, “tough-loves video games,” and “loathes dishonesty and hypocrisy” (“The Birth of Vivian” 2015).

    While Gamergaters emphasize “personal responsibility” and “individual agency” (Yiannopoulos 2014b) as values, feminist critics tend to emphasize interdependence and states of being always-already subject to the coercions of others. In Huizinga’s (2014) terms, feminists inside the magic circle may be perceived as “spoil-sports” who must be “ejected” from the “community,” because they are attempting to break the magic world by failing to acknowledge its misogynistic conventions (11-12). As Anastasia Salter (2016) notes, in Huizinga’s analysis the spoil-sport is most visible in “boys’ games,” thereby establishing solidarity around youthful masculinity as the norm.

    By discussing misogyny in different venues for conversation among networked publics – in game forums, blogs, or vlogging communities, and even within live multi-player gaming itself – feminists are cast as a disruptive presence. In the Gamergate worldview, social justice warriors must be treated as aggressors and repulsed from the magic circles of game worlds so that these spaces can be reclaimed, returned to their proper exceptional status, and secured against real-world incursions.

    Of course, the concept of “safe space” has been central to the history of the women’s liberation movement and its associated consciousness-raising efforts. After all, feminists have reasoned that safe space might be necessary to explore intimate issues about sexuality and reproductive health – which might even include techniques for gynecological self-examination championed by foundational texts like Our Bodies, Ourselves – and safe space would also be needed to share confidences about personal histories of rape, domestic violence, and other forms of gendered trauma. How safe space is constituted can vary along a number of different axes. For example, as awareness about “microaggressions” – a term used to describe the automatic or unconscious utterance of subtle insults (Solorzano, Ceja, & Yosso 2000) – has proliferated, participants at feminist events may be asked to be mindful of their own assumptions, privileges, and power relations in social gatherings. The full sensorium of potential kinds of assault may also be invoked in defining safe spaces, so those speaking loudly or wearing scent may be prohibited from these activities to protect those intolerant, averse, or allergic to certain stimuli.

    Feminists themselves have been reevaluating the assumed need for safe space for a variety of reasons. While media outlets grappling with the concept of “trigger warnings” may characterize any special treatment of vulnerable individuals as coddling or “hiding from scary ideas” (Shulevitz 2015), feminists themselves are often concerned about how the gestures of exclusion mandated by protective impulses enforce particular norms counter to the goal of empowerment. Some argue that “brave spaces” that encourage public acts of asserting identity or declaring solidarity may be more productive than private “safe spaces” (Fox 2004). Homogeneous safe spaces designed for the security of cisgendered whites may be criticized as excluding transgender people (Browne 2009) or people of color (Halberstam 2014). As Betty Sasaki (2002) observes, “safety” can become “the code word for the absence of conflict, a tacit and seductive invitation to collude with the unspoken ideological machinery of the institutional family” (47). And Donadey (2009) points out the irony “that radical feminist pedagogy tends to replicate the assumptions of the bourgeois concept of the public sphere” (214).

    In addition to using the #Gamergate and #SJW (for “social justice warrior”) hashtags on social media platforms such as Twitter, Gamergate adherents frequently use #NotYourShield, which indicates that feminists shouldn’t be shielded from criticism merely because they might claim alliances with underrepresented groups, such as women or minorities, given the fact that members of these groups might not identify with feminism or feel exploited, disenfranchised, or excluded from hardcore gaming communities. #NotYourShield allies of Gamergate may embrace the quintessential hardcore gamer identity of AAA titles with military themes, or may indicate that they are content with conventionally feminized casual games played on mobile devices and don’t want to interfere with so-called “real” games. While Gamergaters may protect the borders of their own magic circles, they criticize those who claim feminist discourse operates in safe spaces devoid of challenges from opponents. Affixing the #NotYourShield piece of metadata to a message supports Gamergaters’ contentions that feminists use the victimization of women and people of color to shield themselves unfairly from rebuttals or tests of truth claims. In videos such as “#NotYourShield – We Are Gamers,” choruses of voices are carefully curated to emphasize “corruption” and “censorship” as features of feminism, and “transparency” and call-out culture as features of Gamergate.

    Although Huizinga’s (2014) magic circle may be more open to public spectatorship than the private sphere of feminist safe space, it is also a zone of exception that is marked off by “secrecy” and “disguise,” according to Homo Ludens (13). Even if the rules for the magic circle are assumed to be uncontested, and the space of play is accepted as apart from the everyday world, the exceptional territory of game play could be a space of less violence (if mockery of authoritarian rulers is tolerated, as in the Bakhtinian carnivalesque) or more violence (if contact sports permit physical injuries that would normally be prosecuted as assault). Nonetheless, according to Edward Castronova (2007), the membrane of the magic circle “can be considered a shield of sorts, protecting the fantasy world from the outside world. The inner world needs defining and protecting because it is necessary that everyone who goes there adhere to the different set of rules” (147).

    Feminist game critics have begun to question Huizinga’s (2014) concept of a zone of exceptionalism, particularly as the legal, economic, and social consequences of game play are manifested in a variety of “real world” contexts. For example, Mia Consalvo (2009) challenges Castronova’s belief that “fantasy worlds” are a separate domain: “even as he might wish for such spaces, such worlds must inevitably leave the hands of their creators and are then taken up (and altered, bent, modified, extended) by players or users—indicating that the inviolability of the game space is a fiction, as is the magic circle, as pertaining to digital games” (411). Within game spaces of conflict and collaboration, players may bring different agendas into the magic circle, and thus it might be more difficult than Huizinga (or Castronova) imagines to reach consensus about the common rules of play. For example, when a guild of players in World of Warcraft decided to hold a funeral in an area for player-versus-player combat, other participants justified attacking the solemn ceremony in a coordinated raid on the grounds of asserting existing play conventions (Losh 2009). Consalvo further claims that the static, formalist vision of bounded play articulated by Huizinga and his disciples, a vision grounded in structuralist theory, ignores the fact that players are constantly evaluating context. Instead of the magic circle, she posits that players “exist or understand ‘reality’ through recourse to various frames” (415).

    For women, queer and transgender persons, and people of color who identify as gamers, neither the magic circle nor the safe space seems descriptive of the harsh settings of their game play experiences. As Lisa Nakamura (2012) observes, playing as a woman, a person of color, or a queer person requires extraordinary game skills and talent at a level of hyper-accomplishment because of the extremely rigorous “difficulty setting” of playing in an identity position other than straight white male. Unfortunately, being an exceptional individual in an exceptional space is often punished rather than rewarded. Moreover, as a woman of color, Shonte Daniels (2014) has insisted that “gaming never was a safe space for women” because “their identity makes them vulnerable to threats or harassment.” However, she also speculates that Gamergate may prove to be “both a blessing and a curse,” given how much attention to online misogyny has been generated by the intensity and egregiousness of Gamergate behavior.

    Many date the Gamergate controversy from fall 2014 – when harassment of dozens of feminists in the videogame industry, including game developers Zoë Quinn and Brianna Wu and cultural critic Anita Sarkeesian, made headlines. However, online misogyny and gender-based aggression have had a long history in digital culture that goes back to bulletin boards, MOOs, and MUDs and the existence of virtual rape in early forms of cyberspace (Dibbell 1998). To coordinate the current campaign of harassment, IRC channels and online forums such as Reddit, 4chan, and 8chan were used by an anonymous and amorphous group that came to be represented by the Twitter hashtag #GamerGate after actor Adam Baldwin deployed a familiar suffix associated with prominent political cover-ups. According to the Wikipedia entry, Gamergate “has been described as a manifestation of a culture war over gaming culture diversification, artistic recognition and social criticism of video games, and the gamer social identity. Some of the people using the Gamergate hashtag allege collusion among feminists, progressives, journalists and social critics, which they believe is the cause of increasing social criticism in video game reviews” (“Gamergate Controversy” 2015).

    It is worth noting that Wikipedia’s handling of its own distributed labor practices defining Gamergate has had a contentious history that included a personal invitation to Gamergaters from Wikipedia founder Jimmy Wales to contribute to improving the Gamergate article (Wales 2014), a pointed rejection of financial contributions to Wikipedia from Gamergaters (“So I Decided to Email Jimbo” 2015), and a defense of banning Wikipedia editors perceived as biased against Gamergate (Beaudette 2015). Ironically, during this intense period of engagement with the “toxic” participants of Gamergate eventually dismissed by Wales, Wikipedia often deployed a rhetoric about volunteerism, disinterested conduct, and playing by a neutral set of rules that paralleled similar rhetorical appeals from Gamergaters.

    Attention to this recent controversy – about who is a gamer and what is a game – has already generated a literature of scholarly response that focuses, as this essay does, on Gamergate rhetoric itself. Shira Chess and Adrienne Shaw’s (2015) essay, “A Conspiracy of Fishes,” analyzes how a particular cultural moment in which “masculine gaming culture became aware of and began responding to feminist game scholars” produced conspiratorial discourses with a specific internal logic that shouldn’t be dismissed as nonsensical:

    It is less useful to consider the validity of a conspiracy in terms of actual persecution, and is more potent if we look at it in terms of a combination of perceived persecution and an examination of the anxieties that the conspiracy is articulating. From this perspective, we can look at gaming culture as a somewhat marginalized group: For years those who have participated in gaming culture have defended their interests in spite of claims by popular media and (some) academics blaming it for violence, racism, and sexism. A perceived threat opens a venue for those who feel their culture has been misunderstood—regardless of whether they are the oppressors or the ones being oppressed. It is easy to negate and mark the claims of this group as inconsequential, but it is more powerful to consider the cultural realities that underline those claims. (217)

    As Chess and Shaw point out, the gamer identity may function in the context of other kinds of intersectional identities, so that subjects for whom the personal is political can be imagined as oppressors in one context and as the oppressed in another.

    In addition to the primary strategy of constructing a narrative of persecution aimed at a marginalized group, Gamergate is also concerned with the secondary strategy of mapping supposed networks of influence across publication venues, media genres, knowledge domains, political spheres, and economic sectors. These Gamergate infographics seem to have begun with visualizations reminiscent of Wanted posters, in which names and photographs of individual offenders were clustered in particular interest areas. For example, 4chan assembled a list of “SJW Game Journalists” that was republished on Reddit and that goes far beyond the initial allegations of impropriety about game reviewing at Kotaku to target writers at over a dozen other publications.

    As Gamergaters go down the “rabbit hole” of exploring possible connections and exposing hidden networks, they eventually cast political and educational institutions as agents in the conspiracy, with a particular focus on DiGRA, the Digital Games Research Association, which was founded in 2003 and holds an international conference each year. One diagram shows the tentacles of DiGRA extending into online venues for gaming news and reviews, such as Kotaku, Gamasutra, and Polygon, as well as mainstream publications with a print tradition, such as The Guardian and TIME, and conference venues for many AAA games, such as the annual Game Developers Conference (GDC), which was founded in 1988 with a focus on fostering more creativity in the industry. Pictures of offenders/participants in the network continued to be featured in this denser and more recursive form of network mapping, as though facial recognition would be a key literacy for Gamergaters.

    It is worth noting that many feminists would describe DiGRA as far from being a haven from misogyny, given existing biases in game studies that may privilege academics with ties to computer science, corporate start-ups, or other male-dominated fields. Members of the feminist game collective Ludica have described strong reactions of denial when they declared at DiGRA in 2007 that the “power elite of the game industry is a predominately white, and secondarily Asian, male-dominated corporate and creative elite that represents a select group of large, global publishing companies in conjunction with a handful of massive chain retail distributors” and thus constitutes a “hegemonic” power that “determines which technologies will be deployed, and which will not; which games will be made, and by which designers; which players are important to design for, and which play styles will be supported” (Fron et al. 2007). The rhetoric of the Ludica manifestos about how games and gamers were being defined too rigidly by an industry enamored of AAA titles often ran counter to the origin stories of organizations such as GDC and SIGGRAPH.

    The third key strategy of Gamergaters – in addition to fabricating the persecution narrative and the influence maps – is formulating threats of financial retaliation. If liberal members of the press and academic and professional associations in game studies and game development benefit from a supposed flow of money, social capital, and privileged access to career advancement, libertarian Gamergaters will thwart them with economic threats. This creates a paradoxical dynamic in which Gamergaters both assert an ethos of economic disinterest – because gaming is supposed to be a non-profit/non-wage activity that is separate from the accumulation of capital in the real world – and seek to exercise their collective power to crowdfund sympathizers and to boycott, divest from, and freeze the assets of feminist allies and ally organizations. Advertisers are besieged with consumer complaints about the ethics of reporting in game publications, university employees are reported to administrators with accusations of frittering away public funds, and even donations to Wikipedia are withdrawn by indignant Gamergaters.

    Because feminists supposedly use financial interest as a lever, Gamergaters must also use financial interest to assert the fairness, neutrality, and civility of a rational public sphere, which is tied to their fourth strategy: policing discourse. In regulating language in order to keep it freely flowing in a neoliberal marketplace of ideas, where the best notions will be the most valued, Gamergaters explicitly refuse to tolerate what they deem hyperbolic and hysterical feminist “strawmanning” and “insulting.” Insisting that harassers are a statistically insignificant fraction of their movement – a counterfactual account of their power to terrorize targets and dominate channels of communication – Gamergaters deploy language reminiscent of Robert’s Rules of Order as commonly as more stereotypical forms of trolling.

    This does not mean that the campaigns of Gamergate to construct us-and-them narratives, to make explicit and to visualize connections in social networks, to block some financial transactions and facilitate others, and to regulate discourse with structures of rational dialogue, leveling effects, and tone policing are not misogynistic. They defend and enable doxxing, swatting, and stalking behaviors that undermine the very barriers between virtual reality and material existence that are central to their contradictory ideologies of exceptionalism and common jurisdiction.

    Nurturing diversity among game players and developers (Fron et al. 2007) has been a work in progress for the better part of a decade, but in the wake of Gamergate, hundreds of prominent signatories who asserted the “right to play games, criticize games and make games without getting harassed or threatened” published an “open letter to the gaming community” (IGDA 2014). The fact that this pointed defense of feminist gamers, critics, and designers also used rights-based language might be instructive for better understanding the discursive context of Gamergate as well.

    The Italian biopolitical philosopher Roberto Esposito (2010, 2011) has theorized that two conflicting modalities of “community” and “immunity” operate when members either accept or resist the obligations of the social contract. Looking at the rhetoric of Gamergaters about the magic circle and how they caricature the rhetoric of feminists about safe space, we see how these oppositions are underexamined, and we can ask why opportunities for reflection and reflexive thinking about intersectionality are being foreclosed.

    Works Cited

    • Ahmed, Sara. 2010. The Promise of Happiness. Durham: Duke University Press.
    • Alexander, Leigh. 2014. “‘Gamers’ Don’t Have to Be Your Audience. ‘Gamers’ Are Over.” Gamasutra, August 28. http://www.gamasutra.com/view/news/224400/Gamers_dont_have_to_be_your_audience_Gamers_are_over.php.
    • Bailey, Moya. 2015. “#transform(ing)DH Writing and Research: An Autoethnography of Digital Humanities and Feminist Ethics.” Digital Humanities Quarterly 9, no. 2.
    • Beaudette, Philippe. 2015. “Civility, Wikipedia, and the Conversation on Gamergate.” Wikimedia Blog. January 27. http://blog.wikimedia.org/2015/01/27/civility-wikipedia-Gamergate/.
    • Bokhari, Allum, and Milo Yiannopoulos. 2015. “Entertainment Industry Says ‘No More’ to Social Justice Warriors.” Breitbart. July 20. http://www.breitbart.com/big-hollywood/2015/07/20/enough-entire-entertainment-industry-says-no-more-to-social-justice-warriors/.
    • Browne, Kath. 2009. “Womyn’s Separatist Spaces: Rethinking Spaces of Difference and Exclusion.” Transactions of the Institute of British Geographers, New Series, 34 (4): 541–56.
    • Castronova, Edward. 2007. Synthetic Worlds: The Business and Culture of Online Games. Chicago: University of Chicago Press.
    • Chess, Shira, and Adrienne Shaw. 2015. “A Conspiracy of Fishes, Or, How We Learned to Stop Worrying About #Gamergate and Embrace Hegemonic Masculinity.” Journal of Broadcasting & Electronic Media 59, no. 1: 208–20.
    • Coleman, Beth. 2011. Hello Avatar: Rise of the Networked Generation. Cambridge, MA: MIT Press.
    • Coleman, E. Gabriella. 2014. Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous. Brooklyn, NY: Verso.
    • Consalvo, Mia. 2009. “There Is No Magic Circle.” Games and Culture 4, no. 4: 408–17.
    • Daniels, Shonte. 2014. “Gaming Was Never a Safe Space for Women.” RH Reality Check. November 10. http://rhrealitycheck.org/article/2014/11/10/gaming-never-safe-space-women/.
    • Dibbell, Julian. 1998. “A Rape in Cyberspace.” http://www.juliandibbell.com/articles/a-rape-in-cyberspace/.
    • Donadey, Anne. 2009. “Negotiating Tensions: Teaching about Race in a Graduate Feminist Classroom.” In Feminist Pedagogy: Looking back to Move Forward, edited by Robbin Crabtree, David Alan Sapp, and Adela C. Licona, 209–29. Baltimore, MD: Johns Hopkins University Press.
    • Esmay, Dean. 2014. “Keeping up with #Gamergate.” A Voice for Men. October 16. https://lockerdome.com/7754206970916417.
    • Esposito, Roberto. 2010. Communitas: The Origin and Destiny of Community. Stanford, CA: Stanford University Press.
    • ———. 2011. Immunitas: The Protection and Negation of Life. Cambridge and Malden, MA: Polity.
    • Fox, D. L., and C. Fleischer. 2004. “Beginning Words: Toward ‘Brave Spaces’ in English Education.” English Education 37, no. 1: 3–4.
    • Fron, Janine, Tracy Fullerton, Jacquelyn Ford Morie, and Celia Pearce. 2007. “The Hegemony of Play.” In Proceedings, DiGRA: Situated Play, Tokyo, September 24-27, 2007, 309–18. Tokyo, Japan. http://www.digra.org/dl/db/07312.31224.pdf.
    • “Gamergate Controversy.” 2015. Wikipedia, the Free Encyclopedia. https://en.wikipedia.org/w/index.php?title=Gamergate_controversy&oldid=682713753.
    • Golding, Dan. 2014. “The End of Gamers.” Dan Golding. August 28. http://dangolding.tumblr.com/post/95985875943/the-end-of-gamers.
    • Halberstam, Jack. 2014. “You Are Triggering Me! The Neo-Liberal Rhetoric of Harm, Danger and Trauma.” Bully Bloggers. July 5. https://bullybloggers.wordpress.com/2014/07/05/you-are-triggering-me-the-neo-liberal-rhetoric-of-harm-danger-and-trauma/.
    • Huizinga, Johan. 2014. Homo Ludens: A Study of the Play-Element in Culture. Mansfield Centre, CT: Martino Fine Books.
    • “IGDA Developer Satisfaction Survey Summary Report Available – International Game Developers Association (IGDA).” 2015. https://www.igda.org/news/179436/IGDA-Developer-Satisfaction-Survey-Summary-Report-Available.htm (accessed September 23, 2015).
    • Jacobs-Huey, Lanita. 2006. From the Kitchen to the Parlor: Language and Becoming in African American Women’s Hair Care. Oxford, UK, and New York, NY: Oxford University Press.
    • Koebler, Jason. 2015. “Dear Gamergate: Please Stop Stealing Our Shit.” Motherboard. http://motherboard.vice.com/read/dear-Gamergate-please-stop-stealing-our-shit (accessed September 24, 2015).
    • Levmore, Saul, and Martha Craven Nussbaum. 2010. The Offensive Internet: Speech, Privacy, and Reputation. Cambridge, MA: Harvard University Press.
    • Losh, Elizabeth. 2009. “Regulating Violence in Virtual Worlds: Theorizing Just War and Defining War Crimes in World of Warcraft.” Pacific Coast Philology 44, no. 2: 159–72.
    • MSMPlan. 2015. “The Flaws in Adrienne Shaw’s Paper on Gamergate and Conspiracy Theories.” Medium. March 18. https://medium.com/@MSMPlan/the-flaws-in-adrienne-shaw-s-paper-on-Gamergate-and-conspiracy-theories-7fc91df43bc.
    • Nakamura, Lisa. 2012. “Queer Female of Color: The Highest Difficulty Setting There Is? Gaming Rhetoric as Gender Capital.” Ada: A Journal of Gender, New Media & Technology 1, no. 1. http://adanewmedia.org/2012/11/issue1-nakamura/.
    • Negroponte, Nicholas. 1995. Being Digital. New York: Knopf.
    • Plunkett, Luke. 2014. “We Might Be Witnessing The ‘Death of An Identity.’” Kotaku, August 28. http://kotaku.com/we-might-be-witnessing-the-death-of-an-identity-1628203079.
    • Quinn, Zoe. 2015. “August Never Ends.” Quinnspiracy Blog. January 11. http://ohdeargodbees.tumblr.com/post/107838639074/august-never-ends.
    • Salter, Anastasia. 2016. “Code before Content? Brogrammer Culture in Games and Electronic Literature.” Presented at the Electronic Literature Organization, University of Victoria, June 10.
    • Sargon of Akkad. 2014. A Conspiracy Within Gaming #Gamergate #NotYourShield. https://www.youtube.com/watch?v=yJyU7RSvs_s.
    • Sasaki, Betty. 2002. “Toward a Pedagogy of Coalition.” In Twenty-First-Century Feminist Classrooms: Pedagogies of Identity and Difference, edited by Amie A. Macdonald and Susan Sánchez-Casal, 31–57. New York, NY: Palgrave Macmillan.
    • Shield Project. 2014. #NotYourShield – We Are Gamers. https://www.youtube.com/watch?v=SYqBdCmDR0M#t=81.
    • Shulevitz, Judith. 2015. “In College and Hiding From Scary Ideas.” The New York Times, March 21. http://www.nytimes.com/2015/03/22/opinion/sunday/judith-shulevitz-hiding-from-scary-ideas.html.
    • “So I Decided to Email Jimbo…” 2015. Reddit. https://www.reddit.com/r/KotakuInAction/comments/2pphuo/so_i_decided_to_email_jimbo/cmyzva7?context=3 (accessed September 25, 2015).
    • Solorzano, Daniel, Miguel Ceja, and Tara Yosso. 2000. “Critical Race Theory, Racial Microaggressions, and Campus Racial Climate: The Experiences of African American College Students.” The Journal of Negro Education 69, no. 1/2: 60–73.
    • “The Birth of Vivian.” 2015. http://i.imgur.com/FdqKFwu.jpg (accessed September 27, 2015).
    • Wales, Jimmy. 2014. “I Have an Idea for pro #Gamergate Folks of Good Will. Go to http://Gamergate.wikia.com/Proposed_Wikipedia_Entry … and Write What You Think Is an Appropriate Article.” Microblog. @jimmy_wales. November 12. https://twitter.com/jimmy_wales/status/532624325694992385?ref_src=twsrc%5Etfw.
    • Wernimont, Jacqueline. 2015. “A ‘Conversation’ about Violence against Women Online (with Images, Tweets) · Jwernimo.” Storify. https://storify.com/jwernimo/a-conversation-about-violence-against-women-online (accessed September 23, 2015).
    • Yiannopoulos, Milo. 2014a. “Gamergate: Angry Feminists, Unethical Journalists Are the Ones Not Welcome in the Gaming Community.” Breitbart. September 14. http://www.breitbart.com/big-hollywood/2014/09/15/the-Gamergate-movement-is-making-terrific-progress-don-t-stop-now/.
    • ———. 2014b. “The Authoritarian Left Was on Course to Win the Culture Wars… Then Along Came #Gamergate.” Breitbart. November 12. http://www.breitbart.com/london/2014/11/12/the-authoritarian-left-was-on-course-to-win-the-culture-wars-then-along-came-Gamergate/.
  • Zachary Loeb – What Technology Do We Really Need? – A Critique of the 2016 Personal Democracy Forum

    Zachary Loeb – What Technology Do We Really Need? – A Critique of the 2016 Personal Democracy Forum

    by Zachary Loeb

    ~

    Technological optimism is a dish best served from a stage. Particularly if it’s a bright stage in front of a receptive and comfortably seated audience, especially if the person standing before the assembled group is delivering carefully rehearsed comments paired with compelling visuals, and most importantly if the stage is home to a revolving set of speakers who take turns outdoing each other in inspirational aplomb. At such an event, even occasional moments of mild pessimism – or a rogue speaker who uses their fifteen minutes to frown more than smile – serve to only heighten the overall buoyant tenor of the gathering. From TED talks to the launching of the latest gizmo by a major company, the person on a stage singing the praises of technology has become a familiar cultural motif. And it is a trope that was alive and drawing from that well at the 2016 Personal Democracy Forum, the theme of which was “The Tech We Need.”

    Over the course of two days some three-dozen speakers and a similar number of panelists gathered before a rapt and appreciative audience to opine on the ways in which technology is changing democracy. The commentary largely aligned with the sanguine spirit animating the founding manifesto of the Personal Democracy Forum (PDF) – which frames the Internet as a potent force set to dramatically remake and revitalize democratic society. As the manifesto boldly decrees, “the realization of ‘Personal Democracy,’ where everyone is a full participant, is coming” and it is coming thanks to the Internet. The two days of PDF 2016 consisted of a steady flow of intelligent, highly renowned, well-meaning speakers expounding on the conference’s theme to an audience largely made up of bright caring individuals committed to answering that call. To attend an event like PDF and not feel moved, uplifted or inspired by the speakers would be a testament to an empathic failing. How can one not be moved? But when one’s eyes are glistening and one’s heart is pounding, it is worth being wary of the ideology in which one is being baptized.

    To critique an event like the Personal Democracy Forum – particularly after having actually attended it – is something of a challenge. After all, the event is truly filled with genuine people delivering (mostly) inspiring talks. There is something contagious about optimism, especially when it presents itself as measured optimism. And besides, who wants to be the jerk grousing and grumbling after an activist has just earned a standing ovation? Who wants to cross their arms and scoff that the criticism being offered is precisely the type that serves to shore up the system being criticized? Pessimists don’t often find themselves invited to the after party. Thus, insofar as the following comments – and those that have already been made – may seem prickly and pessimistic, they are not meant as an attack upon any particular speaker or attendee. Many of those speakers truly were inspiring (and that is meant sincerely), many speakers really did deliver important comments (that is also meant sincerely), and the goal here is not to question the intentions of PDF’s founders or organizers. Yet prominent events like PDF are integral to shaping the societal discussions surrounding technology – and therefore it is essential to be willing to go beyond the inspirational moments and ask: what is really being said here?

    For events like PDF do serve to advance an ideology, whether they like it or not. And it is worth considering what that ideology means, even if it forces one to wipe the smile from one’s lips. And when it comes to PDF much of its ideology can be discovered simply by dissecting the theme for the 2016 conference: “The Tech We Need.”

    “The Tech”

    What do you (yes, you) think of when you hear the word technology? After all, it is a term that encompasses a great deal, which is one of the reasons why Leo Marx (1997) was compelled to describe technology as a “hazardous concept.” Eyeglasses are technology, but so too is Google Glass. A hammer is technology, and so too is a smart phone. In other words, when somebody says “technology is X” or “technology does Q” or “technology will result in R” it is worth pondering whether technology really is, does or results in those things, or if what is being discussed is really a particular type of technology in a particular context. Granted, technology remains a useful term; it is certainly a convenient shorthand (one which very many people [including me] are guilty of occasionally deploying), but in throwing the term technology about so casually it is easy to obfuscate as much as one clarifies. At PDF it seemed as though a sentence was not complete unless it included a noun, a verb and the word technology – or “tech.” Yet “tech” at PDF almost always meant the Internet or a device linked to the Internet – and qualifying this by saying “almost” is perhaps overly generous.

    Thus the Internet (as such), web browsers, smart phones, VR, social networks, server farms, encryption, other social networks, apps, and websites all wound up being pleasantly melted together into “technology.” When “technology” encompasses so much a funny thing begins to happen – people speak effusively about “technology” and only name specific elements when they want to single something out for criticism. When technology is so all encompassing who can possibly criticize technology? And what would it mean to criticize technology when it isn’t clear what is actually meant by the term? Yes, yes, Facebook may be worthy of mockery and smart phones can be used for surveillance, but insofar as the discussion is not about the Internet but “technology,” on what grounds can one say: “this stuff is rubbish”? For even if it is clear that the term “technology” is being used in a way that focuses on the Internet, if one starts to seriously go after technology, then one will inevitably be confronted with the question “but aren’t hammers also technology?” In short, when a group talks about “the tech” but by “the tech” only means the Internet and the variety of devices tethered to it, what happens is that the Internet appears as being synonymous with technology. It isn’t just a branch or an example of technology, it is technology! Or to put this in sharper relief: at a conference about “the tech we need” held in the US in 2016 how can one avoid talking about the technology that is needed in the form of water pipes that don’t poison people? The answer: by making it so that the term “technology” does not apply to such things.

    The problem is that when “technology” is used to mean only one set of things it muddles the boundaries of what those things are, and what exists outside of them. And while it does this it allows people to confidently place trust in a big category, “technology,” whereas they would probably have been more circumspect if they were just being asked to place trust in smart phones. After all, “the Internet will save us” doesn’t have quite the same seductive sway as “technology will save us” – even if the belief is usually put more eloquently than that. When somebody says “technology will save us” people can think of things like solar panels and vaccines – even if the only technology actually being discussed is the Internet. Here, though, it is also vital to approach the question of “the tech” with some historically grounded modesty in mind. For the belief that technology is changing the world and fundamentally altering democracy is nothing new. The history of technology (as an academic field) is filled with texts describing how a new tool was perceived as changing everything – from the compass to the telegraph to the phonograph to the locomotive to the [insert whatever piece of technology you (the reader) can think of]. And such inventions were often accompanied by an often earnest belief that they would change everything for the better! Claims that the Internet will save us invoke déjà vu for those with a familiarity with the history of technology. Carolyn Marvin’s masterful study When Old Technologies Were New (1988) examines the way in which early electrical communications methods were seen at the time of their introduction, and near the book’s end she writes:

    Predictions that strife would cease in a world of plenty created by electrical technology were clichés breathed by the influential with conviction. For impatient experts, centuries of war and struggle testified to the failure of political efforts to solve human problems. The cycle of resentment that fueled political history could perhaps be halted only in a world of electrical abundance, where greed could not impede distributive justice. (206)

    Switch out the words “electrical technology” for “Internet technology” and the above sentences could apply to the present (and the PDF forum) without further alterations. After all, PDF was certainly a gathering of “the influential” and of “impatient experts.”

    And whenever “tech” and democracy are invoked in the same sentence it is worth pondering whether the tech is itself democratic, or whether it is simply being claimed that the tech can be used for democratic purposes. Lewis Mumford wrote at length about the difference between what he termed “democratic” and “authoritarian” technics – in his estimation “democratic” systems were small in scale and manageable by individuals, whereas “authoritarian” technics represented massive systems of interlocking elements where no individual could truly assert control. While Mumford did not live to write about the Internet, his work makes it very clear that he did not consider computer technologies to belong to the “democratic” lineage. Thus, to follow from Mumford, the Internet appears as a wonderful example of an “authoritarian” technic (it is massive, environmentally destructive, turns users into cogs, runs on surveillance, cannot be controlled locally, etc.) – what PDF argues is that this authoritarian technology can be used democratically. There is an interesting argument there, and it is one with some merit. Yet such a discussion cannot even occur in the confusing morass that one finds oneself in when “the tech” just means the Internet.

    Indeed, by meaning “the Internet” but saying “the tech” groups like PDF (consciously or not) pull a bait and switch whereby a genuine consideration of what “the tech we need” simply becomes a consideration of “the Internet we need.”

    “We”

    Attendees at the PDF conference received a conference booklet upon registration; it featured introductory remarks, a code of conduct, advertisements from sponsors, and a schedule. It also featured a fantastically jarring joke created through the wonders of (perhaps accidental) juxtaposition; to appreciate the joke, however, one needed to open the booklet so as to see the front and back covers simultaneously. Here is what that looked like:

    Personal Democracy Forum (2016)

    Get it?

    Hilarious.

    The cover says “The Tech We Need” emblazoned in blue over the faces of the conference speakers, and the back is an advertisement for Microsoft stating: “the future is what we make it.” One almost hopes that the layout was intentional. For, who the heck is the “we” being discussed? Is it the same “we”? Are you included in that “we”? And this is a question that can be asked of each of those covers independently of the other: when PDF says “we” who is included and who is excluded? When Microsoft says “we” who is included and who is excluded? Of course, this gets muddled even more when you consider that Microsoft was the “presenting sponsor” for PDF and that many of the speakers at PDF have funding ties to Microsoft. The reason this is so darkly humorous is that there is certainly an argument to be made that “the tech we need” has no place for mega-corporations like Microsoft, while at the same time the booklet assures that “the future is what we [Microsoft] make it.” In short: the future is what corporations like Microsoft will make it…which might be very different from the kind of tech we need.

    In considering the “we” of PDF it is worth restating that this is a gathering of well-meaning individuals who largely seem to want to approach the idea of “we” with as much inclusivity as possible. Yet defining a “we” is always fraught, speaking for a “we” is always dangerous, and insofar as one can think of PDF with any kind of “we” (or “us”) in mind, the only version of the group that really emerges is one that leans heavily towards the group actually present at the event. And while one can certainly speak about the level (or lack) of diversity at the PDF event, the “we” who came together at PDF is not particularly representative of the world. This was also brought into interesting relief in some other amusing ways: throughout the event one heard numerous variations of the comment “we all have smart phones” – but this did not even really capture the “we” of PDF. While walking down the stairs to a session one day I clearly saw a man (wearing a conference attendee badge) fiddling with a flip-phone – I suppose he wasn’t included in the “we” of “we all have smart phones.” But I digress.

    One encountered further issues with the “we” when it came to the political content of the forum. While the booklet states, and the hosts repeated over and over, that the event was “non-partisan,” such a descriptor is pretty laughable. Those taking to the stage were a procession of people who had cut their teeth working for MoveOn, and the activists represented continually self-identified as hailing from the progressive end of the spectrum. The token conservative speaker who stepped onto the stage even made a self-deprecating joke in which she recognized that she was one of only a handful (if that) of Republicans present. So, again, who is missing from this “we”? One can be a committed leftist and genuinely believe that a figure like Donald Trump is a xenophobic demagogue – and still recognize that some of his supporters might have offered a very interesting perspective to the PDF conversation. After all, the Internet (“the tech”) has certainly been used by movements on the right as well – and used quite effectively at that. But this part of a national “we” was conspicuously absent from the forum even if they are not nearly so absent from Twitter, Facebook, or the population of people owning smart phones. Again, it is in no way, shape, or form an endorsement of anything that Trump has said to point out that when a forum is held to discuss the Internet and democracy, it is worth having the people you disagree with present.

    Another question of the “we” that is worth wrestling with revolves around the way in which events like PDF involve those who offer critical viewpoints. If, as is being argued here, PDF’s basic ideology is that the Internet (“the tech”) is improving people’s lives and will continue to do so (leading towards “personal democracy”) – it is important to note that PDF welcomed several speakers who offered accounts of some of the shortcomings of the Internet. Figures including Sherry Turkle, Kentaro Toyama, Safiya Noble, Kate Crawford, danah boyd, and Douglas Rushkoff all took the stage to deliver some critical points of view – and yet in incorporating such voices into the “we” what occurs is that these critiques function less as genuine retorts and more as safety valves that just blow off a bit of steam. Having Sherry Turkle (not to pick on her) vocally doubt the empathetic potential of the Internet just allows the next speaker (and countless conference attendees) to say “well, I certainly don’t agree with Sherry Turkle.” Indeed, one of the best ways to inoculate yourself against the charge of unthinking optimism is periodically to turn the microphone over to a critic. But perhaps the most important things such critics say are the qualifications they attach to their comments – thus Turkle says “I’m not anti-technology,” Toyama disparages Facebook only to immediately add “I love Facebook,” and fears regarding the threat posed by AI get laughed off as the paranoia of today’s “apex predators” (rich white men) worried that they will lose their spot at the top of the food chain. The environmental costs of the cloud are raised, the biased nature of algorithms is exposed – but these points are couched against a backdrop that says to the assembled technologists “do better” not “the Internet is a corporately controlled surveillance mall, and it’s overrated.” The heresies that are permitted are those that point out the rough edges that need to be rounded so that the pill can be swallowed. To return to the previous paragraph, this is not to say that PDF needs to invite John Zerzan or Chellis Glendinning to speak…but one thing that would certainly expose the weaknesses of the PDF “we” would be to solicit viewpoints that genuinely come from outside of that “we.” Granted, PDF is more TED talk than FRED talk.

    And of course, and most importantly, one must think of the “we” that goes totally unheard. Yes, comments were made about the environmental cost of the cloud, and passing phrases acknowledged mining – but PDF’s “we” seems mainly to refer to those who use the Internet and Internet-connected devices. Miners, those assembling high-tech devices, e-waste recyclers, and the other victims of those processes are only a hazy phantom presence. They are mentioned in passing, but never fully included in the “we.” PDF’s “the tech we need” is for a “we” that loves the Internet and just wants it to be even better and perhaps a bit nicer, while Microsoft’s “we” in “the future is what we make it” is a “we” that is committed to staying profitable. But amidst such statements there is an even larger group saying: “we are not being included.” That unheard “we” is the same “we” of the classic IWW song “we have fed you all for a thousand years” (Green et al. 2016). And as the second line of that song rings out: “and you hail us still unfed.”

    “Need”

    When one looks out upon the world it is almost impossible not to be struck by how much is needed. People need homes, people need – not just to be tolerated – but to be accepted, people need food, people need peace, people need stability, people need the ability to love without being subject to oppression, people need to be free from bigotry and xenophobia, people need…this list could continue with a litany of despair until we all don sackcloth. But do people need VR headsets? Do people need Facebook or Twitter? Do those in possession of still-functioning high-tech devices need to trade them in every eighteen months? Of course it is important to note that technology does have an important role in meeting people’s needs – after all “shelter” refers to all sorts of technology. Yet, when PDF talks about “the tech we need” the “need” is shaded by what is meant by “the tech,” and as was previously discussed that really means “the Internet.” Therefore it is fair to ask, do people really “need” an iPhone with a slightly larger screen? Do people really need Uber? Do people really need to be able to download five million songs in thirty seconds? While human history is a tale of horror it requires a funny kind of simplistic hubris to think that World War II could have been prevented if only everybody had been connected on Facebook (to be fair, nobody at PDF was making this argument). Are today’s “needs” (and they are great) really a result of a lack of technology? It seems that we already have much of the tech that is required to meet today’s needs, and we don’t even require new ways to distribute it. Or, to put it clearly at the risk of being grotesque: people in your city are not currently going hungry because they lack the proper app.

    The question of “need” flows from both the notion of “the tech” and “we” – and as was previously mentioned it would be easy to put forth a compelling argument that “the tech we need” involves water pipes that don’t poison people with lead, but such an argument is not made when “the tech” means the Internet and when the “we” has already reached the top of Maslow’s hierarchy of needs. If one takes a more expansive view of “the tech” and “we,” then the range of what is needed changes accordingly. This issue – the way “tech,” “we,” and “need” intersect – is hardly a new concern. It is what prompted Ivan Illich (1973) to write, in Tools for Conviviality, that:

    People need new tools to work with rather than tools that ‘work’ for them. They need technology to make the most of the energy and imagination each has, rather than more well-programmed energy slaves. (10)

    Granted, it is certainly fair to retort “but who is the ‘we’ referred to by Illich” or “why can’t the Internet be the type of tool that Illich is writing about” – but here Illich’s response would be in line with the earlier reference to Mumford. Namely: accusations of technological determinism aside, maybe it’s fair to say that some technologies are oversold, and maybe the occasional emphasis on the way that the Internet helps activists serves as a patina that distracts from what is ultimately an environmentally destructive surveillance system. Is the person tethered to their smart phone being served by that device – or are they serving it? Or, to allow Illich to reply with his own words:

    As the power of machines increases, the role of persons more and more decreases to that of mere consumers. (11)

    Mindfulness apps, cameras on phones that can be used to film oppression, new ways of downloading music, programs for raising money online, platforms for connecting people on a political campaign – the user is empowered as a citizen but this empowerment tends to involve needing the proper apps. And therefore that citizen needs the proper device to run that app, and a good wi-fi connection, and… the list goes on. Under the ideology captured in PDF’s “the tech we need,” to participate in democracy becomes bound up with “to consume the latest in Internet innovation.” Every need can be met, provided that it is the type of need that the Internet can meet. Thus the old canard “to the person with a hammer every problem looks like a nail” finds its modern equivalent in “to the person with a smart phone and a good wi-fi connection, every problem looks like one that can be solved by using the Internet.” But as for needs? Freedom from xenophobia and oppression are real needs – undoubtedly – but the Internet has done a great deal to disseminate xenophobia and prop up oppressive regimes. Continuing to double down on the Internet seems like doing the same thing “we” have been doing and expecting different results because finally there’s an “app for that!”

    It is, again, quite clear that those assembled at PDF came together with well-meaning attitudes, but as Simone Weil (2010) put it:

    Intentions, by themselves, are not of any great importance, save when their aim is directly evil, for to do evil the necessary means are always within easy reach. But good intentions only count when accompanied by the corresponding means for putting them into effect. (180)

    The ideology present at PDF emphasizes that the Internet is precisely “the means” for the realization of its attendees’ good intentions. And those who took to the stage spoke rousingly of using Facebook, Twitter, smart phones, and new apps for all manner of positive effects – but hanging in the background (sometimes more clearly than at other times) is the fact that these systems also track their users’ every move and can be used just as easily by those with very different ideas as to what “positive effects” look like. The issue of “need” is therefore ultimately a matter not simply of need but of “ends” – yet in framing things in terms of “the tech we need” what is missed is the more difficult question of which “ends” we seek. Instead “the tech we need” subtly shifts the discussion towards one of “means.” But, as Jacques Ellul recognized, the emphasis on means – especially technological ones – can just serve to confuse the discussion of ends. As he wrote:

    It must always be stressed that our civilization is one of means…the means determine the ends, by assigning us ends that can be attained and eliminating those considered unrealistic because our means do not correspond to them. At the same time, the means corrupt the ends. We live at the opposite end of the formula that ‘the ends justify the means.’ We should understand that our enormous present means shape the ends we pursue. (Ellul 2004, 238)

    The Internet and the raft of devices and platforms associated with it are a set of “enormous present means” – and in celebrating these “means” the ends begin to vanish. It ceases to be a situation where the Internet is the mean to a particular end, and instead the Internet becomes the means by which one continues to use the Internet so as to correct the current problems with the Internet so that the Internet can finally achieve the… it is a snake eating its own tail.

    And its own tale.

    Conclusion: The New York Ideology

    In 1995, Richard Barbrook and Andy Cameron penned an influential article describing what they called “The Californian Ideology,” which they characterized as

    promiscuously combin[ing] the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies. This amalgamation of opposites has been achieved through a profound faith in the emancipatory potential of the new information technologies. In the digital utopia, everybody will be both hip and rich. (Barbrook and Cameron 2001, 364)

    As the placing of a state’s name in the title of the ideology suggests, Barbrook and Cameron were setting out to describe the viewpoint underlying the firms that were then nascent in Silicon Valley. They sought to describe the mixture of hip futurism and libertarian politics that worked wonderfully in the boardroom, even if there was now somebody in the boardroom wearing a Hawaiian print shirt – or perhaps jeans and a hoodie. As companies like Google and Facebook have grown, the “Californian Ideology” has been disseminated widely, and though such companies periodically issued proclamations about not being evil and claimed that connecting the world was their goal, they maintained their utopian confidence in the “independence of cyberspace” while directing a distasteful gaze towards the “dinosaurs” of representative democracy that would dare to question their zeal. And though it is a more recent player in the game, one is hard-pressed to find a better example than Uber of the fact that this ideology is alive and well.

    The Personal Democracy Forum is not advancing the Californian Ideology. And though the event may have featured a speaker who suggested that the assembled “we” think of the “founding fathers” as start-up founders, the forum continually returned to the questions of democracy. While the Personal Democracy Forum shares Silicon Valley startups’ “faith in the emancipatory potential of the new information technologies,” it seems less “free-wheeling” and more skeptical of “entrepreneurial zeal.” In other words, whereas Barbrook and Cameron spoke of “The Californian Ideology,” what PDF makes clear is that there is also a “New York Ideology,” whose hallmark is an embrace of the positive potential of new information technologies tempered by the belief that such potential can best be reached by taming the excesses of unregulated capitalism. Where the Californian Ideology says “libertarian” the New York Ideology says “liberation.” Where the Californian Ideology celebrates capital the New York Ideology celebrates the power found in a high-tech enhanced capitol. The New York Ideology balances the excessive optimism of the Californian Ideology by acknowledging the existence of criticism, and proceeds to neutralize this criticism by making it part and parcel of the celebration of the Internet’s potential. The New York Ideology seeks to correct the hubris of the Californian Ideology by pointing out that it is precisely this hubris that turns many away from the faith in the “emancipatory potential.” If the Californian Ideology is broadcast from the stage at the newest product unveiling or celebratory conference, then the New York Ideology is disseminated from conferences like PDF and the occasional skeptical TED talk. The New York Ideology may be preferable to the Californian Ideology in a thousand ways – but ultimately it is the ideology that manifests itself in the “we” one encounters in the slogan “the tech we need.”

    Or, to put it simply, whereas the Californian Ideology is “wealth meaning,” the New York Ideology is “well-meaning.”

    Of course, it is odd and unfair to speak of either ideology as “Californian” or “New York.” California is filled with Californians who do not share in that ideology, and New York is filled with New Yorkers who do not share in that ideology either. Yet to dub what one encounters at PDF “The New York Ideology” is to indicate the way in which current discussions around the Internet are being framed not solely by “The Californian Ideology” but also by a parallel position wherein faith in Internet-enabled solutions puts aside its libertarian sneer to adopt a democratic smile. One could just as easily call the New York Ideology the “Tech On Stage Ideology” or the “Civic Tech Ideology” – perhaps it would be better to refer to the Californian Ideology as the SV Ideology (Silicon Valley) and the New York Ideology as the CV Ideology (civic tech). But if the Californian Ideology refers to the tech campus in Silicon Valley, then the New York Ideology refers to the foundation based in New York – one that may very well be getting much of its funding from the corporations that call Silicon Valley home. While Uber sticks with the Californian Ideology, companies like Facebook have begun transitioning to the New York Ideology so that they can have their panoptic technology and their playgrounds too. Meanwhile, new tech companies emerging in New York (like Kickstarter and Etsy) make positive proclamations about ethics and democracy while making it seem that ethics and democracy are just more consumption choices one picks from the list of downloadable apps.

    The Personal Democracy Forum is a fascinating event. It is filled with intelligent individuals who speak of democracy with unimpeachable sincerity, and activists who really have been able to use the Internet to advance their causes. But despite all of this, the ideological emphasis on “the tech we need” remains based upon a quizzical notion of “need,” a problematic concept of “we,” and a reductive definition of “tech.” For statements like “the tech we need” are not value neutral – and even if the surface ethics are moving and inspirational, sometimes a problematic ideology is most easily disseminated when it takes care to dispense with ideologues. And though the New York Ideology is much more subtle than the Californian Ideology – and makes space for some critical voices – it remains a vehicle for disseminating an optimistic faith that a technologically enhanced Moses shall lead us into the high-tech promised land.

    The 2016 Personal Democracy Forum put forth an inspirational and moving vision of “the tech we need.”

    But when it comes to promises of technological salvation, isn’t it about time that “we” stopped getting our hopes up?

    Coda

    I confess, I am hardly free of my own ideological biases. And I recognize that everything written here may simply be dismissed by those who find it hypocritical that I composed such remarks on a computer and then posted them online. But I would say that the more we find ourselves using technology the more careful we must be that we do not allow ourselves to be used by that technology.

    And thus, I shall simply conclude by once more citing a dead, but prescient, pessimist:

    I have no illusions that my arguments will convince anyone. (Ellul 1994, 248)

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently working towards a PhD in the History and Sociology of Science department at the University of Pennsylvania. His research areas include media refusal and resistance to technology, ideologies that develop in response to technological change, and the ways in which technology factors into ethical philosophy – particularly with regard to the ways in which Jewish philosophers have written about ethics and technology. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, where an earlier version of this post first appeared, and is a frequent contributor to The b2 Review Digital Studies section.

    _____

    Works Cited

    • Barbrook, Richard and Andy Cameron. 2001. “The Californian Ideology.” In Peter Ludlow, ed., Crypto Anarchy, Cyberstates and Pirate Utopias. Cambridge: MIT Press. 363-387.
    • Ellul, Jacques. 2004. The Political Illusion. Eugene, OR: Wipf and Stock.
    • Ellul, Jacques. 1994. A Critique of the New Commonplaces. Eugene, OR: Wipf and Stock.
    • Green, Archie, David Roediger, Franklin Rosemont, and Salvatore Salerno. 2016. The Big Red Songbook: 250+ IWW Songs! Oakland, CA: PM Press.
    • Illich, Ivan. 1973. Tools for Conviviality. New York: Harper and Row.
    • Marvin, Carolyn. 1988. When Old Technologies Were New: Thinking About Electric Communication in the Late Nineteenth Century. New York: Oxford University Press.
    • Marx, Leo. 1997. “‘Technology’: The Emergence of a Hazardous Concept.” Social Research 64:3 (Fall). 965-988.
    • Mumford, Lewis. 1964. “Authoritarian and Democratic Technics.” Technology and Culture 5:1 (Winter). 1-8.
    • Weil, Simone. 2010. The Need for Roots. London: Routledge.
  • Bradley J. Fest – The Function of Videogame Criticism

    Bradley J. Fest – The Function of Videogame Criticism

    a review of Ian Bogost, How to Talk about Videogames (University of Minnesota Press, 2015)

    by Bradley J. Fest

    ~

    Over the past two decades or so, the study of videogames has emerged as a rigorous, exciting, and transforming field. During this time there have been a few notable trends in game studies (which is generally the name applied to the study of video and computer games). The first wave, beginning roughly in the mid-1990s, was characterized by wide-ranging debates between scholars and players about what they were actually studying, what aspects of videogames were most fundamental to the medium.[1] Like arguments about whether editing or mise-en-scène was more crucial to the meaning-making of film, the early, sometimes heated conversations in the field were primarily concerned with questions of form. Scholars debated between two perspectives known as narratology and ludology, and asked whether narrative or play was more theoretically important for understanding what makes videogames unique.[2] By the middle of the 2000s, however, this debate appeared to be settled (as perhaps ultimately unproductive and distracting—i.e., obviously both narrative and play are important). Over the past decade, a second wave of scholars has emerged who have moved on to more technical, theoretical concerns, on the one hand, and more social and political issues, on the other (frequently at the same time). Writers such as Patrick Crogan, Nick Dyer-Witheford, Alexander R. Galloway, Patrick Jagoda, Lisa Nakamura, Greig de Peuter, Adrienne Shaw, McKenzie Wark, and many, many others write about how issues such as control and empire, race and class, gender and sexuality, labor and gamification, networks and the national security state, action and procedure can pertain to videogames.[3] Indeed, from a wide sampling of contemporary writing about games, it appears that the old anxieties regarding the seriousness of its object have been put to rest. Of course games are important. They are becoming a dominant cultural medium; they make billions of dollars; they are important political allegories for life in the twenty-first century; they are transforming social space along with labor practices; and, after what many consider a renaissance in independent game development over the past decade, some of them are becoming quite good.

    Ian Bogost has been one of the most prominent voices in this second wave of game criticism. A media scholar, game designer, philosopher, historian, and professor of interactive computing at the Georgia Institute of Technology, Bogost has published a number of influential books. His first, Unit Operations: An Approach to Videogame Criticism (2006), places videogames within a broader theoretical framework of comparative media studies, emphasizing that games deserve to be approached on their own terms, not only because they are worthy of attention in and of themselves but also because of what they can show us about the ways other media operate. Bogost argues that “any medium—poetic, literary, cinematic, computational—can be read as a configurative system, an arrangement of discrete, interlocking units of expressive meaning. I call these general instances of procedural expression, unit operations” (2006, 9). His second book, Persuasive Games: The Expressive Power of Videogames (2007), extends his emphasis on the material, discrete processes of games, arguing that they can and do make arguments; that is, games are rhetorical, and they are rhetorical by virtue of what they and their operator can do, their procedures: games make arguments through “procedural rhetoric.”[4] The publication of Persuasive Games in particular—which he promoted with an appearance on The Colbert Report (2005–14)—saw Bogost emerge as a powerful voice in the broad cohort of second wave writers and scholars.

    But I feel that the publication of Bogost’s most recent book, How to Talk about Videogames (2015), might very well end up signaling the beginning of a third phase of videogame criticism. If the first task of game criticism was to formally define its object, and the second wave of game studies involved asking what games can and do say about the world, the third phase might see critics reflecting on their own processes and procedures, thinking, not necessarily about what videogames are and do, but about what videogame criticism is and does. How to Talk about Videogames is a book that frequently poses the (now quite old) question: what is the function of criticism at the present time? In an industry dominated by multinational media megaconglomerates, what should the role of (academic) game criticism be? What can a handful of researchers and scholars possibly do or say in the face of such a massive, implacable, profit-driven industry, where every announcement about future games further spurs its rabid fan base of slobbering, ravening hordes to spend hundreds of dollars and thousands of hours consuming a form known for its spectacular violence, ubiquitous misogyny, and myopic tribalism? What is the point of writing about games when the videogame industry appears to happily carry on as if nothing is being said at all, impervious to any conversation that people may be having about its products beyond what “fans” demand?

    To read the introduction and conclusion of Bogost’s most recent book, one might think that, suggestions about their viability aside, both the videogame industry and the critical writing surrounding it are in serious crisis, and that the matter of the cultural status of the videogame has hardly been put to rest. Given that Bogost is a scholar, critic, and designer who has been fairly consistent in positively exploring what digital games can do, what they can uniquely accomplish as a process-based medium, it is striking, at least to this reviewer, that he begins by anxiously admitting,

    whenever I write criticism of videogames, someone strongly invested in games as a hobby always asks the question “is this parody?” as if only a miscreant or a comedian or a psychopath would bother to invest the time and deliberateness in even thinking, let alone writing about videogames with the seriousness that random, anonymous Internet users have already used to write about toasters, let alone deliberate intellectuals about film or literature! (Bogost 2015, xi–xii)

    Bogost calls attention to the status of his critical endeavor in a number of places in How to Talk about Videogames. The book shows him engaged in the untimely activity of quietly, implicitly assessing his own body of work, approaching his critical task with trepidation. In a variety of moments from the opening and closing of the book, games and criticism are put into serious question. Videogames are puerile, an “empty diversion” (182), and without value; “games are grotesque. . . . [they] are gross, revolting, heaps of arbitrary anguish” (1); “games are stupid” (9); “that there could be a game criticism [seems] unlikely and even preposterous” (181). In How to Talk about Videogames, Bogost, at least in some ways, is giving up his previous fight over whether or not videogames are serious aesthetic objects worthy of the same kind of hermeneutic attention given to more established art forms.[5] If games are predominantly treated as “perversion, excess” (183), a symptom of “permanent adolescence” (180), as unserious, wasteful, unproductive, violently sadistic entertainments—perhaps there is a reason. How to Talk about Videogames shows Bogost turning an intellectual corner toward a decidedly ironic sense of his role as a critic and the worthiness of his critical object.

    Compare Bogost’s current pessimism with the optimism of his previous volume, How to Do Things with Videogames (2011), to which How to Talk about Videogames functions as a kind of sequel or companion. In this earlier book, he is rather more affirmative about the future of the videogame industry (and, by proxy, videogame criticism):

    What if we allowed that videogames have many possible goals and purposes, each of which couples with many possible aesthetics and designs to create many possible player experiences, none of which bears any necessary relationship to the commercial videogame industry as we currently know it. The more games can do, the more the general public will become accepting of, and interested in, the medium in general. (Bogost 2011, 153)

    2011’s How to Do Things with Videogames aims to address what previous popular and scholarly approaches to videogames had ignored in order to show all the other ways that videogames operate, what they are capable of beyond mere mimetic simulation or entertaining distraction, and how game criticism might allow their audiences to expand beyond the province of the “gamer” to mirror the diversified audiences of other media. Individual chapters are devoted to how videogames produce empathy and inspire reverence; they can be vehicles for electioneering and promotion; games can relax, titillate, and habituate; they can be work. Practicing what he calls “media microecology,” a critical method that “seeks to reveal the impact of a medium’s properties on society . . . through a more specialized, focused attention . . . digging deep into one dark, unexplored corner of a media ecosystem” (2011, 7), Bogost argues that game criticism should be attentive to more than simply narrative or play. The debates that dominated the early days of critical game studies, in this regard, only account for a rather limited view of what games can do. Appearing at a time when many were arguing that the medium was beginning to reach aesthetic maturity, Bogost’s 2011 book sounds a note of hope and promise for the future of game studies and the many unexplored possibilities for game design.


    I cannot really overstate, however, the ways in which How to Talk about Videogames, published four years later, shows Bogost changing tack, questioning his entire enterprise.[6] Even with the appearance of such a serious, well-received game as Gone Home (2013)—to which he devotes a particularly scathing chapter about what the celebration of an ostensibly adolescent game tells us about contemporaneity—this is a book that repeatedly emphasizes the cultural ghetto in which videogames reside. Criticism devoted exclusively to this form risks being “subsistence criticism. . . . God save us from a future of game critics, gnawing on scraps like the zombies that fester in our objects of study” (188). Despite previous claims about videogames “[helping] us expose and interrogate the ways we engage the world in general, not just the ways that computational systems structure or limit that experience” (Bogost 2006, 40), How to Talk about Videogames is, at first glance, a book that raises the question not only of how videogames should be talked about but of whether they have anything to say in the first place.

    But it is difficult to gauge the seriousness of Bogost’s skepticism and reluctance given a book filled with twenty short essays of highly readable, informative, and often compelling criticism. (The disappointingly short essay, “The Blue Shell Is Everything That’s Wrong with America”—in which he writes: “This is the Blue Shell of collapse, the Blue Shell of financial hubris, the Blue Shell of the New Gilded Age” [26]—particularly stands out in the way that it reads an important if overlooked aspect of a popular game in terms of larger social issues.) For it is, really, somewhat unthinkable that someone who has written seven books on the subject would arrive at the conclusion that “videogames are a lot like toasters. . . . Like a toaster, a game is both appliance and hearth, both instrument and aesthetic, both gadget and fetish. It’s preposterous to do game criticism, like it’s preposterous to do toaster criticism” (ix and xii).[7] Bogost’s point here is rhetorical, erring on the side of hyperbole in order to emphasize how videogames are primarily process-based—that they work and function like toasters perhaps more than they affect and move like films or novels (a claim with which I imagine many would disagree), and that there is something preposterous in writing criticism about a process-based technology. A decade after emphasizing videogames’ procedurality in Unit Operations, this is a way for him to restate and reemphasize these important claims for the more popular audience intended for How to Talk about Videogames. Games involve actions, which make them different from other media that can be more passively absorbed. This is why videogames are often written about in reviews “full of technical details and thorough testing and final, definitive scores delivered on improbably precise numerical scales” (ix). Bogost is clear. He is not a reviewer. He is not assessing games’ ability to “satisfy our need for leisure [as] their only function.” He is a critic and the critic’s activity, even if his object resembles a toaster, is different.

    But though it is apparent why games might require a different kind of criticism than other media, what remains unclear is what Bogost believes the role of the critic ought to be. He says, contradicting the conclusion of How to Do Things with Videogames, that “criticism is not conducted to improve the work or the medium, to win over those who otherwise would turn up their noses at it. . . . Rather, it is conducted to get to the bottom of something, to grasp its form, context, function, meaning, and capacities” (xii). This seems like something of a mistake, and a mistake that ignores both the history of criticism and Bogost’s own practice as a critic. Yes, of course criticism should investigate its object, but even Matthew Arnold, who emphasized “disinterestedness . . . keeping aloof from . . . ‘the practical view of things,’” also understood that such an approach could establish “a current of fresh and true ideas” (Arnold 1993 [1864], 37 and 49). No matter how disinterested, criticism can change the ways that art and the world are conceived and thought about. Indeed, only a sentence later it is difficult to discern what precisely Bogost believes the function of videogame criticism to be if not for improving the work, the medium, the world, if not for establishing a current from which new ideas might emerge. He writes that criticism can “venture so far from ordinariness of a subject that the terrain underfoot gives way from manicured path to wilderness, so far that the words that we would spin tousle the hair of madness. And then, to preserve that wilderness and its madness, such that both the works and our reflections on them become imbricated with one another and carried forward into the future where others might find them anew” (xii; more on this in a moment). It is clear that Bogost understands the mode of the critic to be disinterested and objective, to answer the question “What is even going on here?” (x), but it remains unclear why such an activity would even be necessary or worthwhile, and indeed, there is enough in the book that points to criticism being a futile, unnecessary, parodic, parasitic, preposterous endeavor with no real purpose or outcome. In other words, he may say how to talk about videogames, but not why anyone would ever really want to do so.

    I have at least partially convinced myself that Bogost’s claims about videogames being more like toasters than other art forms, along with the statements above regarding the disreputable nature of videogames, are meant as rhetorical provocations, ironic salvos to inspire from others more interesting, rigorous, thoughtful, and complex critical writing, both of the popular and academic stripe. I also understand that, as he did in Unit Operations, Bogost balks at the idea of a critical practice wholly devoted to videogames alone: “the era of fields and disciplines ha[s] ended. The era of critical communities ha[s] ended. And the very idea of game criticism risks Balkanizing games writing from other writing, severing it from the rivers and fields that would sustain it” (187). But even given such an understanding, it is unclear who precisely is suggesting that videogame criticism should be a hermetically sealed niche cut off from the rest of the critical tradition. It is also unclear why videogame criticism is so preposterous, why writing it—even if a critic’s task is limited to getting “to the bottom of something”—is so divorced from the current of other works of cultural criticism. And finally, given what are, at the end of the day, some very good short essays on games that deserve a thoughtful readership, it is unclear why Bogost has framed his activity in such a negatively self-aware fashion.

    So, rather than pursue a discussion about the relative merits and faults of Bogost’s critical self-reflexivity, I think it worth asking what changed between his 2011 and 2015 books, what took him from being a cheerleader—albeit a reticent, tempered, and disinterested one—to questioning the very value of videogame criticism itself. Why does he change from thinking about the various possibilities for doing things with videogames to thinking that “entering a games retail outlet is a lot like entering a sex shop or a liquor store . . . game shops are still vaguely unseemly” (182)?[8] I suspect that such events as 2014’s Gamergate—when independent game designer Zoe Quinn, critic Anita Sarkeesian, and others were threatened and harassed for their feminist views—the generally execrable level of discourse found on internet comments pages, and the questionable cultural identity of the “gamer” account for some of Bogost’s malaise.[9] Indeed, most of the essays found in How to Talk about Videogames initially appeared online, largely in The Atlantic (where he is a contributing editor) and Gamasutra, and, I have to imagine, suffered for it in their comments sections. With this change in audience and platform, it seems to follow that the opening and closing of How to Talk about Videogames reflect a general exhaustion with the level of discourse from fans, companies, and internet trolls. How can criticism possibly thrive or have an impact in a community that so frequently demonstrates its intolerance and rage toward other modes of thinking and being that might upset its worldview and sense of cultural identity? How does one talk to those who will not listen?

    And if these questions perhaps sound particularly apt today—that the “gamer” might bear an awfully striking resemblance to other headline-grabbing individuals and groups dominating the public discussion in the months after the publication of Bogost’s book, namely Donald J. Trump and his supporters—they should. I agree with Bogost that it can be difficult to see the value of criticism at a time when many United States citizens appear, at least on the surface, to be actively choosing to be uncritical. (As Philip Mirowski argues, the promotion of “ignorance [is] the lynchpin in the neoliberal project” [2013, 96].) Given such a discursive landscape, what is the purpose of writing, even in Bogost’s admirably clear (yet at times maddeningly spare) prose, if no amount of stylistic precision or rhetorical complexity—let alone a mastery of basic facts—can influence one’s audience? How to Talk about Videogames is framed as a response to the anti-intellectual atmosphere of the middle of the second decade of the twenty-first century, and it is an understandably despairing one. Given the social fabric of life in the 2010s, it is not surprising that Bogost concludes that criticism has no role to play in improving the medium (or perhaps the world) beyond mere phenomenological encounter and description. In a time of vocally racist demagoguery, an era witnessing a rising tide of reactionary nationalism in the US and around the world, a period during which it often seems like no words of any kind can have any rhetorical effect at all—procedurally or otherwise—perhaps the best response is to be quiet. But I also think that this is to misunderstand the function of critical thought, regardless of what its object might be.

    To be sure, videogame creators have probably not yet produced a Citizen Kane (1941), and videogame criticism has not yet produced a work like Erich Auerbach’s Mimesis (1946). I am unconvinced, however, that such future accomplishments remain out of reach, that videogames are barred from profound aesthetic expression, and that writing about games is precluded from the heights attained by previous criticism simply because of some ill-defined aspect of the medium which prevents it from ever aspiring to anything beyond mere craft. Is a study of the Metal Gear series (1987–2015) similar to Roland Barthes’s S/Z (1970) really all that preposterous? Is Mario forever denied his own Samuel Johnson simply because he is composed of code rather than words? For if anything is unclear about Bogost’s book, it is what precisely prohibits videogames from having the effects and impacts of other art forms, why they are restricted to the realm of toasters, incapable of anything beyond adolescent poiesis. Indeed, Bogost’s informative and incisive discussion of Ms. Pac-Man (1981), his thought-provoking interpretation of Mountain (2014), and the many moments of accomplished criticism in his previous books—for example, his masterful discussion of the “figure of fascination” in Unit Operations—belie such claims.[10]

    Matthew Arnold once famously suggested that creativity and criticism were intimately linked, and I believe it might be worthwhile to remember this for the future of videogame criticism:

    It is the business of the critical power . . . “in all branches of knowledge, theology, philosophy, history, art, science, to see the object as in itself it really is.” Thus it tends, at last, to make an intellectual situation of which the creative power can profitably avail itself. It tends to establish an order of ideas, if not absolutely true, yet true by comparison with that which it displaces; to make the best ideas prevail. Presently these new ideas reach society, the touch of truth is the touch of life, and there is a stir and growth everywhere; out of this stir and growth come the creative epochs of literature. (Arnold 1993 [1864], 29)

    In other words, criticism has a vital role to play in the development of an art form, especially if an art form is experiencing contraction or stagnation. Whatever disagreements I might have with Arnold, I too believe that criticism and creativity are indissolubly linked, and further, that criticism has the power to shape and transform the world. Bogost says that “being a critic is not an enjoyable job . . . criticism is not pleasurable” (x). But I suspect that there may still be many who share Arnold’s view of criticism as a creative activity, and maybe the problem is not that videogame criticism is akin to preposterous toaster criticism, but that the function of videogame criticism at the present time is to expand its own sense of what it is doing, of what it is capable, of how and why it is written. When Bogost says he wants “words that . . . would . . . tousle the hair of madness,” why not write in such a fashion (Bogost’s controlled style rarely approaches madness), expanding criticism beyond mere phenomenological summary at best or zombified parasitism at worst? Consider, for instance, Jonathan Arac: “Criticism is literary writing that begins from previous literary writing. . . . There need not be a literary avant-garde for criticism to flourish; in some cases criticism itself plays a leading cultural role” (1989, 7). If we are to take seriously Bogost’s point about how the overwhelmingly positive reaction to Gone Home reveals the aesthetic and political impoverishment of the medium, then it is disappointing to see someone so well-positioned to take a leading cultural role in shaping the conversation about how videogames might change or transform surrender the field.

    Forget analogies. What if videogame criticism were to begin not from comparing games to toasters but from previous writing, from the history of criticism, from literature and theory, from theories of art and architecture and music, from rhetoric and communication, from poetry? For, given the complex mediations present in even the simplest games—i.e., games not only involve play and narrative, but raise concerns about mimesis, music, sound, spatiality, sociality, procedurality, interface effects, et cetera—it makes less and less sense to divorce or sequester games from other forms of cultural study or to think that videogames are so unique that game studies requires its own critical modality. If Bogost implores game critics not to limit themselves to a strictly bound, niche field uninformed by other spheres of social and cultural inquiry, if game studies is to go forward into a metacritical third wave where it can become interested in what makes videogames different from other forms and self-reflexively aware of the variety of established and interconnecting modes of cultural criticism from which the field can only benefit, then thinking about the function of criticism historically should guide how and why games are written about at the present time.

    Before concluding, I should also note that something else perhaps changed between 2011 and 2015, namely, Bogost’s alignment with the philosophical movements of speculative realism and object-oriented ontology. In 2012, he published Alien Phenomenology, or What It’s Like to Be a Thing, a book that picks up some of the more theoretical aspects of Unit Operations and draws upon the work of Graham Harman and other anti-correlationists to pursue a flat ontology, arguing that the job of the philosopher “is to amplify the black noise of objects to make the resonant frequencies of the stuffs inside them hum in credibly satisfying ways. Our job is to write the speculative fictions of their processes, their unit operations” (Bogost 2012, 34). Rather than continue pursuing an anthropocentric, correlationist philosophy that can only think about objects in relation to human consciousness, Bogost claims that “the answer to correlationism is not the rejection of any correlate but the acknowledgment of endless ones, all self-absorbed, obsessed by givenness rather than by turpitude” (78). He suggests that philosophy should extend the possibility of phenomenological encounter to all objects, to all units, in his parlance; let phenomenology be alien and weird; let toasters encounter tables, refrigerators, books, climate change, Pittsburgh, Higgs boson particles, the 2016 Electronic Entertainment Expo, bagels, et cetera.[11]

    Though this is not the venue to pursue a broader discussion of Bogost’s philosophical writing, I mention his speculative turn because it seems important for understanding his changing attitudes about criticism. That is, as Graham Harman’s 2012 essay, “The Well-Wrought Broken Hammer,” negatively demonstrates, it is unclear what a flat ontology has to say, if anything, about art, what such a philosophy can bring to critical, hermeneutic activity.[12] Indeed, regardless of where one stands with regard to object-oriented ontology and other speculative realisms, what these philosophies might offer to critics seems to be one of the more vexing and polarizing intellectual questions of our time. Hermeneutics may very well prove inescapably “correlationist,” and, indeed, no matter how disinterested, historical. It is an open question whether or not one can ground a coherent and worthwhile critical practice upon a flat ontology. I am tempted to suspect not. I also suspect that the current trends in continental philosophy, at the end of the day, may not really be interested in criticism as such, and perhaps that is not really such a big deal. Criticism, theory, and philosophy are not synonymous activities, nor must they be. (The question about criticism vis-à-vis alien phenomenology also appears to have motivated the Object Lessons series that Bogost edits.) This is all to say, rather than ground videogame criticism in what may very well turn out to be an intellectual fad whose possibilities for writing worthwhile criticism remain somewhat dubious, perhaps there are riper currents and streams—namely, the history of criticism—that can inform how we write about videogames. Criticism may be steered by keeping in view many polestars; let us not be overly swayed by what, for now, burns brightest. For an area of humanistic inquiry that is still very much emerging, it seems a mistake to assume it can and should be nothing more than toaster criticism.

    In this review I have purposefully made few claims about the state of videogames. This is partly because I do not feel that any more work needs to be done to justify writing about the medium. It is also partly because I feel that any broad statement about the form would be an overgeneralization at this point. There are too many games being made in too many places by too many different people for any all-encompassing statement about the state of videogame art to be all that coherent. (In this, I think Bogost’s sense of the need for a media microecology of videogames is still apropos.) But I will say that the state of videogame criticism—and, strangely enough, particularly the academic kind—is one of the few places where humanistic inquiry seems, at least to me, to be growing and expanding rather than contracting or ossifying. Such a generally positive and optimistic statement about a field of the humanities may not accord with present conceptions of academic activity (indeed, it might even be unfashionable!), which tend to despair about the humanities, and rightfully so. Admitting that some modes of criticism might be, at least in some ways, exhausted would be an important caveat, especially given how the past few years have seen a considerable amount of reflection about contemporary modes of academic criticism—e.g., Rita Felski’s The Limits of Critique (2015) or Eric Hayot’s “Academic Writing, I Love You. Really, I Do” (2014). But I think that, given how the anti-intellectual miasma that has long been present in US life has intensified in recent years, creeping into seemingly every discourse, one of the most useful functions of videogame criticism may very well be to enable reflection on the function of criticism itself in the twenty-first century. If one of the most prominent videogame critics is calling his activity “preposterous” and his object “adolescent,” this should be a cause for alarm, for such claims cannot help but perpetuate present views about the worthlessness of the humanities. So, I would like to modestly suggest that, rather than look to toasters and widgets to inform how we talk about videogames, let us look to critics and what they have written. Edward W. Said once wrote: “for in its essence the intellectual life—and I speak here mainly about the social sciences and the humanities—is about the freedom to be critical: criticism is intellectual life and, while the academic precinct contains a great deal in it, its spirit is intellectual and critical, and neither reverential nor patriotic” (1994, 11). If one can approach videogames—of all things!—in such a spirit, perhaps other spheres of human activity can rediscover their critical spirit as well.

    _____

    Bradley J. Fest will begin teaching writing this fall at Carnegie Mellon University. His work has appeared or is forthcoming in boundary 2 (including two interviews), Critical Quarterly, Critique, David Foster Wallace and “The Long Thing” (Bloomsbury, 2014), First Person Scholar, The Silence of Fallout (Cambridge Scholars, 2013), Studies in the Novel, and Wide Screen. He is also the author of a volume of poetry, The Rocking Chair (Blue Sketch, 2015), and a chapbook, “The Shape of Things,” which was selected as a finalist for the 2015 Tomaž Šalamun Prize and is forthcoming in Verse. Recent poems have appeared in Empty Mirror, PELT, PLINTH, TXTOBJX, and Small Po(r)tions. He previously reviewed Alexander R. Galloway’s The Interface Effect for The b2 Review “Digital Studies.”

    _____

    NOTES

    [1] On some of the first wave controversies, see Aarseth (2001).

    [2] For a representative sample of essays and books in the narratology versus ludology debate from the early days of academic videogame criticism, see Murray (1997 and 2004), Aarseth (1997, 2003, and 2004), Juul (2001), and Frasca (2003).

    [3] For representative texts, see Crogan (2011), Dyer-Witheford and de Peuter (2009), Galloway (2006a and 2006b), Jagoda (2013 and 2016), Nakamura (2009), Shaw (2014), and Wark (2007). My claims about the vitality of the field of game studies are largely a result of having read these and other critics. There have also been a handful of interesting “videogame memoirs” published recently. See Bissell (2010) and Clune (2015).

    [4] Bogost defines procedurality as follows: “Procedural representation takes a different form than written or spoken representation. Procedural representation explains processes with other processes. . . . [It] is a form of symbolic expression that uses process rather than language” (2007, 9). For my own discussion of proceduralism, particularly with regard to The Stanley Parable (2013) and postmodern metafiction, see Fest (forthcoming 2016).

    [5] For instance, in the concluding chapter of Unit Operations, Bogost writes powerfully and convincingly about the need for a comparative videogame criticism in conversation with other forms of cultural criticism, arguing that “a structural change in our thinking must take place for videogames to thrive, both commercially and culturally” (2006, 179). It appears that the lack of any structural change in the nonetheless wildly thriving—at least financially—videogame industry has given Bogost serious pause.

    [6] Indeed, at one point he even questions the justification for the book in the first place: “The truth is, a book like this one is doomed to relatively modest sales and an even more modest readership, despite the generous support of the university press that publishes it and despite the fact that I am fortunate enough to have a greater reach than the average game critic” (Bogost 2015, 185). It is unclear why the limited reach of his writing might be so worrisome to Bogost given that, historically, the audience for, say, poetry criticism has never been all that large.

    [7] In addition to those previously mentioned, Bogost has also published Racing the Beam: The Atari Video Computer System (2009) and, with Simon Ferrari and Bobby Schweizer, Newsgames: Journalism at Play (2010). Also forthcoming is Play Anything: The Pleasure of Limits, the Uses of Boredom, and the Secret of Games (2016).

    [8] This is, to be sure, a somewhat confusing point. Are not record stores, book stores, and video stores (if such things still exist), along with tea shops, shoe stores, and clothing stores “retail establishment[s] devoted to a singular practice” (Bogost 2015, 182–83)? Are all such establishments unseemly because of the same logic? What makes a game store any different?

    [9] For a brief overview of Gamergate, see Wingfield (2014). For a more detailed discussion of both the cultural and technological underpinnings of Gamergate, with a particular emphasis on the relationship between the algorithmic governance of sites such as Reddit or 4chan and online misogyny and harassment, see Massanari’s (2015) important essay. For links to a number of other articles and essays on gaming and feminism, see Ligman (2014) and The New Inquiry (2014). For essays about contemporary “gamer” culture, see Williams (2014) and Frase (2014). On gamers, Bogost writes in a chapter titled “The End of Gamers” from his previous book: “as videogames broaden in appeal, being a ‘gamer’ will actually become less common, if being a gamer means consuming games as one’s primary media diet or identifying with videogames as a primary part of one’s identity” (2011, 154).

    [10] See Bogost (2006, 73–89). Also, to be fair, Bogost devotes a paragraph of the introduction of How to Talk about Videogames to the considerable affective properties of videogames, but concludes the paragraph by saying that games are “Wagnerian Gesamtkunstwerk-flavored chewing gum” (Bogost 2015, ix), which, I feel, considerably undercuts whatever aesthetic value he had just ascribed to them.

    [11] In Alien Phenomenology Bogost calls such lists “Latour litanies” (2012, 38) and discusses this stylistic aspect of object-oriented ontology at some length in the chapter, “Ontography” (35–59).

    [12] See Harman (2012). Bogost addresses such concerns in the conclusion of Alien Phenomenology, responding to criticism about his study of the Atari 2600: “The platform studies project is an example of alien phenomenology. Yet our efforts to draw attention to hardware and software objects have been met with myriad accusations of human erasure: technological determinism most frequently, but many other fears and outrages about ‘ignoring’ or ‘conflating’ or ‘reducing,’ or otherwise doing violence to ‘the cultural aspects’ of things. This is a myth” (2012, 132).


    WORKS CITED

    • Aarseth, Espen. 1997. Cybertext: Perspectives on Ergodic Literature. Baltimore: Johns Hopkins University Press.
    • ———. 2001. “Computer Game Studies, Year One.” Game Studies 1, no. 1. http://gamestudies.org/0101/editorial.html.
    • ———. 2003. “Playing Research: Methodological Approaches to Game Analysis.” Game Approaches: Papers from spilforskning.dk Conference, August 28–29. http://hypertext.rmit.edu.au/dac/papers/Aarseth.pdf.
    • ———. 2004. “Genre Trouble: Narrativism and the Art of Simulation.” In First Person: New Media as Story, Performance, and Game, edited by Noah Wardrip-Fruin and Pat Harrigan, 45–55. Cambridge, MA: MIT Press.
    • Arac, Jonathan. 1989. Critical Genealogies: Historical Situations for Postmodern Literary Studies. New York: Columbia University Press.
    • Arnold, Matthew. 1993 (1864). “The Function of Criticism at the Present Time.” In Culture and Anarchy and Other Writings, edited by Stefan Collini, 26–51. New York: Cambridge University Press.
    • Bissell, Tom. 2010. Extra Lives: Why Video Games Matter. New York: Pantheon.
    • Bogost, Ian. 2006. Unit Operations: An Approach to Videogame Criticism. Cambridge, MA: MIT Press.
    • ———. 2007. Persuasive Games: The Expressive Power of Videogames. Cambridge, MA: MIT Press.
    • ———. 2009. Racing the Beam: The Atari Video Computer System. Cambridge, MA: MIT Press.
    • ———. 2011. How to Do Things with Videogames. Minneapolis: University of Minnesota Press.
    • ———. 2012. Alien Phenomenology, or What It’s Like to Be a Thing. Minneapolis: University of Minnesota Press.
    • ———. 2015. How to Talk about Videogames. Minneapolis: University of Minnesota Press.
    • ———. Forthcoming 2016. Play Anything: The Pleasure of Limits, the Uses of Boredom, and the Secret of Games. New York: Basic Books.
    • Bogost, Ian, Simon Ferrari, and Bobby Schweizer. 2010. Newsgames: Journalism at Play. Cambridge, MA: MIT Press.
    • Clune, Michael W. 2015. Gamelife: A Memoir. New York: Farrar, Straus and Giroux.
    • Crogan, Patrick. 2011. Gameplay Mode: War, Simulation, and Technoculture. Minneapolis: University of Minnesota Press.
    • Dyer-Witheford, Nick, and Greig de Peuter. 2009. Games of Empire: Global Capitalism and Video Games. Minneapolis: University of Minnesota Press.
    • Felski, Rita. 2015. The Limits of Critique. Chicago: University of Chicago Press.
    • Fest, Bradley J. Forthcoming 2016. “Metaproceduralism: The Stanley Parable and the Legacies of Postmodern Metafiction.” “Videogame Adaptation,” edited by Kevin M. Flanagan, special issue, Wide Screen.
    • Frasca, Gonzalo. 2003. “Simulation versus Narrative: Introduction to Ludology.” In The Video Game Theory Reader, edited by Mark J. P. Wolf and Bernard Perron, 221–36. New York: Routledge.
    • Frase, Peter. 2014. “Gamer’s Revanche.” Peter Frase (blog), September 3. http://www.peterfrase.com/2014/09/gamers-revanche/.
    • Galloway, Alexander R. 2006a. “Warcraft and Utopia.” Ctheory.net, February 16. http://www.ctheory.net/articles.aspx?id=507.
    • ———. 2006b. Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press.
    • Harman, Graham. 2012. “The Well-Wrought Broken Hammer: Object-Oriented Literary Criticism.” New Literary History 43, no. 2: 183–203.
    • Hayot, Eric. 2014. “Academic Writing, I Love You. Really, I Do.” Critical Inquiry 41, no. 1: 53–77.
    • Jagoda, Patrick. 2013. “Gamification and Other Forms of Play.” boundary 2 40, no. 2: 113–44.
    • ———. 2016. Network Aesthetics. Chicago: University of Chicago Press.
    • Juul, Jesper. 2001. “Games Telling Stories? A Brief Note on Games and Narratives.” Game Studies 1, no. 1. http://www.gamestudies.org/0101/juul-gts/.
    • Ligman, Kris. 2014. “August 31st.” Critical Distance, August 31. http://www.critical-distance.com/2014/08/31/august-31st/.
    • Massanari, Adrienne. 2015. “#Gamergate and The Fappening: How Reddit’s Algorithm, Governance, and Culture Support Toxic Technocultures.” New Media & Society, OnlineFirst, October 9.
    • Mirowski, Philip. 2013. Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown. New York: Verso.
    • Murray, Janet. 1997. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. Cambridge, MA: MIT Press.
    • ———. 2004. “From Game-Story to Cyberdrama.” In First Person: New Media as Story, Performance, and Game, edited by Noah Wardrip-Fruin and Pat Harrigan, 1–11. Cambridge, MA: MIT Press.
    • Nakamura, Lisa. 2009. “Don’t Hate the Player, Hate the Game: The Racialization of Labor in World of Warcraft.” Critical Studies in Media Communication 26, no. 2: 128–44.
    • The New Inquiry. 2014. “TNI Syllabus: Gaming and Feminism.” New Inquiry, September 2. http://thenewinquiry.com/features/tni-syllabus-gaming-and-feminism/.
    • Said, Edward W. 1994. “Identity, Authority, and Freedom: The Potentate and the Traveler.” boundary 2 21, no. 3: 1–18.
    • Shaw, Adrienne. 2014. Gaming at the Edge: Sexuality and Gender at the Margins of Gamer Culture. Minneapolis: University of Minnesota Press.
    • Wark, McKenzie. 2007. Gamer Theory. Cambridge, MA: Harvard University Press.
    • Williams, Ian. 2014. “Death to the Gamer.” Jacobin, September 9. https://www.jacobinmag.com/2014/09/death-to-the-gamer/.
    • Wingfield, Nick. 2014. “Feminist Critics of Video Games Facing Threats in ‘GamerGate’ Campaign.” New York Times, October 15. http://www.nytimes.com/2014/10/16/technology/gamergate-women-video-game-threats-anita-sarkeesian.html.
