    Michael Miller — Seeing Ourselves, Loving Our Captors: Mark Jarzombek’s Digital Stockholm Syndrome in the Post-Ontological Age

    a review of Mark Jarzombek, Digital Stockholm Syndrome in the Post-Ontological Age (University of Minnesota Press Forerunners Series, 2016)

    by Michael Miller

    ~

    All existence is Beta, basically. A ceaseless codependent improvement unto death, but then death is not even the end. Nothing will be finalized. There is no end, no closure. The search will outlive us forever

    — Joshua Cohen, Book of Numbers

    Being a (in)human is to be a beta tester

    — Mark Jarzombek, Digital Stockholm Syndrome in the Post-Ontological Age

    Too many people have access to your state of mind

    — Renata Adler, Speedboat

    Whenever I read through Vilém Flusser’s vast body of work and encounter, in print no less, one of the core concepts of his thought—which is that “human communication is unnatural” (2002, 5)––I find it nearly impossible to shake the feeling that the late Czech-Brazilian thinker must have derived some kind of preternatural pleasure from insisting on the ironic gesture’s repetition. Flusser’s rather grim view that “there is no possible form of communication that can communicate concrete experience to others” (2016, 23) leads him to declare that the intersubjective dimension of communication inevitably implies the existence of a society which is, in his eyes, itself an unnatural institution. Traces of his life-long attempt to think through the full philosophical implications of European nihilism can be found throughout Flusser’s work, and this intellectual engagement is readily evident in his theories of communication.

    One of Flusser’s key ideas that draws me in is his notion that human communication affords us the ability to “forget the meaningless context in which we are completely alone and incommunicado, that is, the world in which we are condemned to solitary confinement and death: the world of ‘nature’” (2002, 4). In order to help stave off the inexorable tide of nature’s muted nothingness, Flusser suggests that humans communicate by storing memories, externalized thoughts whose eventual transmission binds two or more people into a system of meaning. Only when an intersubjective system of communication like writing or speech is established between people does the purpose of our enduring commitment to communication become clear: we communicate in order “to become immortal within others” (2016, 31). Flusser’s playful positing of the ironic paradox inherent in the improbability of communication—that communication is unnatural to the human but it is also “so incredibly rich despite its limitations” (26)––enacts its own impossibility. In a representatively ironic sense, Flusser’s point is that all we are able to fully understand is our inability to understand fully.

    As Flusser’s theory of communication can be viewed as his response to the twentieth century’s shifting technical-medial milieu, his ideas about communication and technics eventually led him to conclude that “the original intention of producing the apparatus, namely, to serve the interests of freedom, has turned on itself…In a way, the terms human and apparatus are reversed, and human beings operate as a function of the apparatus. A man gives an apparatus instructions that the apparatus has instructed him to give” (2011, 73).[1] Flusser’s skeptical perspective toward the alleged affordances of human mastery over technology is most assuredly not the view that Apple or Google would prefer you harbor (not-so-secretly). Any cursory glance at Wired or the technology blog at Inside Higher Ed, to pick two low-hanging examples, would yield a radically different perspective than the one Flusser puts forth in his work. In fact, Flusser writes, “objects meant to be media may obstruct communication” (2016, 45). If media objects like the technical apparatuses of today actually obstruct communication, then why are we so often led to believe that they facilitate it? And to shift registers just slightly, if everything is said to be an object of some kind—even technical apparatuses––then might one not be permitted to claim daily communion with all kinds of objects? What happens when an object—and an object as obsolete as a book, no less—speaks to us? Will we still heed its call?

    ***

    Speaking in its expanded capacity as neither narrator nor focalized character, the book as literary object addresses us in a direct and antagonistic fashion in the opening line to Joshua Cohen’s 2015 novel Book of Numbers. “If you’re reading this on a screen, fuck off. I’ll only talk if I’m gripped with both hands” (5), the book-object warns. As Cohen’s narrative tells the story of a struggling writer named Joshua Cohen (whose backstory corresponds mostly to that of the historical-biographical author Joshua Cohen) who is contracted to ghostwrite the memoir of another Joshua Cohen (who is the CEO of a massive Google-type company named Tetration), the novel’s middle section provides an “unedited” transcript of the conversation between the two Cohens in which the CEO recounts his upbringing and tremendous business success in and around the Bay Area from the late 1970s up to the narrative’s present of 2013. The novel’s Silicon Valley setting, nominal and characterological doubling, and structural narrative coupling of the two Cohens’ lives make it all but impossible to distinguish the personal histories of Cohen-the-CEO and Cohen-the-narrator from the cultural history of the development of personal computing and networked information technologies. The history of one Joshua Cohen––or all Joshua Cohens––is indistinguishable from the history of intrusive computational/digital media. “I had access to stuff I shouldn’t have had access to, but then Principal shouldn’t have had such access to me—cameras, mics,” Cohen-the-narrator laments. In other words, as Cohen-the-narrator ghostwrites another Cohen’s memoir within the context of the broad history of personal computing and the emergence of algorithmic governance and surveillance, the novel invites us to consider how the history of an individual––or every individual, it does not really matter––is also nothing more and nothing less than the surveilled history of its data usage, which is always written by someone or something else, the ever-present Not-Me (who just might have the same name as me). The Self is nothing but a networked repository of information to be mined in the future.

    While the novel’s opening line addresses its hypothetical reader directly, its relatively benign warning fixes reader and text in a relation of rancor. The object speaks![2] And yet tech-savvy twenty-first century readers are not the only ones who seem to be fed up with books; books too are fed up with us, and perhaps rightly so. In an age when objects are said to speak vibrantly and withdraw infinitely; when processes like human cognition are considered to be operative in complex technical-computational systems; and when the only excuse to preserve the category of “subjective experience” we are able to muster is that it affords us the ability “to grasp how networks technically distribute and disperse agency,” it would seem at first glance that the second-person addressee of the novel’s opening line would intuitively have to be a reading, thinking subject.[3] Yet this is the very same reading subject who has been urged by Cohen’s novel to politely “fuck off” if he or she has chosen to read the text on a screen. And though the text does not completely dismiss its readers who still prefer “paper of pulp, covers of board and cloth” (5), a slight change of preposition in its title points exactly to what the book fears most of all: Book as Numbers. The book-object speaks, but only to offer an ominous admonition: neither the book nor its readers ought to be reducible to computable numbers.

    The transduction of literary language into digital bits eliminates the need for a phenomenological reading subject, and it suggests too that literature––or even just language in a general sense––and humans in particular are ontologically reducible to data objects that can be “read” and subsequently “interpreted” by computational algorithms. As Cohen’s novel collapses the distinction between author, narrator, character, and medium, its narrator observes that “the only record of my one life would be this record of another’s” (9). But in this instance, the record of one’s (or another’s) life is merely the history of how personal computational technologies have effaced the phenomenological subject. How have we arrived at the theoretically permissible premise that “People matter, but they don’t occupy a privileged subject position distinct from everything else in the world” (Huehls 20)? How might the “turn toward ontology” in theory/philosophy be viewed as contributing to our present condition?

    ***

    Mark Jarzombek’s Digital Stockholm Syndrome in the Post-Ontological Age (2016) provides a brief yet stylistically ironic and incisive interrogation of how recent iterations of post- or inhumanist theory have found a strange bedfellow in the rhetorical boosterism that accompanies the alleged affordances of digital technologies and big data. Despite the differences between these two seemingly unrelated discourses, they both share a particularly critical or diminished conception of the anthro- in “anthropocentrism” that borrows liberally from the postulates of the “ontological turn” in theory/philosophy (Rosenberg n.p.). While the parallels between these discourses are not made explicit in Jarzombek’s book, Digital Stockholm Syndrome asks us to consider how a shared commitment to an ontologically diminished view of “the human” that galvanizes both technological determinism’s anti-humanism and post- or inhumanist theory has found its common expression in recent philosophies of ontology. In other words, the problem Digital Stockholm Syndrome takes up is this: what kind of theory of ontology, Being, and to a lesser extent, subjectivity, appeals equally to contemporary philosophers and Silicon Valley tech-gurus? Jarzombek gestures toward such an inquiry early on: “What is this new ontology?” he asks, and “What were the historical situations that produced it? And how do we adjust to the realities of the new Self?” (x).

    A curious set of related philosophical commitments united by their efforts to “de-center” and occasionally even eject “anthropocentrism” from the critical conversation constitutes some of the realities swirling around Jarzombek’s “new Self.”[4] Digital Stockholm Syndrome provocatively locates the conceptual legibility of these philosophical realities squarely within an explicitly algorithmic-computational historical milieu. By inviting such a comparison, Jarzombek’s book encourages us to contemplate how contemporary ontological thought might mediate our understanding of the historical and philosophical parallels that bind the tradition of inhumanist philosophical thinking and the rhetoric of twenty-first century digital media.[5]

    In much the same way that Alexander Galloway has argued for a conceptual confluence that exceeds the contingencies of coincidence between “the structure of ontological systems and the structure of the most highly evolved technologies of post-Fordist capitalism” (347), Digital Stockholm Syndrome argues similarly that today’s world is “designed from the micro/molecular level to fuse the algorithmic with the ontological” (italics in original, x).[6] We now understand Being as the informatic/algorithmic byproduct of what ubiquitous computational technologies have gathered and subsequently fed back to us. Our personal histories––or simply the records of our data use (and its subsequent use of us)––comprise what Jarzombek calls our “ontic exhaust…or what data experts call our data exhaust…[which] is meticulously scrutinized, packaged, formatted, processed, sold, and resold to come back to us in the form of entertainment, social media, apps, health insurance, clickbait, data contracts, and the like” (x).

    The empty second-person pronoun is placed on equal ontological footing with, and perhaps even defined by, its credit score, medical records, 4G data usage, Facebook likes, and its Tweets. “The purpose of these ‘devices,’” Jarzombek writes, “is to produce, magnify, and expose our ontic exhaust” (25). We give our ontic exhaust away for free every time we log into Facebook because it, in return, feeds back to us the only sense of “self” we are able to identify as “me.”[7] If “who we are cannot be traced from the human side of the equation, much less than the analytic side. ‘I’ am untraceable” (31), then why do techno-determinists and contemporary oracles of ontology operate otherwise? What accounts for their shared commitment to formalizing ontology? Why must the Self be tracked and accounted for like a map or a ledger?

    As this “new Self,” which Jarzombek calls the “Being-Global” (2), travels around the world and checks its bank statement in Paris or tags a photo of a Facebook friend in Berlin while sitting in a cafe in Amsterdam, it leaks ontic exhaust everywhere it goes. While the hoovering up of ontic exhaust by GPS and commercial satellites “make[s] us global,” it also inadvertently redefines Being as a question of “positioning/depositioning” (1). For Jarzombek, the question of today’s ontology is not so much a matter of asking “what exists?” but of asking “where is it and how can it be found?” Instead of the human who attempts to locate and understand Being, now Being finds us, but only as long as we allow ourselves to be located.

    Today’s ontological thinking, Jarzombek points out, is not really interested in asking questions about Being––it is too “anthropocentric.”[8] Ontology in the twenty-first century attempts to locate Being by gathering data, keeping track, tracking changes, taking inventory, making lists, listing litanies, crunching the numbers, and searching the database. “Can I search for it on Google?” is now the most important question for ontological thought in the twenty-first century.

    Ontological thinking––which today means ontological accounting, or finding ways to account for the ontologically actuarial––is today’s philosophical equivalent of best practices for data management, except that there is no difference between one’s data and one’s Self. Whatever ontological difference might once have stubbornly separated you from data about you no longer applies. Digital Stockholm Syndrome identifies this shift with the formulation: “From ontology to trackology” (71).[9] The philosophical shift that has allowed data about the Self to become the ontological equivalent to the Self emerges out of what Jarzombek calls an “animated ontology.”

    In this “animated ontology,” “subject position and object position are indistinguishable…The entire system of humanity is microprocessed through the grid of sequestered empiricism” (31, 29). Jarzombek is careful to distinguish his “animated ontology” from the recently rebooted romanticisms which merely turn their objects into vibrant subjects. He notes that “the irony is that whereas the subject (the ‘I’) remains relatively stable in its ability to self-affirm (the lingering by-product of the psychologizing of the modern Self), objectivity (as in the social sciences) collapses into the illusions produced by the global cyclone of the informatic industry” (28).[10] By devising tricky new ways to flatten ontology (all of which are made via po-faced linguistic fiat), “the human and its (dis/re-)embodied computational signifiers are on equal footing” (32). I do not define my data, but my data define me.

    ***

    Digital Stockholm Syndrome asserts that what exists in today’s ontological systems––systems both philosophical and computational––is what can be tracked and stored as data. Jarzombek sums up our situation with another pithy formulation: “algorithmic modeling + global positioning + human scaling + computational speed = data geopolitics” (12). While the universalization of tracking technologies defines the “global” in Jarzombek’s Being-Global, it also provides us with another way to understand the humanities’ enthusiasm for GIS and other digital mapping platforms as institutional-disciplinary expressions of a “bio-chemo-techno-spiritual-corporate environment that feeds the Human its sense-of-Self” (5).

    One wonders if the incessant cultural and political reminders regarding the humanities’ waning relevance have moved humanists to reconsider the very basic intellectual terms of their broad disciplinary pursuits. Why is it that humanities scholars are in some cases most visibly leading the charge to overturn many decades of humanist thought? Has the internalization of this depleted conception of the human reshaped the basic premises of humanities scholarship, Digital Stockholm Syndrome wonders? What would it even mean to pursue a “humanities” purged of “the human?” And is it fair to wonder if this impoverished image of humanity has trickled down into the formation of new (sub)disciplines?[11]

    In a late chapter titled “Onto-Paranoia,” Jarzombek finally arrives at a working definition of Digital Stockholm Syndrome: data visualization. For Jarzombek, data-visualization “has been devised by the architects of the digital world” to ease the existential torture—or “onto-torture”—that is produced by Security Threats (59). Security threats are threatening because they remind us that “security is there to obscure the fact that [its] whole purpose is to produce insecurity” (59). When a system fails, or when a problem occurs, we need to be conscious of the fact that the system has not really failed; “it means that the system is working” (61).[12] The Social, the Other, the Not-Me—these are all variations of the same security threat, which is just another way of defining “indeterminacy” (66). So if everything is working the way it should, we rarely consider the full implications of indeterminacy—both technical and philosophical—because to do so might make us paranoid, or worse: we would have to recognize ourselves as (in)human subjects.

    Data-visualizations, however, provide a soothing salve which we can (self-)apply in order to ease the pain of our “onto-torture.” Visualizing data and creating maps of our data use provide us with a useful and also pleasurable tool with which we locate ourselves in the era of “post-ontology.”[13] “We experiment with and develop data visualization and collection tools that allow us to highlight urban phenomena. Our methods borrow from the traditions of science and design by using spatial analytics to expose patterns and communicating those results, through design, to new audiences,” we are told by one data-visualization project (http://civicdatadesignlab.org/). As we affirm our existence every time we travel around the globe and self-map our location, we silently make our geo-data available for those who care to sift through it and turn it into art or profit.

    “It is a paradox that our self-aestheticizing performance as subjects…feeds into our ever more precise (self-)identification as knowable and predictable (in)human-digital objects,” Jarzombek writes. Yet we ought not to spend too much time contemplating the historical and philosophical complexities that have helped create this paradoxical situation. Perhaps it is best we do not reach the conclusion that mapping the Self as an object on digital platforms increases the creeping unease that arises from the realization that we are mappable, hackable, predictable, digital objects––that our data are us. We could, instead, celebrate how our data (which we are and which is us) is helping to change the world. “’Big data’ will not change the world unless it is collected and synthesized into tools that have a public benefit,” the same data visualization project announces on its website’s homepage.

    While it is true that I may be a little paranoid, I have finally rested easy after having read Digital Stockholm Syndrome because I now know that my data/I are going to good use.[14] Like me, maybe you find comfort in knowing that your existence is nothing more than a few pixels in someone else’s data visualization.

    _____

    Michael Miller is a doctoral candidate in the Department of English at Rice University. His work has appeared or is forthcoming in symplokē and the Journal of Film and Video.

    _____

    Notes

    [1] I am reminded of a similar argument advanced by Tung-Hui Hu in his A Prehistory of the Cloud (2016). Echoing Flusser’s spirit of healthy skepticism toward technical apparatuses, Hu fears, as Flusser does, a situation in which “the technology has produced the means of its own interpretation” (xix).

    [2] It is not my aim to wade explicitly into discussions regarding “object-oriented ontology” or other related philosophical developments. For the purposes of this essay, however, Andrew Cole’s critique of OOO as a “new occasionalism” will be useful. “’New occasionalism,’” Cole writes, “is the idea that when we speak of things, we put them into contact with one another and ourselves” (112). In other words, the speaking of objects makes them objectively real, though this is only possible when everything is considered to be an object. The question, though, is not about what is or is not an object, but is rather what it means to be. For related arguments regarding the relation between OOO/speculative realism/new materialism and mysticism, see Sheldon (2016), Altieri (2016), Wolfendale (2014), O’Gorman (2013), and to a lesser extent Colebrook (2013).

    [3] For the full set of references here, see Bennett (2010), Hayles (2014 and 2016), and Hansen (2015).

    [4] While I concede that no thinker of “post-humanism” worth her philosophical salt would admit the possibility or even desirability of purging the sins of “correlationism” from critical thought altogether, I cannot help but view such occasional posturing with a skeptical eye. For example, I find convincing Barbara Herrnstein-Smith’s recent essay “Scientizing the Humanities: Shifts, Collisions, Negotiations,” in which she compares the drive in contemporary critical theory to displace “the human” from humanistic inquiry to the impossible and equally incomprehensible task of overcoming the “‘astro’-centrism of astronomy or the biocentrism of biology” (359).

    [5] In “A Modest Proposal for the Inhuman,” Julian Murphet identifies four interrelated strands of post- or inhumanist thought that combine a kind of metaphysical speculation with a full-blown demolition of traditional ontology’s conceptual foundations. They are: “(1) cosmic nihilism, (2) molecular bio-plasticity, (3) technical accelerationism, and (4) animality. These sometimes overlapping trends are severally engaged in the mortification of humankind’s stubborn pretensions to mastery over the domain of the intelligible and the knowable in an era of sentient machines, routine genetic modification, looming ecological disaster, and irrefutable evidence that we share 99 percent of our biological information with chimpanzees” (653).

    [6] The full quotation from Galloway’s essay reads: “Why, within the current renaissance of research in continental philosophy, is there a coincidence between the structure of ontological systems and the structure of the most highly evolved technologies of post-Fordist capitalism? […] Why, in short, is there a coincidence between today’s ontologies and the software of big business?” (347). Digital Stockholm Syndrome begins by accepting Galloway’s provocations as descriptive instead of speculative. We do not necessarily wonder in 2017 if “there is a coincidence between today’s ontologies and the software of big business”; we now wonder instead how such a confluence came to be.

    [7] Wendy Hui Kyong Chun makes a similar point in her 2016 monograph Updating to Remain the Same: Habitual New Media. She writes, “If users now ‘curate’ their lives, it is because their bodies have become archives” (x-xi). While there is not ample space here to discuss the full theoretical implications of her book, Chun’s discussion of the inherently gendered dimension to confession, self-curation as self-exposition, and online privacy as something that only the unexposed deserve (hence the need for preemptive confession and self-exposition on the internet) in digital/social media networks is tremendously relevant to Jarzombek’s Digital Stockholm Syndrome, as both texts consider the Self as a set of mutable and “marketable/governable/hackable categories” (Jarzombek 26) that are collected without our knowledge and subsequently fed back to the data/media user in the form of its own packaged and unique identity. For recent similar variations of this argument, see Simanowski (2017) and McNeill (2012).

    I also think Chun’s book offers a helpful tool for thinking through recent confessional memoirs or instances of “auto-theory” (fictionalized or not) like Maggie Nelson’s The Argonauts (2015), Sheila Heti’s How Should a Person Be? (2010), Marie Calloway’s what purpose did i serve in your life (2013), and perhaps to a lesser degree Tao Lin’s Richard Yates (2010), Taipei (2013), Natasha Stagg’s Surveys (2016), and Ben Lerner’s Leaving the Atocha Station (2011) and 10:04 (2014). The extent to which these texts’ varied formal-aesthetic techniques can be said to be motivated by political aims is very much up for debate, but nonetheless, I think it is fair to say that many of them revel in the reveal. That is to say, via confession or self-exposition, many of these novels enact the allegedly performative subversion of political power by documenting their protagonists’ and/or narrators’ social/political acts of transgression. Chun notes, however, that this strategy of self-revealing performs “resistance as a form of showing off and scandalizing, which thrives off moral outrage. This resistance also mimics power by out-spying, monitoring, watching, and bringing to light, that is, doxing” (151). The term “autotheory,” which has been applied to Nelson’s The Argonauts in particular, takes on a very different meaning in this context. “Autotheory” can be considered as a theory of the self, or a self-theorization, or perhaps even the idea that personal experience is itself a kind of theory might apply here, too. I wonder, though, how its meaning would change if the prefix “auto” was understood within a media-theoretical framework not as “self” but as “automation.” “Autotheory” becomes, then, an automatization of theory or theoretical thinking, but also a theoretical automatization; or more to the point: what if “autotheory” describes instead a theorization of the Self or experience wherein “the self” is only legible as the product of automated computational-algorithmic processes?

    [8] Echoing the critiques of “correlationism” or “anthropocentrism” or what have you, Jarzombek declares that “The age of anthrocentrism is over” (32).

    [9] Whatever notion of (self)identity the Self might find to be most palatable today, Jarzombek argues, is inevitably mediated via global satellites. “The intermediaries are the satellites hovering above the planet. They are what make us global–what make me global” (1), and as such, they represent the “civilianization” of military technologies (4). What I am trying to suggest is that the concepts and categories of self-identity we work with today are derived from the informatic feedback we receive from long-standing military technologies.

    [10] Here Jarzombek seems to be suggesting that the “object” in the “objectivity” of “the social sciences” has been carelessly conflated with the “object” in “object-oriented” philosophy. The prioritization of all things “objective” in both philosophy and science has inadvertently produced this semantic and conceptual slippage. Data objects about the Self exist, and thus by existing, they determine what is objective about the Self. In this new formulation, what is objective about the Self or subject, in other words, is what can be verified as information about the self. In Indexing It All: The Subject in the Age of Documentation, Information, and Data (2014), Ronald Day argues that these global tracking technologies supplant traditional ontology’s “ideas or concepts of our human manner of being” and have in the process “subsume[d] and subvert[ed] the former roles of personal judgment and critique in personal and social beings and politics” (1). While such technologies might be said to obliterate “traditional” notions of subjectivity, judgment, and critique, Day demonstrates how this simultaneous feeding-forward and feeding back of data-about-the-Self represents the return of autoaffection, though in his formulation self-presence is defined as information or data-about-the-self whose authenticity is produced when it is fact-checked against a biographical database (3)—self-presence is a presencing of data-about-the-Self. This is all to say that the Self’s informational “aboutness”–its representation in and as data–comes to stand in for the Self’s identity, which can only be comprehended as “authentic” in its limited metaphysical capacity as a general informatic or documented “aboutness.”

    [11] Flusser is again instructive on this point, albeit in his own idiosyncratic way. Drawing attention to the strange unnatural plurality in the term “humanities,” he writes, “The American term humanities appropriately describes the essence of these disciplines. It underscores that the human being is an unnatural animal” (2002, 3). The plurality of “humanities,” as opposed to the singular “humanity,” constitutes for Flusser a disciplinary admission that not only is the category of “the human” unnatural, but that the study of such an unnatural thing is itself unnatural as well. I think it is also worth pointing out that in the context of Flusser’s observation, we might begin to situate the rise of “the supplemental humanities” as an attempt to redefine the value of a humanities education. The spatial humanities, the energy humanities, medical humanities, the digital humanities, etc.—it is not difficult to see how these disciplinary off-shoots consider themselves as supplements to whatever it is they think “the humanities” are up to; regardless, their institutional injection into traditional humanistic discourse will undoubtedly improve both (sub)disciplines, with the tacit acknowledgment being that the latter has just a little more to gain from the former in terms of skills, technical know-how, and data management. Many thanks to Aaron Jaffe for bringing this point to my attention.

    [12] In his essay “Algorithmic Catastrophe—The Revenge of Contingency,” Yuk Hui notes that “the anticipation of catastrophe becomes a design principle” (125). Drawing from the work of Bernard Stiegler, Hui shows how the pharmacological dimension of “technics, which aims to overcome contingency, also generates accidents” (127). And so “as the anticipation of catastrophe becomes a design principle…it no longer plays the role it did with the laws of nature” (132). Simply put, by placing algorithmic catastrophe on par with a failure of reason qua the operations of mathematics, Hui demonstrates how “algorithms are open to contingency” only insofar as “contingency is equivalent to a causality, which can be logically and technically deduced” (136). To take Jarzombek’s example of the failing computer or what have you, while the blue screen of death might be understood to represent the faithful execution of its programmed commands, we should also keep in mind that the obverse of Jarzombek’s scenario would force us to come to grips with how the philosophical implications of the “shit happens” logic that underpins contingency-as-(absent) causality “accompanies and normalizes speculative aesthetics” (139).

    [13] I am reminded here of one of the six theses from the manifesto “What would a floating sheep map?,” jointly written by the Floating Sheep Collective, which is a cohort of geography professors. The fifth thesis reads: “Map or be mapped. But not everything can (or should) be mapped.” The Floating Sheep Collective raises in this section crucially important questions regarding ownership of data with regard to marginalized communities. Because it is not always clear when to map and when not to map, they decide that “with mapping squarely at the center of power struggles, perhaps it’s better that not everything be mapped.” If mapping technologies operate as ontological radars––the Self’s data points help point the Self towards its own ontological location in and as data––then it is fair to say that such operations are only philosophically coherent when they are understood to be framed within the parameters outlined by recent iterations of ontological thinking and its concomitant theoretical deflation of the rich conceptual make-up that constitutes “the human.” You can map the human’s data points, but only insofar as you buy into the idea that points of data map the human. See http://manifesto.floatingsheep.org/.

    [14]Mind/paranoia: they are the same word!”(Jarzombek 71).

    _____

    Works Cited

    • Adler, Renata. Speedboat. New York Review of Books Press, 1976.
    • Altieri, Charles. “Are We Being Materialist Yet?” symplokē 24.1-2 (2016): 241-57.
    • Calloway, Marie. what purpose did i serve in your life. Tyrant Books, 2013.
    • Chun, Wendy Hui Kyong. Updating to Remain the Same: Habitual New Media. The MIT Press, 2016.
    • Cohen, Joshua. Book of Numbers. Random House, 2015.
    • Cole, Andrew. “The Call of Things: A Critique of Object-Oriented Ontologies.” minnesota review 80 (2013): 106-118.
    • Colebrook, Claire. “Hypo-Hyper-Hapto-Neuro-Mysticism.” Parrhesia 18 (2013).
    • Day, Ronald. Indexing It All: The Subject in the Age of Documentation, Information, and Data. The MIT Press, 2014.
    • Floating Sheep Collective. “What would a floating sheep map?” http://manifesto.floatingsheep.org/.
    • Flusser, Vilém. Into the Universe of Technical Images. Translated by Nancy Ann Roth. University of Minnesota Press, 2011.
    • –––. The Surprising Phenomenon of Human Communication. 1975. Metaflux, 2016.
    • –––. Writings, edited by Andreas Ströhl. Translated by Erik Eisel. University of Minnesota Press, 2002.
    • Galloway, Alexander R. “The Poverty of Philosophy: Realism and Post-Fordism.” Critical Inquiry 39.2 (2013): 347-366.
    • Hansen, Mark B.N. Feed Forward: On the Future of Twenty-First Century Media. Duke University Press, 2015.
    • Hayles, N. Katherine. “Cognition Everywhere: The Rise of the Cognitive Nonconscious and the Costs of Consciousness.” New Literary History 45.2 (2014): 199-220.
    • –––. “The Cognitive Nonconscious: Enlarging the Mind of the Humanities.” Critical Inquiry 42 (Summer 2016): 783-808.
    • Herrnstein-Smith, Barbara. “Scientizing the Humanities: Shifts, Collisions, Negotiations.” Common Knowledge 22.3 (2016): 353-72.
    • Heti, Sheila. How Should a Person Be? Picador, 2010.
    • Hu, Tung-Hui. A Prehistory of the Cloud. The MIT Press, 2016.
    • Huehls, Mitchum. After Critique: Twenty-First Century Fiction in a Neoliberal Age. Oxford University Press, 2016.
    • Hui, Yuk. “Algorithmic Catastrophe—The Revenge of Contingency.” Parrhesia 23 (2015): 122-43.
    • Jarzombek, Mark. Digital Stockholm Syndrome in the Post-Ontological Age. University of Minnesota Press, 2016.
    • Lin, Tao. Richard Yates. Melville House, 2010.
    • –––. Taipei. Vintage, 2013.
    • McNeill, Laurie. “There Is No ‘I’ in Network: Social Networking Sites and Posthuman Auto/Biography.” Biography 35.1 (2012): 65-82.
    • Murphet, Julian. “A Modest Proposal for the Inhuman.” Modernism/Modernity 23.3 (2016): 651-70.
    • Nelson, Maggie. The Argonauts. Graywolf Press, 2015.
    • O’Gorman, Marcel. “Speculative Realism in Chains: A Love Story.” Angelaki: Journal of the Theoretical Humanities 18.1 (2013): 31-43.
    • Rosenberg, Jordana. “The Molecularization of Sexuality: On Some Primitivisms of the Present.” Theory and Event 17.2 (2014): n.p.
    • Sheldon, Rebekah. “Dark Correlationism: Mysticism, Magic, and the New Realisms.” symplokē 24.1-2 (2016): 137-53.
    • Simanowski, Roberto. “Instant Selves: Algorithmic Autobiographies on Social Network Sites.” New German Critique 44.1 (2017): 205-216.
    • Stagg, Natasha. Surveys. Semiotext(e), 2016.
    • Wolfendale, Peter. Object Oriented Philosophy: The Noumenon’s New Clothes. Urbanomic, 2014.
    Eugene Thacker – Weird, Eerie, and Monstrous: A Review of “The Weird and the Eerie” by Mark Fisher

    by Eugene Thacker

    Review of Mark Fisher, The Weird and the Eerie (Repeater, 2017)

    For a long time, the horror genre was not generally considered worthy of critical, let alone philosophical, reflection; it was the stuff of cheap thrills, pulp magazines, B-movies. Much of this has changed in recent years, as a robust and diverse critical literature has emerged around the horror genre, much of which considers the horror genre not only as a reflection of society but as an autonomous platform for posing far-reaching questions concerning the fate of the human species, the species that has named itself. These are sentiments that have preoccupied recent writing on the horror genre, much of which borrows from developments in contemporary philosophy, and is attempting to expand the confines of horror beyond the usual fixation on gore, violence, and shock tactics. This hasn’t always been the case. Even today, writing on genre horror often tends towards “list” books (of the type The Top 100 Italian Horror Films From 1977, Volume IV), or books that are basically print-on-demand databases (The Encyclopedia of Asian Ghost Stories from the Beginning of Time, and Before That). These are rounded out by a plethora of introductory textbooks and surveys, usually aimed at film studies undergraduates (e.g. Key Terms in Cultural Studies: Splatterpunk), and opaque academic monographs of Lacanian psychoanalytic semiotic readings of horror film that themselves seem to be part of some kind of academic cult.

    While such books can be informative and helpful, reading them can be akin to the slightly woozy feeling one has after having gone down a combined Google/Wikipedia/YouTube rabbit-hole, emerging with bewildered eyes and terabytes of regurgitated data. However, recent writing on the horror genre takes a different approach, eschewing the poles of either the popular or the academic for a perhaps yet-to-be-named third space. One book that takes up this challenge is Mark Fisher’s The Weird and the Eerie, published this year. (Fisher is likely known to readers through his blog K-punk, which had been running for well over a decade before his untimely death.) What Fisher’s study shares with other like-minded books is an interest in expanding our understanding of the horror genre beyond the genre itself, and he does this by focusing on one of the deepest threads in the horror genre: the limits of human beings living in a human-centric world.

    As a case study, consider the opening passage from H.P. Lovecraft’s well-known short story “The Call of Cthulhu”:

    The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.

    With this – arguably the most foreboding opener ever written for a story – Lovecraft sets the stage for what is really an extended meditation on human finitude. Originally published in the February 1928 issue of the pulp magazine Weird Tales, “Cthulhu” ostensibly brings together the perspectives of deep time and deep space to reflect on the comparatively myopic and humble non-event that is human civilization – at least that’s how Lovecraft himself puts it. It is well known that Lovecraft took cues from the likes of Edgar Allan Poe, Algernon Blackwood, and Arthur Machen – influences that he himself notes. Equally well known is Lovecraft’s notorious xenophobia (often expressed in his correspondence as outright racism). Yet in spite of – or because of – this, Lovecraft remained unambiguous in his own approach to the horror genre. In his numerous essays, notes, and letters, he notes, with an unflinching misanthropy, how a horror story should evoke “an atmosphere of breathless and unexplainable dread of outer, unknown forces,” forces that point towards a “malign and particular suspension or defeat of those fixed laws of Nature which are our only safeguard against the assaults of chaos and the daemons of unplumbed space.” The “monsters” in such tales were far from the usual line-up of vampires, werewolves, zombies, and demons – all of which, for Lovecraft and his colleagues, end up serving as mere solipsistic reflections of human-centric hopes and fears. They are often described in abstract, elemental, almost primordial ways: “the colour out of space,” “the shadow out of time,” or simply “the lurking fear.”

    The story of “Cthulhu” – which details the discovery of a cult devoted to an ancient, malefic, Elder Deity vaguely resembling an oozing winged cephalopod emerging from a hidden tomb of impossibly-shaped Cyclopean black geometry foretelling not only the end of the world but the deeper futility of the entirety of human civilization – has since obtained a cult status among horror authors, critics, and fans alike. In the early 20th century, like-minded tales of cosmic misanthropy were written by Lovecraft contemporaries Clark Ashton Smith, Robert E. Howard, and Robert Bloch, as well as by later authors of the weird tale such as Ramsey Campbell, Caitlín Kiernan, China Miéville, and Junji Ito. Like a slow-moving, tentacular meme, the Cthulhu “mythos” has reached far beyond the confines of literary horror. Film adaptations abound (the term “straight-to-video” no longer applies, but is still apt here). Video games, which nearly always end in despair and/or death. Role-playing games, complete with impossibly-shaped 10-sided black dice. A visit to any Comic Con will yield a dizzying array of comics, ‘zines, artwork, posters, bumper stickers, hoodies, Miskatonic University course catalogs, editions of the dreaded Necronomicon, and even Cthulhu plushies for the Lovecraft toddler. An industry is born. Today, distant cousins of Cthulhu can be seen in the Academy Award-nominated Arrival (2016), and the distinctly un-nominated burlesque that is Independence Day: Resurgence (2016). Cthulhu, it seems, has gone mainstream.

    Amid all the fondness for such abysmal and tentacular monstrosities, it is easy to overlook the themes that run through Lovecraft’s short tale, themes at once disturbing and compelling, and which mark the tradition often referred to as “supernatural horror” or “cosmic horror.” When Lovecraft characters happen upon strange creatures like Cthulhu (or worse, the Shoggoths), they don’t have the typical reactions. “Fear” is too simple a term to describe it; it encompasses everything without saying anything. But neither are they overcome by the more literary affects of “terror” or “horror,” like the characters of an old gothic novel. They have neither the time nor the patience for the critical distance afforded by a psychoanalytic “uncanny,” or the literary structures of the “fantastic.” Confronted with Cthulhu, Lovecraft’s characters simply freeze. They become numb. They go dark. Frozen thought. They can’t wrap their heads around what is right before them. What they “feel” is exactly this “inability of the human mind to correlate all its contents.” Forget the fear of death, I’ve just discovered a primordial, other-dimensional, slime-ridden necropolis of obsidian blasphemy that throws into question all human knowledge on this now-forsaken speck of cosmic dust we laughably call “our” planet.

    Yet, in all their pulpy, melodramatic, low-brow seriousness, the questions raised by Lovecraft and other writers in Weird Tales are also philosophical questions. They are questions that address the limits of human knowledge in a rapidly-changing world, a world that seems indifferent to the machinations of science or the doctrinal exuberance of religion, impassive before the hubris of technological advance or the lures of political ideology – a cold “crawling chaos” lurking just beneath the fragile fabric of humanity. What the characters of such stories discover (aside from the usual train of madness, dread, and, well, death) is a kind of stumbling humbleness, the human brain discovering its own limit, enlightened only of its own hubris – the humility of thought.

    *

    This theme – the limits of what can be known, the limits of what can be felt, the limits of what can be done – is central to Fisher’s The Weird and the Eerie. This is markedly different from other approaches to horror, which, however critical they may seem, often regard the horror genre as having an essentially therapeutic function, enabling us to purge, cope with, or work through our collective fears and anxieties. This therapeutic view of horror often becomes polarized between reactionary readings (a horror story that promotes the establishing or re-establishing of norms) and progressive readings (a horror story that promotes otherness, difference, and transgression of norms). And yet, in the final analysis, it is also hard to escape the sense that there is a certain kind of solipsism to the horror genre, that it is we human beings who remain at the center of it all, who have either constructed boundaries and bunkers and have once again staved off another threat to our collective identity, or who have devised clever ways of creating hybrids, fusions, and monstrous couplings with the other, thereby extending humanity’s long dreamed-of share of immortality.

    Whether reactionary or progressive, both responses to the horror genre involve a strategy in which the world in all its strangeness is transformed into a world made in our own image (anthropomorphism), or a world rendered useful for us as human beings (anthropocentrism). In spite of all the horrifying things that happen to the characters in horror stories, there is a sense in which the horror genre is ultimately a kind of humanism, a panegyric to the limitless potential of human knowledge, the immeasurable capacity for human feeling, the infinite promise of human sovereignty. This is, of course, not surprising, given the somber didactics of even the most extreme zombie apocalypses, vampiric mutations, or demonic plagues. Species self-interest is at stake. Humanity may be brought to the brink of extinction, only so that that same humanity may extend its mastery (self-mastery and mastery over its environment), and even obtain some form of ascendency over its own tenuous, existential status. Subtending the survivalist imperative of the horror genre and its pragmatic arsenal of mastering monsters of all kinds is another kind of mastery – a metaphysical mastery.

    But this is only one way of understanding the horror genre. The insight of books like Fisher’s is that the horror genre is also capable of chipping away at this species-specific sovereignty, taking aim at the twin pillars of anthropomorphism and anthropocentrism. Instead of being concerned with species self-interest and mastery, such horror stories tend more towards humility, hubris, and even, in their darkest moments, futility. It is a project that is doomed to failure, of course, and perhaps this is why so many of the characters in the tales of Lovecraft, Algernon Blackwood, or Izumi Kyoka find themselves in worlds that are both untenable and unlivable. They end up with nothing but a bit of useless quasi-wisdom, scribbling away madly in a darkened forest room trying to make sense of it all not making any sense. Or they detach themselves from the humdrum human world of plans and projects, finding themselves inexorably pulled headlong into the ambivalent abyss of self-abnegation. Or worse – they simply continue to exist. What results is what we might call a “bleak humanism” – a horror story interested in humanity only to the extent that humanity is defined by its uncertainties, its finitude, its doubts – the humility of being human.

    Fisher’s terms are relatively clear. “What the weird and the eerie have in common is a preoccupation with the strange.” For Fisher, the strange is, quite simply, “a fascination for the outside […] that which lies beyond standard perception, cognition and experience.” But the weird and the eerie are quite different in how they apprehend the strange. As Fisher writes, “the weird is constituted by a presence – the presence of that which does not belong.” There is something exorbitant, out-of-place, and incongruous about the weird. It is the part that does not fit into the whole, or the part that disturbs the whole – threshold worlds populated by portals, gateways, time loops, and simulacra. Fundamental presumptions about self, other, knowledge, and reality will have to be rethought. “The eerie, by contrast, is constituted by a failure of absence or by a failure of presence. There is something where there should be nothing, or there is nothing where there should be something.” Here we encounter disembodied voices, lapses in memory, selves that are others, revelations of the alien within, and nefarious motives buried in the unconscious, inorganic world in which we are embedded.

    The weird and the eerie are not exclusive to the more esoteric regions of cosmic horror; they are also embedded in and bound up with quotidian notions of selfhood and the everyday relationship between self and world. The weird and eerie crop up in those furtive moments when we suspect we are not who we think we are, when we wonder if we do not act so much as we are acted upon. When everything we assumed to be a cause is really an effect. The weird and eerie are, ultimately, inseparable from the fabric of the social, cultural, and political landscape in which we are embedded. Fisher: “Capital is at every level an eerie entity: conjured out of nothing, capital nevertheless exerts more influence than any allegedly substantial entity.” There is a sense in which, for Fisher, the weird and the eerie constitute the poles of our ubiquitous “capitalist realism,” prompting us to re-examine not only presumptions concerning human agency, intentionality, and control, but also inviting a darker, more disturbing reflection on the strange agency of the inanimate and impersonal materiality of the world around us and within us.

    Fisher’s interest in Lovecraft stems from this shift in perspective from the human-centric to the nonhuman-oriented – not simply a psychology of “fear,” but the unnerving, impersonal calm of the weird and eerie. As scholars of the horror genre frequently note, Lovecraft’s tales are distinct from genre fantasy, in that they rarely posit an other world beyond, beneath, or parallel to this one. And yet, anomalous and strange events do take place within this world. Furthermore, they seem to take place according to some logic that remains utterly alien to the human world of moral codes, natural law, and cosmic order. If such anomalies could simply be dismissed as anomalies, as errors or aberrations in nature, then the natural order of the world would remain intact. But they cannot be so easily dismissed, and neither can they simply be incorporated into the existing order without undermining it entirely. Fisher nicely summarizes the dilemma: “a weird entity or object is so strange that it makes us feel that it should not exist, or at least that it should not exist here. Yet if the entity or object is here, then the categories which we have up until now used to make sense of the world cannot be valid. The weird thing is not wrong, after all: it is our conceptions that must be inadequate.”

    *

    This dilemma (which literary critic Tzvetan Todorov called “the fantastic”) is presented in unique ways by authors of the weird tale and cosmic horror. Such authors refuse to identify the weird with the supernatural, and often refuse the distinction between the natural and supernatural entirely. They do so not via mythology or religion, but via science – or at least a peculiar take on science. In cosmic horror, the strange reality described by science is often far more unreal than any vampire, werewolf, or zombie. Fisher highlights this: “In many ways, a natural phenomenon such as a black hole is more weird than a vampire.” Why? Because the existence of the vampire, anomalous and transgressive as it may seem, actually reinforces the boundary between the natural order “in here” and a transcendent, supernatural order “out there.” “Compare this to a black hole,” Fisher continues, “the bizarre ways in which it bends space and time are completely outside our common experience, and yet a black hole belongs to the natural-material cosmos – a cosmos which must therefore be much stranger than our ordinary experience can comprehend.” Science, for all its explanatory power, inadvertently reveals the hubris of the explanatory impulse of all human knowledge, not just science.

    Authors such as Lovecraft were well aware of this shift in their approach to the horror genre. An oft-cited passage from one of Lovecraft’s letters reads: “…all my tales are based on the fundamental premise that common human laws and interests and emotions have no validity or significance in the vast cosmos-at-large.” To write the truly weird tale, Lovecraft notes, “one must forget that such things as organic life, good and evil, love and hate, and all such local attributes of a negligible and temporary race called mankind, have any existence at all.” So much for humanism, then. But Fisher is also right to note that Lovecraft’s tales are not simply horror tales. As Lovecraft himself repeatedly noted, the affects of fear, terror, and horror are merely consequences of human beings confronting an impersonal and indifferent non-human world – what Lovecraft once called “indifferentism” (which, as he jibes, wonders “whether the cosmos gives a damn one way or the other”). There is an allure to the unhuman that is, at the same time, opaque and obscure. As Fisher writes, “it is not horror but fascination – albeit a fascination usually mixed with a certain trepidation – that is integral to Lovecraft’s rendition of the weird…the weird cannot only repel, it must also compel our attention.”

    This reaches a pitch in Fisher’s writing on author Nigel Kneale and his series of Quatermass films and TV shows. The Quatermass and the Pit series, for instance, opens with the shocking discovery of an alien spaceship buried within the bowels of a London tube station (which station I will not say). The strange, quasi-insect remains inside the ship point to another form of life, very different from terrestrial life. But the science tells them that the alien spaceship is actually a relic from the distant past. It seems that not only geology and cosmology, but human history will have to be rethought. Gradually, the scientists learn that the alien relics are millions of years old, and are in fact the remains of a distant, early progenitor of human beings. We, it turns out, are they – or vice-versa. The Quatermass series not only demonstrates the efficacy of scientific inquiry, it puts forth a further proposition: that science works too well. “Kneale shows that an enquiry into the nature of what the world is like is also inevitably an unraveling of what human beings had taken themselves to be…if human beings fully belong to the so-called natural world, then on what grounds can a special case be made for them?” Reality turns out to be weirder and more eerie than any fantastical world or alien civilization. This is what Fisher calls “Radical Enlightenment,” a kind of physics that goes all the way, a materialism to the nth degree, even at the cost of disassembling the self-aware and self-privileging human brain that conceives of it. Reversals and inversions abound. What if humanity itself is not the cause of world history but the effect of material and physical laws that we can only dimly intuit?

This theme of Radical Enlightenment runs through Fisher’s book. While he does discuss works of fiction or film one would expect in relation to the horror genre (Lovecraft, Kubrick’s The Shining, David Lynch’s recent films), Fisher also offers ruminations on contemporary works (such as Jonathan Glazer’s 2013 film Under the Skin), as well as a number of evocative comparisons, such as a chapter on the weird effects of time loops in Rainer Werner Fassbinder’s film World on a Wire and Philip K. Dick’s novel Time Out of Joint. There are also several surprises, including a meditation on the strange “vanishing landscapes” in M.R. James’s ghost stories and Brian Eno’s 1982 ambient album On Land. Also welcome is Fisher’s attentiveness to under-appreciated works in the horror genre, including the disquieting short fiction of Daphne du Maurier. In the span of a few carefully written pages, Fisher follows the twists and turns of his twin concepts one chapter at a time, one example at a time, until it is revealed exactly how enmeshed the weird and the eerie are in culture generally.

    *

The Weird and the Eerie is an evocative and carefully written short study in cultural aesthetics. Far from the familiar line-up of vampires, zombies, and demons, Fisher’s eclectic examples speak directly to the central concerns of the horror genre: the limits of human knowledge, the metamorphic shapes of fear, and the blurriness of boundaries of all types. His simple conceptual distinction quickly gives way to reversals, permutations, and complications, ultimately refusing any notion of a monstrous or alien unhumanness “out there”; with Fisher, the unhuman is more likely to reside within the human itself (or as Lovecraft might write it, “the unhuman is discovered to reside within the human itself”).

    Many books on the horror genre are concerned with providing answers, using varieties of taxonomy and psychology to provide a therapeutic application to “our” lives, helping us to cathartically purge collective anxieties and fears. For Fisher, the emphasis is more on questions, questions that target the vanity and presumptuousness of human culture, questions regarding human consciousness elevating itself above all else, questions concerning the presumed sovereignty of the species at whatever cost – perhaps questions it’s better not to pose, at the risk of undermining the entire endeavor to begin with.

I should let the reader decide which approach makes more sense, given the weird and/or eerie “Waldo-moment” in which we currently find ourselves. But the weird and the eerie are scalable, pervading broad cultural structures as well as the minutiae of personal ruminations. I’ve known Fisher as a colleague for some time. About a week after I had agreed to do this review, I heard via email of Fisher’s suicide. Someone I knew was previously there, over there, doing what they do, the way we so often presume a person’s presence in between moments of punctuated interaction. And then, suddenly, they’re not there. About a week after this, The Weird and the Eerie arrived in the mail. It was hard not to pick up the book and feel it had a kind of aura around it, as if it were some kind of final statement, a last communiqué. I had it on the table in a short stack with other books, and I kept half-expecting it to also vanish, as if its very presence there were incongruous. I would occasionally pick up the book and flip through it, as if secretly hoping to discover pages that weren’t there before. But my copy was the same as all the others. Besides, isn’t that essentially what a book is, a last word written by someone either long dead or who will die in the future? Maybe all books are eerie in this way.

    Eugene Thacker is the author of several books, including In The Dust Of This Planet (Zero Books, 2011) and Cosmic Pessimism (Univocal, 2015).

  • Ben Parker — What Is A Theory of the Novel Good For?

    Ben Parker — What Is A Theory of the Novel Good For?

    by Ben Parker

    Review of Guido Mazzoni, Theory of the Novel, translated from the Italian (2011) by Zakiya Hanafi, Harvard University Press, 2017.

    Because the novel is the most important product of modernity, any theory of the novel is also a theory of modernity. That modernity has been characterized in a variety of ways: as an unremitting catastrophe of Being—Georg Lukács’s The Theory of the Novel or René Girard’s Deceit, Desire, and the Novel; as the vulnerable legacy of humanist secularism—Erich Auerbach’s Mimesis; or epistemologically—Michel Foucault’s reading of Don Quixote as a crisis of signification. As Guido Mazzoni tells the story in his Theory of the Novel, modernity has been a long process of liberation from the implicit transcendence of collective cultural projects. We have now arrived at a moment where “the particular life represents the only horizon of sacredness that modern culture still recognizes.” Modernity is therefore the disruptive entropy of “unbelonging,” the triumph of “individualistic, anarchic, dispersive, centrifugal” forces over those of “collective transcendence.” By Mazzoni’s scorekeeping, the signal accomplishments of modernity are human rights, democracy, and relativism, but above all, “the concrete capacity to construct small spheres of autonomy.” The novel therefore marks “the entrance of democracy into literature,” because it is the vehicle par excellence of particularized private experience. Mazzoni prizes the novel for “its ability to make us see the world through the eyes and conscience of someone else, its ability to allow us to step into a possible life that is not ours.”

Given this endpoint of absolute relativism—“Each person is an epicenter of absolute meaning”—Mazzoni has to construct his history of the novel retrospectively, as a gradual disburdening of the possibility of transcendence and collective horizons. He casts this ontological flattening in the light of an inner liberation of the novel form, although it could as easily be felt as a suffocating reduction. Mazzoni describes the first two centuries (1550-1750) of the novel’s history as an emancipation from the conceptual scaffolding of allegory and moral didacticism, on one hand, and from the strict delineations of classicist poetics (tragedy depicts a higher type of character, and comedy a lower) on the other. Because he was trained as a philologist, Mazzoni plunges the reader into a slough of terminological distinctions attending the birth of the novel: le roman, der Roman, il romanzo, romanice loqui, romanz, romance, novella, nouvelle, novela, novel. But his theory of genre rests upon a dubious metaphysics: rather than timeless Platonic forms, genres are “universals in re,” knots of emerging practices bound up with contemporary definitions and prescriptions. Instead of defining “the novel” retrospectively, which would mean fitting works like Tristram Shandy and The Golden Ass into the same Procrustean bed, Mazzoni sees the genre as the outcome of a complex fusion of heterogeneous conventions and literary corpuses. His approach is to “reconstruct the dialectic between the object and the words that enabled the object to be defined in the first place.” The drawback to this method is that the definition is never immanent to the novels themselves, but is derived from the belletristic scaffolding that is Mazzoni’s preferred archive. The scholarship on display—Mazzoni seems to have read every treatise and preface from the period—is unimpeachably exhaustive, even overwhelming. We learn that Don Quixote, for example, was not welcomed into the world as a novel but as a “comic romance.” But Mazzoni declines to pursue the question: what process of generic self-definition is Don Quixote itself engaged in? Nor does he see the retrospective genealogy of the novel as in large part an invention of the novel itself (as, for instance, the shelf of books in David Copperfield’s library). In any event, the upshot of this formative period is that the novel emerges as the “book of particular life,” a record of private persons, caught up in the “anarchy of the real,” rather than idealized or public figures made into abstract examples.

Once the novel has broken free from allegory (whose political dimensions, overlooked by Mazzoni, have been detailed by Fredric Jameson), and we find ourselves in the nineteenth century, the next constraint to be discarded is melodrama. Melodrama gets painted as the bad outward form of psychology, which Mazzoni contrasts with the subtle analysis of interior life that culminates in James, Proust, and Woolf. Thus melodrama turns out to be a convenient sorting mechanism for arriving at a set of all-too-familiar preferences: Austen (but not Scott), Flaubert (but not Balzac), Eliot and Tolstoy (but not Dickens or Hugo). As with allegory, melodrama is classed as a transcendental and collective schema, averse to the finer gradations of “real life.” For melodrama, we are informed, belonged to a moment where “history had become a lived experience of the masses,” though “at a certain point this paradigm proved to be unrealistic.” It was no longer “plausible to think that people, subjects, or witnesses of an unprecedented transformation were involved in absolute conflicts.” What we have instead of large-scale history is the gradual extension of “our understanding of the interior life,” an ever-refined representational accuracy comparable to “the gains made in physics, astronomy, or anatomy.”

By the time we reach the contemporary novel, the sphere of freedom that Mazzoni wants to find in the novel has been narrowed down to the horizon of sheer everydayness. We have exchanged the wild explorations of Robinson Crusoe, Gulliver, Edward Waverley, Natty Bumppo, and Huckleberry Finn for the boredom of Emma Bovary. All we are left with is the bad infinity of “real life” in its banal givenness. Freedom is surreptitiously redefined, from the kind of “unbelonging” of the earlier mode of “lighting out for the territory,” to the unbelonging of grousing individual discontent. No surprise that the contemporary authors Mazzoni endorses are Philip Roth, J.M. Coetzee (singling out Boyhood and Youth), Michel Houellebecq (The Elementary Particles), and Jonathan Littell. He doesn’t provide a reading of any of these novels (although he does cite a negative review of Littell’s The Kindly Ones). Karl Ove Knausgaard’s My Struggle is something like an empirical confirmation of Mazzoni’s thesis about the tendency of the novel towards absolutely private particularity, absent any transcendent justification. Mazzoni’s concluding observation—“Inside our small local worlds, everything at stake has an unquestionable value”—could just as easily have been written by Knausgaard as a summary of the exhausting strife of representability at the heart of his book.

In outline, then, Mazzoni’s account recapitulates the problematic of Lukács’s Theory of the Novel—“the refusal of the immanence of being to enter into empirical life,” the pulverization of all transcendent projects—in order to render it unproblematic. What Lukács saw as “the dissonance special to the novel” was its capturing of the devastating ironies and grotesque realizations that the transcendent ideal is exposed to. For Mazzoni, however, such dissonance is simply “implausible,” a failure of perspective insufficiently immersed in the proliferating contingencies of “real life.” So, what for Lukács was the constitutive problematic of the novel—the hard-fought contest between the ideal and an inert (but ultimately victorious) reality—here turns out to be a detachable “extra” or a historical vestige. Mazzoni sees the struggle with the ideal as something that was gradually exorcised or shed during the novel’s development, as opposed to something essential to defining the genre. His argument then turns out to be another entry in the “end of grand narratives” narrative, or an instance of what Alain Badiou calls “democratic materialism”: we no longer believe in any Truths striving to be realized in the world, only in local particulars. With oracular resignation, Mazzoni announces that, starting with some generalized metaphysical eclipse in the nineteenth century, “Universal forces were no longer revealed in the experience of private persons.” One imagines him lecturing the great characters of fiction, like a stern guidance counselor, for their stubborn lack of realism, in those moments of Lukácsian “dissonance” where they confront a churning abyss of unbearable meaning underlying an ongoing and inessential life: Don Quixote for attempting to revive chivalry by mounting his gaunt nag and donning a pasteboard visor; or Catherine Earnshaw for proclaiming, “I am Heathcliff!”; or Captain Ahab for hurling himself against the whale as striking at some “inscrutable malice” behind a mask; or Marlow for detecting, in the depths of the Congo, “the stillness of an implacable force brooding over an inscrutable intention.”

To be sure, Mazzoni’s claim that the novel has freed itself from the transcendental has the force of self-evidence, if one surveys contemporary fiction. Mazzoni’s reading of novels in English cuts off at 2002, but (in addition to Knausgaard) Chris Kraus, Sheila Heti, Ben Lerner, and Rachel Cusk would all be pertinent here, as instances of flattened, quotidian perception, where the “microcosm” of private existence—voided of melodrama or narrative artifice—is elevated to “absolute importance.” Going further back, one could add other instances. John Updike, Frederick Exley, and Renata Adler come immediately to mind. Mazzoni doesn’t mention Norman Mailer, who is on quite another track, but whose “nonfiction novel” would be additional confirmation of the novel’s tendency to represent a reality divested of transcendent impulses. (At this point, however, one wonders whether it is not fictionality itself that represents the final burden of transcendence, and whether Mazzoni’s sense of “the novel” is not just headed towards the documentary status of journalism, memoir, travel writing, etc.)

On the other hand, some of the most acclaimed novels of recent years have resuscitated either melodrama (Hanya Yanagihara’s A Little Life), or transcendental (religious) preoccupations (Marilynne Robinson’s work), or allegory (Yann Martel’s Life of Pi). To remark that these works are also somewhat middlebrow and embarrassing would introduce a dimension of aesthetic evaluation that Mazzoni never broaches. It’s worth noting, too, that Mazzoni’s own examples are not unproblematic. Although Houellebecq’s The Elementary Particles does duty for Mazzoni, his more recent The Possibility of an Island and Submission don’t fit the pulverization-of-collective-transcendence thesis at all. Houellebecq emerges, instead, as an (unevenly satirical) utopian thinker, closer to Jonathan Swift in the Houyhnhnms section of Gulliver’s Travels than to Roth’s Zuckerman novels. Mazzoni also cites the autobiographical novels of J.M. Coetzee, but his latest novels, The Childhood of Jesus and The Schooldays of Jesus, whatever else they may be, are obvious violations of Mazzoni’s rule against allegory.

The unbearable scene he cites from Buddenbrooks, when little Hanno draws two lines under the last entry in the family tree, muttering, “I thought… I thought… there wouldn’t be anything more,” is indeed a powerful image of finitude. But Mann then went on to write the highly allegorical The Magic Mountain and Doctor Faustus. Dostoevsky is invoked in a number of contradictory ways—he is, on one hand, one of the first authors who is “still contemporary,” because of his techniques of characterization, but on the other hand, he presents a regrettable and lingering case of melodrama. What is never mentioned is that Dostoevsky’s oeuvre, from start to finish, is shot through with transcendental preoccupations. To take only the case of The Brothers Karamazov, what does one make of the beautiful moment in the final chapter, where the father of the slain child Ilyusha sees a flower fall on the snow, and rushes “to pick it up as though everything in the world depended on the loss of that flower”? This sense of absolute responsibility, of “everything in the world” depending on one’s posture towards salvation and loss, is the hard core of Dostoevsky’s meaning. If Mazzoni wants to insist that “we cannot go beyond” our immersion in factical being, that it is “the sole layer of existence that… distinguishes us from nothing,” then he will have to lose The Brothers Karamazov as a forward-looking work.

    I wrote above that the novel is the most important product of modernity. I forgot to add that modernity is in large part the product of the novel. The novel is one of the “workshops where ideals are manufactured,” to take an image from Nietzsche’s Genealogy of Morals. For instance, the continuous and rigorous thinking of responsibility throughout the novels of the Victorian period (paradigmatically, Great Expectations, Tess of the D’Urbervilles, and Lord Jim) constitutes as central a development of our ethical life as the subsequent Freudian theorization of same. The self-representation of the nineteenth-century social imaginary is largely created through the ways novels develop of “giving an account of oneself,” in Judith Butler’s phrase. The ultimate trouble with Theory of the Novel is that Mazzoni oscillates between seeing the novel as a co-creator of modernity, whereby “an essential aspect of the Western form of life takes shape and becomes an object of knowledge only through mimesis and fiction,” and seeing the novel (or cultural production as a whole) as validating (or falling into line with) larger systemic results, e.g. “the disintegrative force implicit in modern individualism,” or “the relativistic deflation of collective values.” We don’t know, finally, whether the Western “crisis of transcendence”—what for Lukács was an ongoing schism constitutive of the novel form—is simply a fait accompli restricting literary possibility, or whether one might hold the history of the novel itself accountable for this disintegration. Nor does Mazzoni see the novel as a possible reflection upon these outcomes, a perspective-taking that would refuse the enforcement of deflationary relativism.

    But might not the greatest novels be precisely such refusals? To return again to The Brothers Karamazov, we find there (in the remembrances of Father Zosima) a forestalling of Mazzoni’s conclusions, in almost identical language: “For all men in our age are separated into units, each seeks seclusion in his own hole, each withdraws from the others, hides himself, and hides what he has, and ends by pushing people away from himself… He is accustomed to relying only on himself, he has separated his unit from the whole, he has accustomed his soul to not believing in people’s help, in people or in mankind.” For Dostoevsky, at least, the novel is not a story of emancipation from transcendence. If the novel has nevertheless brought about this anomie and purgation of values, the novel goes on only in a perpetual fight against what it hath wrought.

    Ben Parker is assistant professor of English at Brown University. His current research is on recognition scenes in the nineteenth-century novel. He tweets @exyoungperson.

  • Sarah Brouillette — Couple Up: Review of “Family Values: Between Neoliberalism and the New Social Conservatism”

    Sarah Brouillette — Couple Up: Review of “Family Values: Between Neoliberalism and the New Social Conservatism”

    by Sarah Brouillette

    Review of Melinda Cooper, Family Values: Between Neoliberalism and the New Social Conservatism (New York: Zone Books, 2017)

    The basics of neoliberalism are by now well known. Pressured to be wary of public deficit spending, and trying to find ways to rejuvenate depressed economies, neoliberal governments cut spending on welfare and other social services, and turn the programs that do remain into job training “workfare.” Policies at the same time shift to give priority to the needs of businesses wanting to keep wages low, to offshore production, and to make few or no commitments to workers. The power of unions is undercut as a result, so it is decreasingly possible to look to that form of collectivity as a shelter.[1] Politicians, advisors, sympathetic management consultants and business professors meanwhile emphasize private initiative and personal merit as the keys to success. As a result, work has been trending toward the less regular, less routine, less secure, less protected by union membership, with wages stagnant and less likely to be supplemented by things like affordable public education, low rents, tax credits, and childcare benefit payments.

    The working individual suited to this environment will naturally possess certain traits, as people are encouraged to look to themselves for more and more of what they need. Everything becomes a matter of personal responsibility: invest smartly for the future, take out a loan to pay for college, be your own brand, find your joy, “live your life.” If there is a culture of neoliberalism, it is all about interiority and the individual psychic life: therapeutic culture, because there is little state funding for mental health treatment. Find out who you really are, do what you love, look within, take your natural resilience as the base of every struggle and its overcoming; experience setbacks, Pop Idol style, as welcome occasions to overcome every hurdle. Self-improve. Self-actualize.

    The causal relations are sometimes murky and eminently debatable. Don’t governments in fact fund wellness initiatives, especially targeting underprivileged communities? And what about all the counternarratives emphasizing the necessity of communities coming together – the British Tories’ “Big Society,” for instance? But the general account of neoliberalism is quite uniform. It pinpoints the force of biographization, responsibilization, individualization, self-management, a DIY ethos, and customization of personal preference as the lifeblood of the neoliberal order.[2]

Against all this, Melinda Cooper’s Family Values: Between Neoliberalism and the New Social Conservatism argues that the key social unit of neoliberalism is not the individual but the family, and not just any family but the family in perpetual crisis. She presents the postwar Fordist family wage – basically, a state-backed wage high enough to support a family with only one parent working – as a “mechanism for the normalization of gender and sexual relationships” (8), and for this reason sees no cause to lament its demise. As an “instrument of redistribution,” she writes, it “policed the boundaries between women and men’s work and white and black men’s labor” and was “inseparable from the imperative of sexual normativity” (23). “Few African American men enjoyed the family wage privileges of the unionized industrial labor force,” and their disproportionately high unemployment is evidence of the “multiple exclusions serving to define the boundaries of state-subsidized reproduction” (35-6).

    Just as the “Fordist politics of class … established white, married masculinity as point of access to full social protection” (23), the fundamental concern for neoliberals like Gary Becker was how to respond to the breakdown of this masculinity and the family built around it. “Neoliberals are particularly concerned about the enormous social costs that derive from the breakdown of the stable Fordist family,” Cooper argues. They aim “to reestablish the private family as the primary source of economic security and the comprehensive alternative to the welfare state” (9). Basically, they want the traditional family intact as a compensation for precarity.

The data show that in the neoliberal era private family wealth is increasingly decisive in “shaping and restricting social mobility” (125), and this is a result of concerted policymaking. In the 1960s, inflation eroded the wealth at the top tiers, as it translated into the deflation of financial assets. Inflation was at the time understood in precisely this way, as a redistributive tax, “intensifying progressive tendencies” of the period: “Free-market economists insinuated that inflation was a form of state-sanctioned fraud – a covert tax designed to extort wealth from investors and transfer it to the lower classes” (127). The neoliberal “paradigm shift in American fiscal and monetary policy” set about ending this redistributive movement. If the Employment Act of 1946 wanted to “promote maximum employment, production and purchasing power,” where wage and price inflation were understood as signs of growth and as “benign trade-offs to full employment,” the neoliberals overturned all this.

    Figures such as Milton Friedman and Paul Volcker “turn[ed] inflation-targeting into the prime objective of monetary policy,” thus restricting the money supply and pushing up interest rates. Whereas bondholders in the 1970s saw assets depreciate and the Federal Reserve “deferred to the interests of unionized labor and welfare constituencies,” in the new era the Fed would strive “to repress wages and consumer prices in the service of asset price appreciation.” These policies led to a sure turnaround in the distribution of national income; the “share of national income flowing to financial investors went from negative or stagnant in the 1970s to ‘substantially positive’ in the 1980s”; while “labor’s share of national income declined proportionately” (134). By 1983, Cooper writes, “wealth concentration had reverted to its 1962 level and by the end of the decade had plummeted to levels comparable to 1929” (135).

There has thus been, at the top tier, a massive “resurgence of large family fortunes” (137). Nearly everywhere else, though, with stagnant wages, unemployment, and the transfer of the costs of things like higher education and health care back to families, lack of access to familial wealth can condemn one to a lifetime of debt. Hence Cooper’s argument about the importance of the family: intergenerational familial support in the form of housing, or money, or willingness to be signatories to loans, is a neoliberal necessity for many, and the pressure to combine dependence on parents with married coupledom just compounds the effect.[3] According to statistics gathered by the Pew Research Center, 1960 was the year in which people under 25 were most likely to live independently. In more recent decades, however, young people have been exhorted to invest in the future, save for retirement, and acquire assets (houses and university degrees). At the same time, and often in relation to this, they have been forced into debt and into insecure employment. No wonder they are more inclined to live with parents or partners. Of course, there is such a thing as a non-normative family, and perhaps living independently from relatives is not something we should unduly idealize. Cooper’s interest, though, is in what sort of family arrangements government programs prefer, and how preferences shift given combined pressure from neoliberal economic policy and the new social conservatism. We will return to her idealization of independence, however.

    The more common argument, of course, is that neoliberalism is destructive to family life, as it encourages workers to be “low drag,” moveable, flexible, always working, losing any sense of a private life outside of work, and also alone in leisure in front of a personally selected entertainment service displayed on a privately watched device. Yet, as Annie McClanahan has recently argued, not many people are really these footloose mobile workers.[4] For most employers, it is probably more important that those they hire be replaceable than that they be mobile. Only workers in relatively elite sectors (high tech, higher education, entertainment) are in a better position if they can move from thing to thing without worrying about family obligations.[5]

This is not to deny that there is now also a more general animus against the restrictions and burdens of family life – the boredom of marriage, and drudgery of raising children (all captured so well by a show like Mad Men, for instance, which crystallizes the individualizing ethos so perfectly). However, there is just as much pressure to maintain the bonds of coupledom, and this tension between rejection and embrace may in fact be the point worth emphasizing. It seems that people are increasingly wondering if marriage is “worth it,” while decreasingly being able to exit it, and this is a cause of general anxiety, finding outlet in things like the dating site for adulterers, Ashley Madison, which was notorious for a minute in 2015 after its user data was stolen. When it turned out that most of the male customers were at least some of the time corresponding with bots rather than real women, I couldn’t help thinking that in a way it didn’t matter: the point is that users find an outlet for their sense of being stuck in a social relation (marriage!) on which they are dependent. Indeed, the bot’s lack of reality, lack of availability, is what makes the “affair” appealingly nonthreatening to the user’s IRL relationships. Moralistic attacks on these men – the fact that some of those caught are family-values conservatives is, to be sure, a rich irony – miss the point: they are not having affairs; they are staying in unhappy marriages that they depend on in various ways.

    They depend on marriage because it is still the normative standard for people (if you aren’t married there is something wrong with you; if you don’t have kids you are deviant in some way). They depend on it in that they can’t afford a house without two salaries, because for tax purposes it is better to be a legally recognized couple, because the lifestyle they aspire to requires it, because caring for children alone is very hard, because shifting work hours and temporary contracts make the second salary a necessity, even if it too is precarious. They depend on it because they are too tired and generally physically weary to try to have any other sort of relationship. Being non-normative can feel like SO. MUCH. WORK. A film like 2009’s Up in the Air makes the point very well: the protagonist is the epitome of the roving high-powered executive entrepreneur (indeed, his job is to fire people), but his story is not a celebration of the escape from normativity. It is rather a lament about the psychic misery of solitude. The message is clear: couple up!  

How did the family start to lose its normative power? For Cooper, conservatives skewering feminism, and more leftist thinkers trying to understand the foundations of neoliberalism, are in agreement about the force of 1960s and 1970s countercultural and antinormative critiques of the family. In Wolfgang Streeck’s analysis, the revolution in family law and intimate relationships – for example, the availability of no-fault divorce – destroyed the Fordist family wage because women were no longer stuck in the kitchen dependent on men. The family became a more flexible form because, in Cooper’s paraphrase of Streeck, feminists sought “an independent wage on a par with men,” eventually “transforming marriage from a long-term, noncontractual obligation into a contract that could be dissolved at will” (11). Cooper reads Eve Chiapello and Luc Boltanski’s argument as similar, in that they show how “the artistic left prepared the groundwork for the neoliberal assault on economic and social security by destroying its intimate foundations in the postwar family” (12). She quotes Nancy Fraser, also, who has written that “critique of the family wage … now supplies a good part of the romance that invests flexible capitalism with a higher meaning and moral point” (12). In each case, the idea is that feminism is somehow to blame for neoliberalization, because in seeking to free women from certain kinds of normative obligation and dependency, feminists have demonized dependency in general, fetishizing independence from supports of any kind. Against these analyses, Cooper asks: what breakdown of the family, anyway? The apparent post-normativity of contemporary life is entirely compatible with the establishment of new norms. We continue to be form-determined even after we no longer see social forms’ normative force. Put simply: the traditional family, which for Cooper is a family coerced into existence by exigency and normativity, is not broken enough.

The depressed economy no longer affords the state-supported Fordist wage, but the family is re-inscribed and reformulated even as it is queried and undermined by antinormative movements. If the foundations of neoliberal policy are thoroughly economic, neoconservatism enters Cooper’s account as a largely compatible reaction formation. The neoconservative agenda, formed deliberately against the liberation movements of the 1960s and their challenge to the normativity of the traditional family, served neoliberalization far more than the countercultural left’s challenges to social convention. Cooper argues that, whereas nostalgia for the Fordist wage became a “hallmark of the left,” neoconservatives, allied with thrifty neoliberals, preferred “the strategic reinvention of a much older, poor-law tradition of private family responsibility.” In a policy formation that reflected both neoliberal and neoconservative thought, social welfare was not to disappear, but instead to be made into “an immense federal apparatus for policing the private family responsibilities of the poor” (21).

As a public assistance program targeted at the noncontributing poor – workers paying into funds that would support them in the event of unemployment were always more palatable (34) – the fate of AFDC (Aid to Families with Dependent Children) is one of Cooper’s main cases. It allows her to show how social welfare extended to the poor – especially to single women, especially mothers, especially black mothers – became “associated with a general crisis of the American family” (29). As the composition of the program changed, with the number of African American women signing up outpacing that of white women, and with divorced or never-married women joining the rolls, fears were heightened. Because “racial and sexual normativities were truly foundational to the social order of American Fordism, determining just who would be included and who would be excluded from the redistributive benefits of the social wage” (36), the inclusivity evident in the 1960s in the AFDC’s provision for non-married mothers proved to be short-lived. Arguments for reinstating the stability offered by the traditional family had significant influence at this juncture.

Nor were these arguments solely made by conservatives. In the 1960s there was in fact significant leftist promotion of the African American male-breadwinner family and a related impetus against “non-normative lifestyles of unattached African American women” (37); hence the tendency to identify the AFDC as a cause of family breakdown while promoting the “male breadwinner’s wage” (41). An article by Richard A. Cloward and Frances Fox Piven, published in The Nation in 1966 and presented as “a strategy to end poverty,” laments that the state was “substituting check-writing machines for male wage earners,” thereby “robb[ing] men of manhood, women of husbands, and children of fathers.” The authors continue: “To create a stable monogamous family, we need to provide men (especially Negro men) with the opportunity to be men, and that involves enabling them to perform occupationally” (qtd. 42).

What they saw were “perverse disincentives to family formation built into the AFDC program” (43), whereas women left more to their own devices would naturally be more likely to find men to support them. With the 1970s economic downturn, and anxieties directed at inflation in particular, the program became a touchstone for neoconservatives formulating their “new political philosophy of non-redistributive family values” (47). While neoliberals “called for an ongoing reduction in budget allocations dedicated to welfare—intent on undercutting any possibility that the social wage might compete with the free-market wage,” neoconservatives advocated an expanded role for the state in regulating sexuality. On both fronts, the point was the urgent necessity of “reinstating the family as the foundation of social and economic order” (49).

    Cooper discusses Milton Friedman’s concern that the “natural obligations” that “once compelled children to look after their parents in old age” have given way to “an impersonal system of social insurance whose long-term effect is to usurp the place of the family” (58). Friedman wrote that whereas once “Children helped their parents out of love or duty,” they now “contribute to the support of someone else’s parents out of compulsion and fear” (qtd. 58). State-based redistribution was a poor substitute for proper familial support and wealth transmission. For Gary Becker, also, the postwar welfare state destroys the “natural altruism of the family” (60). Becker’s theory of human capital is perhaps the premier theorization of individual self-management and self-appreciation. Michel Foucault treated Becker’s work as exemplary of the way that neoliberal analyses entail “replacement every time of homo economicus as partner of exchange with a homo economicus as entrepreneur of himself, being for himself his own capital … a capital that we will call human capital inasmuch as the ability-machine of which it is the income cannot be separated from the human individual who is its bearer.”[6] Becker also featured recently in a Merriam-Webster tweet of the term “human capital” – “turning people into statistics since 1799,” the tweet quipped – which linked to the full dictionary entry, where Becker’s work is cited as “taking a holistic view of a person’s life and experiences as they can be applied within the workforce.” Becker took personal investment in one’s own human capital appreciation as preferable to state investment (the benefits of high human capital only accruing to oneself, after all), and thus supported rising tuition costs and the student loan industry as a major part of the growing importance of private credit. Yet Cooper shows that his arguments also preferred a supportive wealth-generating family: the older generations would back student loans where necessary, as they naturally want children and grandchildren to bear human capital that self-appreciates at a greater pace and with results that are more lucrative. Becker celebrated Ronald Reagan for restoring kinship bonds.

Reagan drastically cut the AFDC, before Bill Clinton eliminated it. It was replaced with the TANF program (Temporary Assistance for Needy Families), whose availability was contingent on states’ willingness to track down and enforce paternity obligations. TANF’s defenders claimed it was better for a woman and her children to be reliant on alimony and child support than to turn to the government for assistance (67). Here we get to the heart of Cooper’s refutation of the idea that neoliberalism privileges the footloose free agent. In fact, in her account, neoliberalism is more likely to pressure people to sustain unhealthy and unsustainable family and intimate relationships, including tying children to fathers who do not know them or want them. Clinton’s extensive welfare reform reflected and codified what she calls “a new bipartisan consensus on the social value of monogamous, legally validated relationships.” His government reformed welfare spending while devising “initiatives to promote the moral obligations of family, including a special budget allocation to finance marriage promotion programs and … bonus funds to states that could demonstrate that they had successfully reduced illegitimate births without increasing the abortion rate” (68). Barack Obama’s “healthy marriage and responsible fatherhood” initiatives continued in this direction.

Cooper suggestively connects these initiatives to the “first experiment in federal relief ever implemented by Congress”: the 1865 creation of the Freedmen’s Bureau following the Emancipation Proclamation of 1863. Before 1863, slaves were precluded from legally sanctioned marriage. The Freedmen’s Bureau instructed that freedom to participate in the labor market came with “the right to marry and the responsibility to support wife and child” (79). Its support for freed slaves entailed a vigorous campaign to promote marriage, with Bureau agents authorized to perform marriages and a “sustained pedagogy of domestic life, schooling men in the notion that they were to become the breadwinners of the family and women in a new kind of economic dependence” (80). There were penalties for people cohabiting without marriage, and Bureau-assigned wage scales penalized women, precisely because of the “social costs of dependency” that fell upon the state if forced to support unmarried women and their children (81).

    Like the more recent insistence that women secure alimony and child support before turning to welfare, these policies empowered men to assert rights over women and children. Indeed, the assumptions upon which they were based were not fundamentally threatened until the 1960s liberalization of family law, which made divorce easier and eased the stigmatizing of non-marital unions and cohabitation. “For an all too brief moment,” she argues, “revised AFDC rules allow divorced or never-married women and their children to live independently of a man while receiving a state-guaranteed income free of moral conditions” (97). That moment is over, however. “The modern child support system serves to demonstrate that the state is willing to enforce—indeed create—legal relationships of familial obligation and dependence where none have been established by mutual consent,” Cooper writes (105).

    We should pause here now on the figure of the never-married woman living independently thanks to welfare. Cooper argues that, in a context of relatively healthy public welfare spending, and of the pressures put on states by countercultural and antinormative activisms, there was a time when social welfare was “making women independent of individual men and freeing them from the obligations of the private family” (97). Hence, the fuel for the neoconservative backlash that soon followed – a backlash that gained traction because of the failing economy to which neoliberals were also turning their attention. A perfect storm. Yet Cooper’s celebration of the period in which social welfare possibly freed women from the constraints of marriage has her falling back into the trap she dismantles elsewhere: nostalgia for state provision.

The image of the single woman with children, living with a state-based income “free of moral conditions,” reads as an idealization. Certainly, supporting children as a single parent on welfare has never been a cakewalk; and are we meant to conclude that “freeing” men from the burdens of paternity is an unalloyed boon to women? She needs this figure, though. Cooper’s idealization of the state-supported single mother alerts us to the fact that her ultimate objection is not to social welfare but rather to the restriction of its benefits to the Fordist white male breadwinner, and to the way welfare programs get tied to normative policies and programs emphasizing the preferability of turning to family, especially marriage, to marshal the necessary resources to get by.[7] She avoids the stronger critique of social welfare, which might emphasize the global accumulative regime and resource extraction on which US prosperity was built, how nation-based welfare disperses the benefits of prosperity to some and not others, and the welfare state’s various regulatory and pacifying functions.[8]

    Does neoliberalism feel different to some people simply because it follows on the moment of postwar prosperity and the relatively expansive Keynesian social welfare that flowed from it, in which there was palpable faith in the civic virtue attending government spending on social programs? Neoliberal policies have threatened protections and comforts that these programs offered to some people – people like American and British university professors, who produce the analyses of the unique wrongs of the neoliberal order. Is all the worry about neoliberalism just a symptom of the decline of the hegemony of liberal democracy?

The economy that supported the pre-neoliberal era of relatively high wages, and relatively generous public deficit spending on welfare and education, was also hugely resource extractive and suburbanizing. The capacity to redistribute wealth more evenly in the US was, in addition, contingent upon broader economic transformation that required dispossessions, expulsions, enclosures, primitive accumulations, US hegemony propped up by global wars, and the origins of the whole phenomenon of US industrial triumph after WWII in wartime accumulation and relative devastation across Europe.[9] Wherever one looks, the accumulation of wealth requires these devastations, making even the lushest times of the AFDC, and the possibility for a temporary flourishing of alternative kinds of family structures, into a troubled gain. For these reasons, it may be that work that avoids the terminology of neoliberalism, or uses it warily – work by Endnotes, by Silvia Federici, or by Robert Brenner, for instance – provides better purchase on contemporary conditions. Because when they fail to name the fundamental, global, totalizing causes of policy shifts, accounts of neoliberalism miss the ruthlessly intensifying dynamics of capital accumulation that are simply propelled onward with extended credit.[10]

Finally, if Keynesian social welfare is a wage supplement designed to encourage consumer spending, in what sense is it wise to pit it against the dominance of commerce and private interests? If extensive public deficit spending on social programs and neoliberal monetarism are just different ways of managing the economy, and if one takes the capitalist economy as fundamentally anathema to universal human flourishing, to what extent should we worry about the difference that neoliberalism makes? Family Values doesn’t quite answer these questions. However, it does do the crucially important work of historicizing the rise of private credit in relation to family-values conservatism, and dismantling the left-liberal tendency to lament neoliberalization because it clawed back the gains of the immediate postwar period. Without suggesting that no gains were made, Cooper shows how they were thoroughly mitigated by normative racial, sexual and gender ascription – ascription that determined how to divvy up Fordism’s generous provisions, and that continues to push people, especially the already suffering, into unwanted contracts in life and work.

    Notes

    [1] For a recent account along these lines see Wendy Brown, Undoing the Demos: Neoliberalism’s Stealth Revolution (New York: Zone Books, 2015).

[2] See for instance Ronen Shamir, “The Age of Responsibilization: On Market-Embedded Morality,” Economy and Society 37.1 (2008): 1-19.

    [3] I discuss Cooper’s blistering account of the student loan industry elsewhere.

    [4] Annie McClanahan, “Becoming Non-Economic: Human Capital Theory and Wendy Brown’s Undoing the Demos,” Theory & Event 20.2 (2017): 510-519.

[5] Even scholars suggesting that, in being less interested in keeping people in regular work, crisis-era capitalism allows for “queer liberation” from cis-hetero norms, insist in the next breath that some elements of queer life are tolerable and easily assimilated – think pinkwashing and gay marriage – and some are not.

[6] Michel Foucault, The Birth of Biopolitics: Lectures at the Collège de France 1978-1979, trans. Graham Burchell (Palgrave, 2008): 226.

    [7] In an earlier work, where the figure of the state-supported single mother is absent, her take is more ambivalent. She argues that the welfare state “undertakes to protect life by redistributing the fruits of national wealth to all its citizens, even those who cannot work, but in exchange it imposes a reciprocal obligation: its contractors must in turn give their lives to the nation” (Melinda Cooper, Life as Surplus: Biotechnology and Capitalism in the Neoliberal Era [University of Washington Press, 2008]: 8).

[8] Gavin Walker has recently argued that “the function of ‘welfare’ within capitalism has never been something separate from its workings; rather, it is something co-emergent and central to the operation of the capital-relation itself”: “Rather than being a political development in which capital’s violence is ameliorated through social spending, we should rather understand the welfare state as the primary mechanism through which the process of primitive accumulation can be continuously sustained in the advanced capitalist countries” (“The ‘Ideal Total Capitalist’: On the State-Form in the Critique of Political Economy,” Crisis & Critique 3.3 [2016]: 434-455).

    [9] For an account along these lines see “Misery and Debt,” Endnotes 2 (April 2010): 20-51.

    [10] I owe this point to discussion with Tim Kreiner.

  • Naomi Waltham-Smith — Review of “Sonic Intimacy: Voice, Species, Technics (Or, How to Listen to the World)”

    Naomi Waltham-Smith — Review of “Sonic Intimacy: Voice, Species, Technics (Or, How to Listen to the World)”

    by Naomi Waltham-Smith

Review of Dominic Pettman, Sonic Intimacy: Voice, Species, Technics (Or, How to Listen to the World) (Stanford University Press, 2017)

What if the world had a voice? What would a world suffering under the burden of human dominance over the environment—what would that geological epoch known as the Anthropocene—say to us? Dominic Pettman asks us to imagine such a world in which not just human beings or animals but all living things and inanimate objects, and even virtual technologies, have voices. Sonic Intimacy invites us to tune into the seductive voice of an OS in Spike Jonze’s 2013 film Her, the swansong of the Sirens, the meowing of a cat, the melancholy songs of a lonely whale, the wind in the trees, even “the imploring squeal of a garden gate, crying out for oil” (49). This is a world in which listening, too, is not confined to human ears. In Pettman’s book, listening is even extended beyond the animal world in a range of examples both banal and symbolic: if mothers listen to their daughters’ voices on the phone and dogs to His Master’s Voice on the gramophone, lamps also prick up their ears at the clap of a hand and microphones listen for algorithmically determined shapes in order to identify specific words or even voices.

Pettman’s call to hear those other voices and thus become those other kinds of listeners stems to no small degree from our deafness to what is arguably the greatest threat the world faces today and to the human and ecological crises that climate change is already precipitating. “Alarmed scientists try to tell us on a daily basis,” Pettman points out, “that we are not listening to the earth, which is—elliptically perhaps, and in its own cryptic way—trying to tell us that it is in trouble” (6–7). He argues that in the ongoing calamity that is the Anthropocene, it is vital that we challenge anthropocentric constructions of the voice and of the ear. If there is one main target in Sonic Intimacy, it is human exceptionalism. This critical outlook has shaped Pettman’s work in post-humanism more generally. For instance, the recent Creaturely Love observes how the images of human desire we construct tend to disavow our own animal natures.[1] Pettman’s earlier Human Error (published in 2011) explored mistaken efforts to define humanity in its opposition to machines and instead posited a cybernetic triangle of human, animal, and machine so as to decenter the human.[2] Humanity’s species-being, as he argued in that book, had become “specious-being,” not simply a mistaken identity, but the mistake of identity.

Each of Sonic Intimacy’s four chapters explores a voice that is, if not post-human, in some way more or less than human—a negation of the human. The first, devoted to the voices that speak to us from machines, centers on a discussion of Jonze’s Her, in which a heartbroken man falls in love with his operating system “Samantha.” The film illustrates that bodies do not simply produce voices; conversely, voices can also produce bodies. As an awkward scene in which Samantha ventriloquizes the body of a mute stranger shows, acousmatic voices can be more involving and erotic than actual bodies. In this way Pettman establishes the idea of a sonic intimacy that is intimate precisely in having shed its physical presence. This observation leads Pettman to seek to explain the absence of “aural porn” on the internet (yes, dear reader, such are the surprising twists and turns of this riveting book!). If the voice, untethered from the overdetermined female body, were allowed to circulate unchecked, it would threaten the entire patriarchal system—a system that depends precisely on the exclusion and capture of an inarticulate cry consistently coded as female or animal. Hence—paving the way for the next chapter on the gendered voice—there exists a voyeuristic regime of listening that “wrenches a sexual sound from the body of the other” (21) in order to gratify the male listener with an assurance of their subjective agency.

In this logic we can discern a trace of the critique of sovereignty advanced by Giorgio Agamben, a thinker whom Pettman evokes on more than one occasion and who, like Pettman, takes his inspiration from the deconstructive logic of exappropriation. Deconstructive essays such as Jacques Derrida’s “Tympan,” for example, suggest that philosophical listening does not simply exclude its outside but seeks to master it and make it its own. But Agamben’s point—as Pettman acknowledges in a note referencing the book Echolalias by Agamben’s translator Daniel Heller-Roazen (100n17)—is that what appears to be outside language is in fact its condition of possibility.[3] As Agamben argues in Language and Death, meaningful human speech can only emerge on condition that the inarticulate animal cry withdraws. Philosophy, though, has traditionally forgotten precisely this withdrawal that makes language possible (what Derrida calls the withdrawal of the withdrawal) and has imagined in its place a bodily presence that appears to lie beyond the bounds of the linguistic. Agamben on the contrary argues that the apparently non-linguistic is nothing other than the pure possibility of language that goes unheard in every act of speaking.[4]

That much of this theory remains in the background leaves Pettman free to write engagingly without getting mired in thorny philosophical debates. Keeping the sustained theorizing largely underground lets Pettman’s prose sparkle. Provocative ideas flow with one intriguing example after another, but this is one of the moments when I would have welcomed a more rigorous corps-à-corps confrontation with Agamben’s theory of Voice. Agamben has a lot to say about what happens when the disavowed condition of possibility begins to circulate in an autonomous sphere—something he specifically connects to analyses of the glorious body, of commodification, and of pornography. Agamben’s commodified body is detached in the pure spectacle from its sacralization, its ineffability and its legally and culturally authorized uses and hence appears as a pure potentiality for new uses. How could Pettman develop Agamben’s reflections on pornography that have always focused on the visual, shifting the focus from visibility to audibility? And how would he situate his own arguments in relation to Agamben’s efforts to dislocate the aporias of metaphysics? When, at the beginning of the book, Pettman recalls the prenatal experience of sound, how does this compare with Agamben’s notion of infancy (referenced only in passing at 108n5)? There is little discussion—with the possible exception of Hedy Lamarr’s silent on-screen orgasm—of voices that hold their capacity to sound in reserve.

    Pettman turns in the third chapter to the animal voice. In a chapter indebted to the late Derrida’s ideas on animality, the highlight is a scene with a cockatoo that Pettman contends “deconstructs the cherished metaphysics of (humanist) presence, far more economically and effectively than Derrida does in his writings” (62). The cockatoo was adopted by new owners after a bitter divorce but continues to reenact the no doubt traumatizing arguments it was forced to witness in its previous life with an invective of curse words hurled out with a bitter tone and even the aggravated body language of rejection and resentment. This scene illustrates the difficulty of assigning an owner to the voice: while it is on one level the bird’s voice, audible and present in the room, it also brings to life vividly the original arguing couple. This cockatoo, like the parrot that betrays its owner by reproducing the salacious sounds of the porn he secretly watches, reveals that it is not just imitative animals who are ventriloquized, but we humans too, especially “when we are in the ecstatic, agonistic throes of jouissance or fury.”

    From this Pettman draws the conclusion—albeit one that is hardly new—that there is no simple hierarchy of human over animal, for humans can readily be “reduced” to the “animalistic” under the pressure of certain circumstances. The more thoroughgoing Derridean point that this scene makes—one that Pettman hints at without saying it explicitly—is not only that the human-animal opposition may be deconstructed but that this moreover hinges on a more radical deconstruction of the proper tout court. There is no proper human voice not because humans sometimes cry out in animal voices or because animals sometimes seem to speak to one another. Rather, it is impossible to decide between the two because there is no voice that belongs to any of us, whether human or animal.

    Against a tradition that reserves meaningful speaking and listening as a uniquely human privilege, Pettman thus calls in the final chapter for us to lend our ears to all the voices of the earth, to the vox mundi in which all manner of creatures, entities, and phenomena are present to us. In this Pettman reveals that his concerns are not simply ecological or political but are also properly philosophical, even if he is sometimes coy about asserting this ambition. In other words, Pettman is interested in how Being is present to us as a voice—how it exists for us as we listen to those voices. To this extent, Sonic Intimacy is, despite the framing it often adopts, not chiefly about issues of technology, ecology, or desire. Rather, these themes become occasions to pursue an unashamedly philosophical project: that is, the deconstruction of the metaphysics of voice. Pettman thereby continues a sequence that extends from Heidegger through French deconstruction: philosophy as listening to Being.

    The parenthetical description in the subtitle, “Or, How to Listen to the World,” reveals that there is one philosophical voice in particular that commands Pettman’s attention, even if it is not given the sustained hearing that one might expect. It is Jean-Luc Nancy who tells us, in the face of a rampant globalization that renders the world uninhabitable, that, to be a part of a world and not a mere agglomeration of wealth, we must “share a part of its inner resonances.”[5] Only then can the world take place and can we inhabit it. There are tantalizing references to Nancy scattered throughout the text. There’s a brief mention of his conception of ontology as resonant referral to explain the expropriation of the voice (44–45), and later there’s an unacknowledged and undeveloped evocation of Nancy’s phrase “birth to presence” (89).

    Pettman writes frequently of acousmatic voices, like the cockatoo’s, in which the actual sounding is separated from its source. It is tempting, therefore, to imagine Nancy as a kind of disavowed ventriloquist, for Sonic Intimacy—deliberately mixing metaphors here to show the contact between resonance-as-spacing and touch—has Nancy’s fingerprints all over it. The Birth to Presence begins precisely with the same question of defining the human that preoccupies Pettman. The epoch of representation, suggests Nancy, originates with human exceptionalism, with the moment at which the human species-being acquired its identity by virtue of one defining characteristic or another. “There is, perhaps, no humanity (and, perhaps, no animality),” wonders Nancy, “that does not include representation.”[6] The task is to think the unraveling of this limit, to think “what, in man, passes infinitely beyond man.” So, if Nancy asks what it is in the human that exceeds the bounds of its exceptional determination, Pettman examines how the exceptional exceeds the bounds of its human definition and thus dissolves the exception. For example, if the human is defined by having a voice, there is part of the human that is not exhausted in its vocality, and there is part of vocality that is not exhausted by the category of the human. Voice and humanity do not coincide. These are two faces of a mutual contamination. Humanity is thereby liberated from its phonocentric determination and vocality spills over the edges of the human into animal cries and the sounds produced by plants, inanimate objects, and intangible algorithms—disseminated throughout the univocity of the vox mundi at large.

    Nancy’s terms of “listening,” “world,” and “being” bear distinctly Heideggerian overtones. Pettman dismisses Heidegger’s suggestion that the animal is poor in world and hence poor in hearing. Adopting Agamben’s critique of what he calls the “anthropological machine” and Derrida’s notion of animot, Pettman has elsewhere not hesitated to point out that Agamben himself fails to get beyond the Heideggerian horizon when he retains boredom, for instance, “as a uniquely human curse and/or privilege.”[7] It is precisely the attunement between beings and their environment that Pettman challenges with his notion of intimacy. He suggests that a sense of self—an intimacy with one’s self, if you like—is produced “through the vocal back-and-forths with others—and with the environment” (59). Although Pettman here attributes this notion of back-and-forth to Deleuze and Guattari’s formulation of the refrain, it would surely not have escaped his attention that Nancy describes presence as a “coming and going,” a “back and forth”[8]—what he elsewhere calls a “diapason-subject.”[9]

    This leaves one wondering about the nature of the back-and-forths between Pettman and deconstruction. Does Nancy provide the tools to think about the voice beyond the horizon of anthropogenesis, or are the examples of post-human and non-human voices ways to realize the full implications of Nancy’s deconstruction of sonic presence? One challenge for the reader is that Pettman tends to marginalize precisely those thinkers with whom he is most intimate. He spills more ink, for instance, critiquing Adriana Cavarero than engaging with Derrida. A discussion of the concept of intimacy comes only in the conclusion, and many of Pettman’s back-and-forths with deconstruction are relegated to endnotes. One thing that the book could define more clearly is the extent to which the deconstructions of phonocentrism and logocentrism are mutually implicated. In the main body of the text, Pettman suggests that voice is the foundation of logocentrism, and in the notes he specifies more precisely that “phōnē is the necessary but not sufficient condition for logos.” Citing Derrida’s claim that phonocentrism appears to be universal, while logocentrism is not, he argues that “the trick is to foreground the multitude of voices, without being ‘phonocentric’” (108n8), by which Pettman seems to mean without positing the voice as transcendental.

    There are two questions that remain. First, from the perspective of grammatology: why retain vocality at all, even in its plurality? Derrida’s famous attack on Husserl targets the false notion that one is simultaneously present to oneself in hearing-oneself-speak. Already in Husserl the account of temporalization reveals that the supposed unity of the “now” is in fact divided from itself—that is, is always already spacing. This is why Pettman insists, against Cavarero, on the significance of time-shifted contexts, in which presence is dispersed. The question remains, though: why continue to speak of a voice if one is thinking of something closely approximating Nancy’s resonant referral? One possible answer is that these voices, stripped of logos and bodily presence, represent a pure intention to signify—something close to Agamben’s notion of Voice as the potentiality for language. As Nancy develops the idea that listening-as-resonance is the condition of possibility for sense, he cites a passage in which Agamben thinks of Voice as the rustling of animals in their retreat. It would be fascinating to see Pettman engage with this citation in order to specify more precisely the relation between voice and listening. For Pettman, this relation is defined by the concept of intimacy, according to which a voice is what strives to make itself known to us, which calls us to pay attention to it, summons our listening, and invites us to approach its “potentially enlightening alterity” (83). While Pettman is eager to distance himself from neo-Heideggerianism, what prevents this seductive allure from repeating the logic of the withdrawal of Being when the deictic voix-là that he coins, like Agamben’s Voice-as-shifter, consists in its own vanishing act (58)?

    The other point to make is one that could also be leveled at deconstruction: are dispersal and dissemination really an effective way to relinquish the transcendental? Pettman is clearly with Derrida on this point, but Catherine Malabou has made a convincing argument that Derrida’s attraction to a Genetian dissemination of aurality as a means to topple the Hegelian tower of Klang is just another attempt to avoid the economy of the transcendental without abandoning it.[10] The problem with the transcendental voice, as Pettman recognizes, is that it always presupposes another excluded voice. The category of the human voice presupposes the other voice of machine and animal, but, even within the category of the human, the voice is divided into noise and speech, masculine and feminine, and so forth, always partitioning itself. In the economy of the transcendental, the voice becomes a fetish—which, in Derrida’s definition, can both be detached from a chain of voices to become the privileged one and also substitute for any other one in the chain.

    One can escape the contradiction by incorporating the externalized fetish into the system (the Hegelian metaphysical solution) or, as Malabou points out, one can deflate the phallus by bringing down everything around it so that nothing stands taller than anything else (the Derridean option). Pettman, for his part, challenges the privileged position of the voice and instead indulges in the substitution of one voice for another, a gradual slippage from one chapter to the next. The issue facing deconstruction applies here too, though: how to end the infinite regress of voices? In the end Pettman seems to settle for a voice of the world that is without beginning or end and that refuses to be subordinated to any totalizing project. The world is a space in which one is always listening out for another voice. One moment one hears it, the next one doesn’t.

    The form and style of Pettman’s book capture the character of this roving ear, always pricking up with the possibility of another intriguing example. Pettman is a very engaging writer, and the way he traverses contexts and theoretical horizons is thrilling. Sonic Intimacy slides from one voice into another, slipping out of one body into another, all the more easily because it wears its weighty themes very lightly. Philosophy, then, becomes less an instrument by which to prosecute an argument than a playful seduction designed to lure our ears from one idea to the next. Pettman’s writing is perhaps at its most exciting when it ignores expectations to pin down the voices of interlocutors and instead revels in throwing the voice, in making it seem as if it emanates from somewhere else. Pettman himself, whose body of writing gives the impression of an insatiable curiosity, is no doubt already chasing down other voices and other worlds. I urge readers, though, to let their ear linger a little longer over this intriguing little book that promises to help us discern voices where we least expect to hear them.

    Naomi Waltham-Smith is Assistant Professor of Music at the University of Pennsylvania. Her work sits at the intersection of music, sound studies, and continental philosophy. She is author of Music and Belonging Between Revolution and Restoration published by Oxford University Press, and is currently writing a book entitled The Sound of Biopolitics.

    Notes

    [1]    Dominic Pettman, Creaturely Love: How Desire Makes Us More and Less Than Human (Minneapolis: University of Minnesota Press, 2017).

    [2]    Dominic Pettman, Human Error: Species-Being and Media Machines (Minneapolis: University of Minnesota Press, 2011).

    [3]    Daniel Heller-Roazen, Echolalias: On the Forgetting of Language (New York: Zone, 2005).

    [4]    Giorgio Agamben, Language and Death: The Place of Negativity, trans. Karen E. Pinkus (Minneapolis: University of Minnesota Press, 1991).

    [5]    Jean-Luc Nancy, The Creation of the World, Or, Globalization, trans. François Raffoul and David Pettigrew (Albany, NY: State University of New York Press, 2007), 42.

    [6]    Jean-Luc Nancy, The Birth to Presence, trans. Brian Holmes et al. (Stanford: Stanford University Press, 1993), 1.

    [7]    Dominic Pettman, Human Error, 237n71.

    [8]    Nancy, The Birth to Presence, 5.

    [9]    Jean-Luc Nancy, Listening, trans. Charlotte Mandell (New York: Fordham University Press, 2007), 16.

    [10]   Catherine Malabou, “Philosophy in Erection,” Paragraph 39, no. 2 (2016): 238–48.

  • Ben Murphy – The Universes of Speculative Realism: A Review of Steven Shaviro’s The Universe of Things: On Speculative Realism

    Steven Shaviro’s The Universe of Things: On Speculative Realism (2014)

    Reviewed by Ben Murphy

    Steven Shaviro begins The Universe of Things (2014) promising a “new look” at Alfred North Whitehead “in light of” speculative realism. The terms of this preface ought to be reversed, though, since what follows Shaviro’s introduction is actually a “new look” at speculative realism “in light of” some Whiteheadian ideas. This distinction is important: readers should not seek out The Universe of Things for an introduction to Whitehead qua Whitehead or even a “new look” at Whitehead vis-à-vis current issues of cultural and critical analysis. (Indeed, better options along these lines include, respectively, Shaviro’s own earlier book, Without Criteria (2009), and the more recent University of Minnesota Press collection The Lure of Whitehead (2014).) Universe, on the other hand, is better described as an attempt to map the cumulative geography of speculative realism, a philosophical movement which Shaviro stresses should be referred to in the plural: speculative realisms. Speculative realisms (and their sibling endeavors like object oriented ontology and new materialism) are perpetually in search of heterodox traditions and forgotten figures—philosophical antecedents sought for foundational credence and inspiration. And in this sense Shaviro’s incorporation of Whitehead is the latest in a lengthening line: Graham Harman recuperates a certain version of Heidegger, Jane Bennett returns to Spinoza and Bergson (among others), and, farther afield still, Ian Hamilton Grant champions Schelling’s Naturphilosophie. But if these and other thinkers raid the archive to consolidate new and distinct philosophical templates, Shaviro’s survey is decidedly more evaluative than constructive. Working Whitehead into the cracks of speculative realism, Shaviro widens that movement’s internal fractures in order to expose, and at most nuance—rather than overturn, reverse, or revamp—its prevailing assumptions.

    Shaviro’s critical take on speculative realism relies on two recurring moves: first, an overarching unification and, second, a subsidiary distinction. First, in the name of unity, Shaviro stresses that speculative realisms hold in common a core desire to step outside what he—following French philosopher Quentin Meillassoux—calls the correlationist circle. As reiterated by Shaviro, the primary target implied by this phrase is Kant’s position that the world is only knowable and approachable through thought. “We” can never grasp an object “in itself” or “for itself” in isolation from its relation to us, the thinking subjects. This insistence means that any account of the world and reality is fundamentally an account of the world and reality as accessed through and by human thought. Speculative realisms are unified in wanting to get beyond this self-reflexive loop. Quentin Meillassoux, Graham Harman, Ray Brassier, and Ian Hamilton Grant (the school’s four founding fathers)—as well as fellow travelers—shed the correlationist straitjacket by theorizing (or, better, speculating) about the real world, the world of the “great outdoors” (another Meillassoux coinage) or, as Eugene Thacker puts it in his “horror of philosophy” series, the world “without us.” (For a very different account which disputes whether “correlationism” refers to a fair or even a meaningful reading of Kant, see David Golumbia’s “‘Correlationism’: The Dogma that Never Was,” recently published in boundary 2.) As Shaviro notes, there’s a timeliness to this “anti-correlationist” critique, since casting the philosophical net beyond the circumscribing human mind seems a deadly serious endeavor in the face of impending ecological catastrophe. Still, the warming planet is just the most obvious and palatable hook that initiates what Shaviro calls the “changed climate of thought” (4) recently amenable to speculative realism. And if both new materialism and object oriented ontology are more prone to non- or para-academic environmental and ecological interventions, then speculative realism is more interested in revisiting and recasting the history of philosophy.

    A commitment to outfoxing correlationism unites speculative realism, but Shaviro’s second move—that of division—hinges on pinpointing the particular strategies employed to achieve this revisionary project. Repeatedly in Universe, Shaviro splits speculative realism into two main factions. On the one hand, Meillassoux and Brassier pursue lines of thought that Shaviro calls “eliminativist”: for these admittedly nihilistic thinkers, correlationism is undone by the revelation that thought is “epiphenomenal, illusory, and entirely without efficacy” (73)—that thought doesn’t rightly and necessarily belong anywhere in the universe. For Shaviro, Brassier goes further in approaching the “extinction of thought” than Meillassoux, who saves thought from complete elimination by introducing a deus ex machina according to which thought and life emerge “ex nihilo” and simultaneously from a universe previously devoid of both (76). The contrast to this first faction is found in Harman, Grant, Levi Bryant, and Timothy Morton. Instead of proposing that thought is fundamentally inimical to the universe, this coalition of speculative realists wagers that agency and thought are everywhere. Positing the “sheer ubiquity of thought in the cosmos” (82), this position reaches its apotheosis for Shaviro in a panpsychic vision where all things—animate and otherwise—are sentient (if perhaps not exactly conscious). Shaviro places himself in this second faction only after making a further distinction that separates him from Harman in particular. Whereas Harman, according to Shaviro, stresses the withdrawn nature of objects—withdrawn in the sense that the object must always “recede” from its relations (30)—Shaviro joins Whitehead (and Latour) in making a distinction between epistemological withdrawnness and ontological relations (see 105). Where an object may always hold something in reserve from what is knowable to the perceiving mind (as Harman insists), even this measure of the object that is reserved may be affected and changed by modes of contact that elude knowledge and understanding. Because of “vicarious causation” and “immanent, noncognitive contact” (138, 148) (a mode of contact that Shaviro never satisfactorily distinguishes from more popular usages of the term “affect”), an “occult process of influence” occurs that is “outside” any correlation between “subject and object, or knower and known” (148). The object, then, is not so utterly withdrawn as Harman’s narrowly epistemological account suggests. So between eliminativism and panpsychism as extremes of the speculative realism spectrum, Shaviro says, we’re faced with a “basic choice” (83).

    Describing correlationism and the various offerings to get beyond it is standard fare for speculative realism. But what Universe lacks in originality it compensates for with breadth of analysis and consistently careful, patient exposition. Shaviro admirably treats a wide swath of speculative realists (plus quite a few philosophical giants from both continental and analytical traditions), and he does so with a tone perpetually modulated for utter clarity. Absent is any of the obfuscating rhetoric or over-the-top claims that one might expect from someone who sets out to correct Kant. In part Shaviro’s achievement stems from his own outsider status. His rich body of academic work—on everything from film studies to music video aesthetics to sci-fi infused accelerationism—as well as the light touch on display here and throughout his superb and eclectic online presence (see: http://www.shaviro.com/) stands him in good stead as a welcome interlocutor and guide. Approaching speculative realism as a kindred but not coincident thinker, he’s able to recapitulate his own coming-to-terms with ideas in a way that translates well to other sympathetic non-initiates.

    Apart from style and tone, though, Shaviro’s approach is also commendable for a self-avowed pragmatism of ideas. In an aside in the first chapter, Shaviro applauds Isabelle Stengers for the insight that “the construction of metaphysical concepts always addresses certain particular, situated needs” (33). “The concepts that a philosopher produces,” Shaviro continues, “depend on the problems to which he or she is responding. Every thinker is motivated by the difficulties that cry out to him or to her, demanding a response” (33). While a fair representation of Shaviro’s own admirably simple and workmanlike prose, these statements also epitomize the generous spirit that animates Universe. Shaviro is careful to explain the fruits and situational benefits of every idea that he treats, perhaps especially those ideas that he wants to challenge—an attractive way of grounding philosophical ideas which, being speculative by definition, sometimes feel quite flighty.

    The discussion of panpsychism that spans chapters four and five is the most exciting and original element of Universe. In part this is because it draws on a body of work in cognitive science and the philosophy of biology that Shaviro knows well and that is fresh fodder for discussions of speculative realism. His discussion in this section also has the added charm of giving itself over to the speculative freedoms afforded to speculative realism itself. As Shaviro recognizes, speculative realism is at its best when it joins with speculative fiction in the common task of “extrapolation” (10). Thus in considering panpsychism we’re teased with the notion that slime molds have thoughts (88). Less bogged down by the minutiae of distinctions between this SR thinker and that, Shaviro joins a more diverse group of thinkers to consider, for instance, Thomas Nagel’s question about what it’s like to be a bat. Well aware of the absurdities attendant to a truly panpsychic vision, Shaviro lets speculation carry the day, and it’s a pleasure to follow him through a romp that ties the questions of speculative realism to a longer intellectual tradition of sometimes strange twists and turns.

    Also helpful and fresh for speculative realism—although somewhat hard to square with the rest of this book—is Shaviro’s first chapter, which shows how Emmanuel Levinas helps us appreciate speculative realism even as Whitehead’s “aesthetic” mode of “contrast” departs from Levinas’ “ethical” encounter with the Other. Where for Levinas the encounter trumps self-concern, for Whitehead both self-concern (or “self-enjoyment”) and “concern” for the Other are poles best understood in balancing counterpoint (rather than conflict). Apart from being the most detailed analysis of Whitehead’s thought—and, indeed, his thought as it changed in his long arc of writing—this opening account is valuable for SR in arguing that a commitment to circumventing correlationism need not be an ethical project in the traditional sense. In other words, in Shaviro’s reading of Whitehead, a philosophy geared towards the object world “without us” isn’t premised on care. The problem here and elsewhere in Universe, though, is the fuzzy usage of the term “aesthetic.” As I’ve suggested, chapter one deploys this term opposite Levinasian ethics in a frustratingly negative mode of definition: aesthetics is said to be what is not ethics. While gaining some clarification in the volume’s titular chapter (see 52-54), the aesthetic remains unclear even when given new treatment in a discussion of Kant that occupies the last ten pages of the book. Here “aesthetic” is set against knowledge (or epistemology) rather than ethics, and, as my discussion of Shaviro’s disagreement with Harman suggests, “aesthetic” comes to mean something like noncognitive contact, or “affect.” If these disparate senses of the “aesthetic” are related or even mutually inclusive, Shaviro doesn’t do enough to show how.

    For all its merits, Universe suffers heavily from being stuck between monograph and essay collection. One searches in vain for a promise that the book’s chapters can be read collectively or in isolation, approached in order or at random. Such a promise, at least, would admit that the chapters don’t serially build to anything in particular. Lacking this or any other clues from Shaviro, though, we’re faced with seven relatively short offerings that loop back on one another with frustratingly little meta-commentary. Much of the mapping of speculative realism as I’ve described it above via unification and division, for instance, appears essentially verbatim in chapters two, six, and seven. The treatment of Harman—both agreement and disagreement—in particular makes continual reappearance. The same could be said of the discussion of panpsychism, which is interesting the first and perhaps even second time but quickly turns suspect as it is recycled through chapters three, four, and five with only the trimmings changed. The mere fact that bits of argument can appear at the beginning and end of the book in essentially the same form (and with Shaviro seemingly unaware of such repetitions) leaves the reader wondering about the value of a journey that feels constrained to a treadmill. A more cynical reader might look to, and find an answer in, the book’s editorial metadata, which reveals that the first three chapters were previously published. Insofar as Universe excels at any one thing, then, it may be at academic entrepreneurialism—a feat of (re)publishing in which a triplet of core essays is surrounded with the sort of rhetorical packing peanuts which actually detract from ideas that would be more forceful as standalone articles. The reader already deep inside the sweep of SR may find plenty in this extended cut edition, but those more casually interested will be better served to read independently (as interests dictate) “Self-Enjoyment and Concern” (on Whitehead, Levinas, and SR), “The Actual Volcano” (Shaviro’s primary disagreement with Harman), and “The Universe of Things” (a broad strokes and bouncy introduction to the promises and riddles of SR, new materialism, and object oriented ontology). Each has gems of insight owed to Shaviro’s exhaustive research, and reading them apart from one another—perhaps even in their original contexts—would lessen the rather tiresome burden of trying to figure out how they all fit together.

    Ben Murphy is a Ph.D. student at the University of North Carolina at Chapel Hill. He works on 19th and 20th century American literature, the history and philosophy of science, and critical theory. His essay on James Dickey’s Deliverance and film adaptation is forthcoming from Mississippi Quarterly (2017), and you can also find his writing at ETHOS: A Digital Review of Arts, Humanities, and Public Ethics and The Carolina Quarterly. Website: http://englishcomplit.unc.edu/people/ben-murphy

  • Devin Zane Shaw — Disagreement and Recognition between Rancière and Honneth

    by Devin Zane Shaw

    In an interview from 2012, Jacques Rancière states in response to a question about the role of dialogue in philosophy:

    I don’t believe in the virtue of dialogue in the form of: here’s a thinker, here’s another thinker, they’re going to debate amongst themselves and that’s going to produce something. My idea is that it’s always books that enter into dialogue and not people….Dialogue is never, for me, what it appears to be, which is something like the lightning flash of an encounter, a live exchange.[i]

    We should, then, approach the recent Recognition or Disagreement: A Critical Encounter on the Politics of Freedom, Equality, and Identity (Columbia University Press, 2016) with a similarly circumspect attitude.

    The core of the book, edited by Katia Genel and Jean-Philippe Deranty, is the debate between Rancière and Axel Honneth that took place at the Institute for Social Research in Frankfurt in June 2009, but it also includes a supplementary text by each author and an essay from each editor. Given that the editors’ essays comprise, at eighty pages, forty-five percent of the text, one should be particularly attentive to the ways in which their interventions shape the reception of the debate that was the book’s occasion. Against the editors, I want to argue here that this debate demonstrates the incompatibility of Honneth’s and Rancière’s respective projects. Moreover, Rancière’s work cannot be reconceptualized in the terms of Honneth’s liberal iteration of critical theory without sacrificing precisely those parts of his thought that are the most inventive, interesting, and politically and intellectually subversive.

    My differences with Genel and Deranty can best be summarized through our respective interpretations of Rancière’s claim that, in his critique of Honneth, he has reconstructed a “‘’ [sic] conception of the theory of recognition” (95). In my view, Rancière critically appropriates the terms of “recognition” to show what it would require to become a theory of dissensus and disagreement. Deranty outlines what he takes to be Rancière’s concern with a theory of recognition that ranges from Althusser’s Lesson to Disagreement in order to demonstrate an “in-principle agreement” between Rancière and Honneth (37). First, he argues that many of the examples from Disagreement are based on historical research that Rancière conducted in the 1970s. Then Deranty adduces passages that mention recognition, such as Alain Faure and Rancière’s “Introduction” to La parole ouvrière (1976), where they refer to political struggle as “the desire to be recognized which communicates with the refusal to be despised” (quoted on 38). He also cites an early interpretation of Pierre-Simon Ballanche’s account of the plebeian revolt on Aventine Hill (an episode which also plays a crucial role in the argument of Disagreement) where Rancière writes that the “rebellion was characterized by the fact that it recognized itself as a speaking subject and gave itself a name.”[ii] Rancière continues, though: “Roman patrician power refused to accept that the sounds uttered from the mouths of the plebeians were speech, and that the offspring of their unions should be given the name of a lineage.”[iii] This description has little to do with Honneth’s account of recognition, in which individuals recognize their freedom and the freedom of others as mediated by established social institutions. And then Deranty concedes that “Rancière just disagrees with some of the key concepts used by Honneth,” which undermines the verbal parallels that he draws upon to signal their agreement (36, my emphasis). Indeed, their principled dispute about their respective concepts undermines the very possibility of an “in-principle agreement.” Therefore, to evaluate the relationship between Rancière’s egalitarian politics and Honneth’s theory of recognition we cannot rely on verbal parallels; instead, we must address how the concepts of recognition and disagreement play out in relation to a theory of the political subject, the relation between politics and the political, and problems concerning what Rancière calls “the police” and social normativity.

    To address these questions, I will begin with the final essay included in Recognition or Disagreement, Honneth’s “Of the Poverty of Our Liberty: The Greatness and Limits of Hegel’s Doctrine of Ethical Life.” Earlier in the book, Honneth claims that “all kinds of political orders have to give a certain description or legitimation for who is included in the political community,” and, indeed, political philosophy often aims to supply the legitimation for a given society’s norms that decide how and whether individuals and their practices are included or excluded from the political order (115). Hegel, on Honneth’s account, demonstrates the logical and practical coherence of the social objectivity of the various types of individual freedom, that is, how freedom relates, through recognition, to politics, work, and love.

    In the book’s concluding essay, Honneth examines, first, how Hegel reconciles two common, subjective concepts of individual freedom within his account of objective freedom as it is realized in ethical life. Both subjective concepts are abstract sides of modern political freedom. For Hegel, the transition to modernity entails conceptualizing social institutions as “making possible the realization of freedom” (160). In other words, on Hegel’s account, individual freedoms are mediated through institutions—and institutions are mediated and produced through the actualization or realization of individual freedoms. Thus, when Hegel reconciles the two subjective concepts of freedom, which approximate what Isaiah Berlin calls negative and positive freedom, he demonstrates that both fail to incorporate the objectivity of freedom as it is embodied in concrete social institutions. According to the “negative” concept of freedom, an individual is free insofar as they are unhindered by the actions of others. While Hegel incorporates this incomplete concept of freedom within his system as “abstract right,” which ensures state protections of individual life, property, and freedom of contract, he faults negative freedom for lacking a positive determination of what the subject can do, socially, with freedom. According to the “positive” concept of freedom, which Hegel largely derives from Kant, the basis of morality is autonomy, the self-legislating and self-reflexive activity of the subject. While this concept of freedom gives a positive foundation to what morality is, it nonetheless remains subjective, lacking a concrete relationship to social objectivity.

    These negative and positive concepts of freedom are, therefore, in Hegel’s terms, “merely” subjective, while Hegel aims to demonstrate that individual freedom is objective, that is, reflected and recognized within objective social institutions. This concept of objective freedom is not limited merely to how we understand social institutions. To say that freedom is objective delimits an important intersubjective feature of individual freedom. As Honneth points out, Hegel argues that we cannot rely on Kantian models of autonomy in friendship or love, since the self-limitation of my freedom in the experience of friendship or love is not a self-limitation; it is “precisely that the other person is a condition of realizing my own, self-chosen ends” (164). The realization of a given individual’s freedom entails concrete social situations that implicate the freedom of others, and it is because social institutions mediate our relations with others that they have objective reality. Hegel—and by extension, Honneth—maintains that institutions receive normative justification insofar as they reflect and embody the practices of individuals’ freedoms, and that social institutions, in turn, engender the emergence and expansion of individual freedoms.

    Now, one can see why Honneth follows Hegel through the discussion of objective freedom in the doctrine of ethical life: what both the negative and positive subjective concepts of freedom lack is recognition. In our institutions, Honneth suggests, we should be able to recognize not only our own intentions but also the intentions of other subjects. In addition, Hegel identifies three ethical spheres in which each individual’s freedoms are realized in relation to others’: personal relationships, the market economy, and politics. For these reasons, Honneth argues that the “general structure” of Hegel’s doctrine of ethical life, despite some shortcomings, “remains sound even today,” and that this doctrine provides “us with a normative vocabulary that we can use to assess the respective value of the various freedoms we practice” (169; 167). Nonetheless, Honneth also faults Hegel for treating “as sacrosanct” three historically specific institutions as the outcome of the self-realization of objective spirit: the family—“guided by the patriarchal prejudices of his own day”—the capitalist market economy, and constitutional monarchy (171). While Hegel did not explicitly address the possibility that these institutions could be transformed to “make them more amenable to the basic demand for relations of reciprocity among equals,” Honneth contends that Hegel’s account of morality hints toward how political practice can revise social norms and reorganize social institutions to make them more democratic (172). According to Honneth’s revision of Hegel, the inclusion of liberal rights and the possibility for “moral self-positioning” allows for individuals to engage in “morally articulated protest” (174). Thus Honneth allows for a continued moral progress within societies and social institutions to a degree that was not envisioned by Hegel.

    *

    Despite his Hegelian framework, and despite his debts to the Frankfurt School, Honneth’s project shares some of the central concerns of mainstream Anglo-American political philosophy today: the emphasis on processes of justification and establishing conditions of justice in order to evaluate institutional and normative frameworks. By contrast, Rancière’s political thought shares neither the methods nor goals of mainstream political philosophy. Todd May has already explored in detail the differences between Rancière and mainstream political philosophy (including Rawls, Nozick, Amartya Sen and Iris Marion Young). In May’s account, these political philosophers rely, whether they are proponents or critics of distributive theories of justice, on a concept of “passive equality”: “the creation, preservation, or protection of equality by governmental institutions.”[iv] Rancière, though, makes the stronger polemical claim that political philosophy embeds itself in, and offers justification for, regimes of inequality that he calls “the police” or “policing.” One of the most striking features of Rancière’s work is his claim that what we typically call politics, even in its most democratic forms (voting, deliberation, governance, and popular legitimation), is policing. In Disagreement, Rancière defines the police as:

    first an order of bodies that defines the allocation of ways of doing, ways of being, and ways of saying, and sees that those bodies are assigned by name to a particular place and task; it is an order of the visible and the sayable that sees that a particular activity is visible and that another is not, that this speech is understood as discourse and another as noise.[v]

    Since this definition of the police sounds very close to the way that Rancière often glosses his concept of “the distribution of the sensible,”[vi] we should specify that policing produces and reproduces relations of inequality, the stratification of roles within a given distribution of the sensible that partition individuals and groups according to inclusion and exclusion, such as those whose task it is to rule and those whose task it is to obey. Moreover, on Rancière’s account, politics—in May’s terms, “active equality”—is a dynamic of collective engagement and revolt that aims to subvert and resist the stratification and coercion of policing and social institutions. Given that Honneth’s account of recognition emphasizes how social institutions mediate and engender individual freedoms, it then follows that in Rancière’s terms Honneth’s theory of recognition would not be an account of politics as much as it is an account—though a progressive one at that—of policing.

    And yet, in “Critical Questions on the Theory of Recognition,” his critique of Honneth (and Chapter Three of the book), Rancière does not use the terms “police” or “policing.” Instead, he begins with the conditional hypothesis that his differences with Honneth are best articulated by treating their respective approaches as competing theories of recognition. At the outset, however, he signals his critical intent by suggesting that “the term ‘recognition’ might also emphasize a relationship between already existing entities,” these entities being individuals and established social institutions (83). When, then, Rancière concludes that he’s sketched, through his critique of Honneth, his own theory of recognition, he’s appropriated the language of critical theory to articulate a politics of dissensus and disagreement.

    Rancière pursues this hypothesis—that he and Honneth are outlining competing theories of recognition—in order to locate their central points of disagreement. In Disagreement, Rancière defines disagreement (la mésentente) as a specific kind of political challenge to a given order of policing, “a determined kind of speech situation in which one of the interlocutors at once understands [entend] and does not understand [entend] what the other is saying.”[vii] In French, the term la mésentente plays on different connotations of the verb entendre, between “to hear” and “to understand.” On Rancière’s account, the politics of disagreement emerges when the marginalized or oppressed (what he calls “the part with no part”) within a given social order challenge the ways in which society is policed, and often these challenges are phrased in terms that have readily accepted meanings within society. However, politically contentious terms, such as equality, rights, or justice, are given inventive new meanings that challenge the normative frameworks of a given regime of policing; both the part with no part who contest injustice and the police can “hear” the same demands but “understand” entirely different things. Many political theorists lament this ambiguity and aim to define it away. However, Rancière argues that the ambiguity of our contentious terms and ideals makes dissensus possible. That is, this ambiguity makes it possible to identify how these politically contentious terms circulate between policing and politics, how they come to articulate and combat inequality and coercion. For example: justice, for some, means due process and equal consideration before the law, while justice for movements such as Black Lives Matter opens on to both a broad indictment of how so-called due process legitimates injustice against African-Americans who are victims of police violence, and a broader vision of transformative social justice.

    In “Critical Questions on the Theory of Recognition,” Rancière uses disagreement in a broader, dialogical sense rather than its specific, political sense. He argues that dialogue—to be truly dialogical—must be an “act of communication [which] is already an act of translation, located on a terrain that we don’t master” (84). Dialogue always involves translation, distortion, but also invention; in terms of philosophy, it means that both interlocutors must think outside of their usual terminology: distortion remains “at the heart of any mutual dialogue, at the heart of the form of universality on which dialogue relies” (84). But Rancière also suggests that dialogue, in its more specific, political sense, requires acknowledging the “asymmetry in positions” between interlocutors. This claim summarizes his differences with Habermas, which he had previously outlined in Disagreement: acknowledging how asymmetry and power distort the ideals of political dialogue entails, in Rancière’s account, a stringent form of universalism that demands that philosophers confront not just institutional barriers to democratic deliberation, but also how the processes of deliberation function to exclude certain forms of political speech and action. Thus Rancière’s critical question: to what degree does Honneth’s theory of recognition rely on the presupposition that the demands of political subjects have always already been mediated by social institutions?

    To confront this question, Rancière proposes three working definitions of recognition. Two reflect common usage: on the one hand, recognition means the concurrence of a perception with prior knowledge, as when we recognize a friend, location, or information; on the other hand, recognition in the moral sense designates how we recognize other individuals as autonomous beings like ourselves. In both cases, Rancière notes, “re-cognition” functions as an act of confirmation. He then hypothesizes that recognition could also be conceptualized in the terms of what he calls a distribution of the sensible. Recognition, then, “focuses on the configuration of the field in which things, persons, situations, and arguments can be identified” (85). In this sense, recognition comes prior to any act of confirmation—and the critique of recognition entails disagreement over the conditions in which persons, things, or situations are understood as such.

    We could ask, for instance, how it is that a given regime of policing frames some enunciations as political demands against injustice and others as merely subjective complaints or even noise. And we could use an analysis of this situation to attack the broader norms that legitimate this distribution of speech and noise. While Rancière acknowledges that Honneth’s account of recognition “echoes” his own polemical account, he raises a crucial question: to what degree does Honneth’s account rely on the two connotations of the common usage, presuming a stable distribution of the sensible or normative framework that relies on an “identitarian conception of the subject” that conflicts with a “conception of social relations as mutual” or dynamically or socially constructed (85)?

    First, Rancière contends that Honneth embraces an “anthropological-psychological” concept of the subject that is heavily indebted to a Hegelian “juridical definition of the person” (87). Thus Honneth’s account of the subject’s struggle for recognition emphasizes the affirmation of self-identity and self-integrity within the intersubjective structure of recognition. In other words, it’s the same integral individual subject who seeks recognition within a multiplicity of situations related to love, work, or politics. Then, Rancière argues that this juridical model of the integral identity of the subject conflicts with its claim to articulating intersubjective social agency—a point encapsulated in Honneth’s summary of love and recognition in the book: “in friendship and love my experience is precisely that the other person is a condition of my realizing my own, self-chosen ends” (164). To say that love involves two individuals realizing their respective ends and interests through another is overly juridical. To Honneth, Rancière counterposes love as it is found in À la recherche du temps perdu, where Proust describes love as a dynamic and aesthetic construction of an other. Rancière writes:

    What appears at the beginning is the confused apparition of a multiplicity, an impersonal patch on a beach. Slowly the patch appears as a group of young girls, but is still a kind of impersonal patch. There are many metamorphoses in that patch, in the multiplicity of young girls, through to the moment when the narrative personifies this impersonal multiplicity, gives it the face of one person, the object of love, Albertine. (88)

    Rancière offers this counternarrative to show how our theoretical frameworks delimit the possibilities of social agency that we are able to recognize—a criticism that Honneth subsequently accepts.

    Rancière’s attention to this point can perhaps explain why his terminology is alternately powerful and abstract. When he opposes the politics of equality to policing, it readily calls to mind clashes between protestors and cops, though politics cannot be reduced to these terms. However, when he defines those subjects who confront the established order as the part with no part, this definition is far more abstract than saying “the marginalized and oppressed.” But Rancière relies on this level of abstraction in order to avoid specifying conditions of political agency that would delimit who this part is, since any such delimitation could exclude groups who have yet to emerge and whom we cannot foresee.

    In general, for Rancière, political subjects are neither self-identical nor self-integral. Instead, political subjects emerge through a dynamic of what he calls disidentification, the rejection of the roles, places, and tasks assigned to bodies within a given regime of policing. We could interpret Proust’s description of love, then, as a metaphor for the dynamic of political subjectivation: political subjects emerge as a multiplicity, at first an impersonal patch in the social field, until it takes shape through the invention of a name—for instance, #blacklivesmatter or #NoDAPL—for a collective disruption of or rebellion against the police order. Given that all regimes of policing are instantiations of social inequality and coercion, politics is, for Rancière, by definition egalitarian. It is equality, he argues, that leads to a much more exacting concept of universality than an account of politics that neglects the asymmetry between the political subjects who exist by virtue of contesting the social order and the established order of policing. Politics enacts the affirmation of “an equal capacity to discuss common affairs”; in other words, politics enacts the intellectual and political equality of anybody and everybody (93).

    The task of political thought is to ascertain how politics involves a “polemical configuration of the universal” (94). The Black Lives Matter movement began with a call for justice for Mike Brown in Ferguson, but, according to Keeanga-Yamahtta Taylor, its next stage involves both “engaging with the social forces that have the capacity to shut down sectors of work or production until our demands to stop police terrorism are met” and movement building through solidarity, which addresses how, while African-Americans “suffer most from the blunt force trauma of the American criminal justice system,” the broader normative framework of “law-and-order politics” functions to oppress the poor in general.[viii] From a standpoint informed by Rancière, the goal of political thought would be to identify the movements and practices that drive “the process of spreading the power of equality” in the here and now, to identify how specific movements involve a polemical force of universality to subvert and combat the normative frameworks of a given police order (94). Far from endorsing a theory of recognition, Rancière has redefined recognition as a politics of dissensus and disagreement.

    *

    Thus we have good reason to doubt Deranty’s claim of an in-principle agreement between Rancière and Honneth. Indeed, the editors and I reach very different conclusions regarding the significance of this debate because they accept Honneth’s theoretical framework to interpret it, while I refuse to subsume Rancière’s concepts under Honneth’s. The point here, though, is not to establish who has read Rancière or Honneth correctly, but to examine how these interpretations delimit what each thinker believes is politically possible and feasible.

    Our first difference concerns the supposed common ground shared by Rancière and Honneth. Though Rancière explicitly chooses to oppose “politics” (la politique), rather than “the political” (le politique), to “the police,” Honneth and the editors equivocate between “politics” and “the political.” However, the terms, especially in French philosophy, are distinct—which means Rancière has made a deliberate conceptual choice.[ix] Politics, on his account, designates a dynamic activity, while “the political” carries the connotation of an original, fundamental political sphere upon which policing has supervened. For Honneth, then, when Rancière discusses equality, he’s describing either an “original definition of the political community” (115) or a political anthropology in which human beings “are constituted by a wish or a desire to be equal to all others,” and this “egalitarian desire…brings about the exceptional moment of politics” (99). In their “Critical Discussion” included in the book, Rancière rightly rebuts both of these characterizations. He holds that if politics takes place, it does so through an egalitarian praxis opposed to the police. To treat Rancière’s politics as a political anthropology, imputing particular desires or motives to political subjects, implies that the debate is about whether human beings are motivated by either a desire for recognition or for equality. We could, in that case, resolve the debate with a political anthropology of desire.

    If this is not enough reason to reject Honneth’s way of framing the debate, he also characterizes recognition and disagreement as two complementary forms of struggle with different scopes—but this categorization carries with it an implicit normative claim that recognition is more practical. He argues that Rancière brusquely reduces “the political,” considered as “a stratified normative order of principles of recognition,” to policing (103). On Honneth’s view, Rancière thus interprets this stratified normative order too rigidly, since these norms are open to conflicts over their meaning, that is, subject to reinterpretation and revision. For Honneth, the revisability of the normative order allows us to conceive of two types of political intervention: an internal struggle for recognition and an external struggle for recognition. In Honneth’s terms, Rancière focuses exclusively on the external struggle for recognition, which, while it combats the “political order as such,” ignores the “reformist” ambitions of the internal struggle for recognition that aims to reinterpret existing normative principles to make social institutions and their normative frameworks more democratic and inclusive.

    But Honneth’s distinction between the internal and external struggles for recognition is not merely descriptive, but also normative: given, he claims, the difficulties in formulating injustice in revolutionary terms, it’s more important in day-to-day politics to “deal with these small projects of redefinition or of reappropriation of the existing modes of political legitimation” (106). Unlike Honneth, Rancière does not prescribe the scope of political struggle within a given situation, since such a prescription functions to legitimate or delegitimize choices we make about what is to be done. These choices cannot be evaluated outside of the context of political struggle itself. But Honneth’s normative preference is part of his philosophical framework: if the freedom of individuals is engendered and mediated by social institutions and norms, and if self-integrity is one of the primary ends of the theory of recognition, then individuals should aim to reform and reinterpret these institutions and norms incrementally.

    From Rancière’s perspective, even if we grant that political freedom is sometimes engendered by existing social institutions, this does not entail that all parts of society should recognize these institutions as engendering their freedom. Those who are marginalized and oppressed could just as easily recognize how a given institution has functioned to exclude, marginalize, oppress, or immiserate them. The goal of politics for these political subjects need not, and should not, be—nor should we prescribe it to be—reform of, or formal recognition within, the institutions that have historically oppressed them. From Rancière’s standpoint, it is right for the part with no part to combat and transform the very normative principles that legitimate and reinforce these institutions of inequality, and to prescribe reform rather than radical normative transvaluation serves to delegitimize the possibility of formulating and enacting broader goals of political struggle.

Thus while Recognition or Disagreement presents the debate between Rancière and Honneth, it speaks to broader issues about the scope and aims of contemporary political thought. The contrast between Honneth and Rancière ably demonstrates Rancière’s stubborn refusal to engage in the processes of justification valorized by mainstream political theory; indeed, it serves as a stark reminder of how engaging in these problems often (and, in Rancière’s view, always) entails accepting profound social inequalities. The book is also important, however, because it shows that if we mainstream Rancière’s work, as Genel and Deranty attempt to do, we lose those parts of his work that are most subversive and inventive—and we are left with only Honneth.

Devin Zane Shaw teaches philosophy at Carleton University. He is the author of Egalitarian Moments: From Descartes to Rancière (Bloomsbury, 2016) and Freedom and Nature in Schelling’s Philosophy of Art (Bloomsbury, 2010).

    Notes

    [i] Jacques Rancière, The Method of Equality: Interviews with Laurent Jeanpierre and Dork Zabunyan, transl. Julie Rose. Malden: Polity, 2016, p. 183.

    [ii] Quoted on 38, but the reference is incomplete. See Rancière, Staging the People: The Proletarian and His Double, transl. David Fernbach. London: Verso, 2011, p. 37.

    [iii] Rancière, Staging the People, 37.

    [iv] Todd May, The Political Thought of Jacques Rancière: Creating Equality. University Park: Pennsylvania State University Press, 2008, p. 3.

    [v] Rancière, Disagreement: Politics and Philosophy, transl. Julie Rose. Minneapolis: University of Minnesota Press, 1999, p. 29.

    [vi] As Rancière defines it in Recognition or Disagreement, a distribution of the sensible is “a relation between occupations and equipments, between being in a specific space and time, performing specific activities, and being endowed with capacities of seeing, saying, and doing that ‘fit’ those activities. A distribution of the sensible is a set of relations between sense and sense, that is, between a form of sensory experience and an interpretation that makes sense of it. It is a matrix that defines a whole organization of the visible, the sayable, and the thinkable” (136).

    [vii] Disagreement, p. x.

    [viii] Keeanga-Yamahtta Taylor, From #BlackLivesMatter to Black Liberation. Chicago: Haymarket Books, 2016, pp. 217, 211.

    [ix] See Samuel A. Chambers, The Lessons of Rancière. Oxford: Oxford University Press, 2013, pp. 50–57.

     

  • Quinn DuPont – Ubiquitous Computing, Intermittent Critique


    a review of Ulrik Ekman, Jay David Bolter, Lily Díaz, Morten Søndergaard, and Maria Engberg, eds., Ubiquitous Computing, Complexity, and Culture (Routledge 2016)

    by Quinn DuPont

    ~

    It is a truism today that digital technologies are ubiquitous in Western society (and increasingly so for the rest of the globe). With this ubiquity, it seems, comes complexity. This is the gambit of Ubiquitous Computing, Complexity, and Culture (Routledge 2016), a new volume edited by Ulrik Ekman, Jay David Bolter, Lily Díaz, Morten Søndergaard, and Maria Engberg.

There are of course many ways to approach such a large and important topic: from the study of political economy, technology (sometimes leaning towards technological determinism or instrumentalism), discourse and rhetoric, globalization, or art and media. This collection focuses on art and media. In fact, only a small fraction of the chapters do not deal either entirely or mostly with art, art practices, and artists. Similarly, the volume includes a significant number of interviews with artists (six of the forty-three chapters and editorial introductions). This focus on art and media is both the volume’s strength and one of its major weaknesses.

By focusing on art, Ubiquitous Computing, Complexity, and Culture pushes the bounds of how we might commonly understand contemporary technology practice and development. For example, in their chapter, Dietmar Offenhuber and Orkan Telhan develop a framework for understanding, and potentially deploying, indexical visualizations for complex interfaces. Offenhuber and Telhan use James Turrell’s art installation Meeting as an example of the conceptual shortening of causal distance between object and representation, as a kind of Peircean index, and one such way to think about systems of representation. Another example of theirs, Natalie Jeremijenko’s OneTrees installation of one hundred cloned trees, strengthens and complicates the idea of the causal index, since the trees are from identical genetic stock, yet develop in natural and different ways. The uniqueness of the fully grown trees is a literal “visualization” of their different environments, not unlike a seismograph, a characteristic indexical visualization technology. From these examples, Offenhuber and Telhan conclude that indexical visualizations may offer a fruitful “set of constraints” (300) that the information designer might draw on when developing new interfaces that deal with massive complexity. Many other examples and interrogations of art and art practices throughout the chapters offer unexpected and penetrating analysis into facets of ubiquitous and complex technologies.

MoMA PS1 | James Turrell, Meeting, 2016. Photos by Pablo Enriquez.

A persistent challenge with art and media analyses of digital technology and computing, however, is that the familiar and convenient epistemological orientation, and the ready comparisons that result, are often to film, cinema, and theater. Studies reliant on this epistemology tend to make a range of interesting yet ultimately illusory observations, which fail to explain the richness and uniqueness of modern information technologies. In my opinion, there are many important ways that film, cinema, and theater are simply not like modern digital technologies. Such an epistemological orientation is, arguably, a consequence of the history of disciplinary allegiances—symptomatic of digital studies and new media studies originating from screen studies—and finds a proximate cause in Lev Manovich’s agenda-setting The Language of New Media (2001), which relished the mimetic connections resulting from the historical quirk that the most obvious computing technologies tend to have screens.

Because of this orientation, some of the chapters fail to critically engage with technologies, events, and practices largely affecting lived society. A very good artwork may go a long way toward exposing social and political activities that might otherwise be invisible or known only to specialists, but it is the role of the critic and the academic to concretize these activities and draw thick connections between art and “conventional” social issues. Concrete specificity, while avoiding reductionist traps, is the key to escaping what amounts to belated criticism.

This specificity about social issues might come in the form of engagement with normative aspects of ubiquitous and complex digital technologies. Instead of explaining why surveillance is a feature of modern life (as several chapters do, which is, by now, well-worn academic ground), it might be more useful to ask why consumers and policy-makers alike have turned so quickly to privacy-enhancing technologies as a solution (to be sold by the high-technology industry). In a similar vein, unsexy aspects of wearable technologies, such as accessibility, now offer potential assistance and perceptual, physical, or cognitive enhancement (as described in Ellis and Goggin’s chapter), alongside unprecedented surveillance and monetization opportunities. Digital infrastructures—both active and failing—now drive a great deal of modern society, but despite their ubiquity, they are hard to see and therefore tend not to get much attention. These kinds of banal and invisible—ubiquitous—cases tend not to be captured in the boundary-pushing work of artists, and are underrepresented (though not entirely absent) in the analyses here.

A number of chapters also trade on old canards, such as worrying about information overload, “junk” data whizzing across the Internet, time “wasted” online, online narcissism, business models based solely on data collection, and “declining” privacy. Whether any of these things are empirically true—when viewed contextually and precisely—is somewhat beside the point if we are not offered new analyses or solutions. Otherwise, these kinds of criticisms run the risk of sounding like old people nostalgically complaining about an imagined world before technological or informational ubiquity and complexity. “Traditional” human values might be an important object of study, but not as the pile-on Left-leaning liberal romanticism prevalent in far too many humanistic inquiries into the digital.

    Another issue is that some of the chapters seem to be oddly antiquated for a book published in 2016. As we all know, the publication of edited collections can often take longer than anyone would like, but for several chapters, the examples, terminology, and references feel unusually dated. These dated chapters do not necessarily have the advantage of critical distance (in the way that properly historical study does), and neither do they capture the pulse of the current situation—they just feel old.

Before turning to a sample of the truly excellent chapters in this volume, I must pause to make a comment about the book’s physical production. On the back cover, Jussi Parikka calls Ubiquitous Computing, Complexity, and Culture a “massively important volume.” This assessment might have been simplified by just calling it “a massive volume.” Indeed, using some back-of-the-napkin calculations, the 406 dense pages amount to about 330,000 words. Like cheesecake, sometimes a little bit of something is better than a lot. And, while such a large book might seem like good value, the pragmatics of putting an estimated 330,000 words into a single volume requires considerable care in typesetting and layout, which unfortunately is not the case here. At about 90 characters per line, and 46 lines per page—all set in a single column—the tiny text set on extremely long lines strains even this relatively young reviewer’s eyes and practical comprehension. When trudging through already-dense theory and the obfuscated rhetoric that typically accompanies it (common in this edited collection), the reading experience is often painful. On the positive side, in the middle of the 406 pages of text there are an additional 32 pages of full-color plates, a nice addition and an effective way to highlight the volume’s sympathies in art and media. An extensive index is also included.
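(For the curious, the napkin arithmetic roughly holds, if one assumes an average English word of about five characters—an assumption of this reviewer, not a figure given in the volume:

$$90\ \tfrac{\text{chars}}{\text{line}} \times 46\ \tfrac{\text{lines}}{\text{page}} \times 406\ \text{pages} \approx 1{,}680{,}000\ \text{characters} \approx 330{,}000\ \text{words.})$$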

Despite my criticisms of the approach of many of the chapters, the book’s typesetting and layout, and the editors’ decision to collocate so much material in a single volume, there are a number of outstanding chapters, which more than redeem these weaknesses.

Elaborating on a theme from her 2011 book Programmed Visions (MIT Press), Wendy H.K. Chun describes why memory, and the ability to forget, is an important aspect of Mark Weiser’s original notion of ubiquitous computing (in his 1991 Scientific American article). (Chun also notes that the word “ubiquitous” comes from the “Ubiquitarians,” a Lutheran sect who believed Christ was present ‘everywhere at once’ and therefore invisible.) According to Chun’s reading of Weiser, to get to a state of ubiquitous computing, machines must lose their individualized identity or importance. Therefore, unindividuated computers had to remember, by tracking users, so that users could correspondingly forget (about the technology) and “thus think and live” (161). The long history of computer memory, and its rhetorical emergence out of technical “storage,” is an essential aspect of the origins of our current technological landscape. Chun notes that prior to the EDVAC machine (and its strategic alignment with cognitive models of computation), storage was a well-understood word, which etymologically suggested an orientation to the future (“stores look toward a future”). Memory, on the other hand, contained within it the act of recall and repetition (recall Meno’s slave in Plato’s dialogue). So, when EDVAC embedded memory within the machine, it changed “memory by making memory storage” (162). Thus, if we wanted to rehabilitate Weiser’s original image of being able to “think and live,” we would need to refuse the “deadening of the world brought about by memory as storage and realize the fundamentally collective nature of memory and writing” (162).

Sean Cubitt does an excellent job of exposing the political economy of ubiquitous technologies by focusing on the ways that enclosure and externalization occur in information environments, interrogating the term “information economy.” Cubitt traces the history of enclosures from the alienation of fifteenth-century peasants from their land, through the enclosure of skills to produce dead labour in nineteenth-century factories, to the conversion of knowledge into information today, which is subsequently stored in databases and commercialized as intellectual property—alienating individuals from their own knowledge. Accompanying this process are a range of externalizations, predominantly impacting the poor and the indigenous. One of the insightful examples Cubitt offers of this process of externalization is the regulation of radio spectrum in New Zealand, and the subsequent challenge by Maori people who, under the Treaty of Waitangi, are entitled to “all forms of commons that pre-existed the European arrival” (218). According to the Maori, radio spectrum is a form of commons, and therefore the New Zealand government is not permitted to claim exclusive authority to manage the spectrum (as practically all Western governments do). Not content to simply offer critique, Cubitt concludes his chapter with a (very) brief discussion of potential solutions, focusing on the reimagining of peer-to-peer technology by Robert Verzola of the Philippines Green Party. Peer-to-peer technology, Cubitt tentatively suggests, may help reassert the commons as commonwealth, which might even salvage traditional knowledge from information capitalism.

Katie Ellis and Gerard Goggin discuss the mechanisms of locative technologies for differently-abled people. Ellis and Goggin conclude that devices like the later-model iPhone (not the first release) and the now-maligned Google Glass offer unique value propositions for those engaged in a spectrum of impairment and “complex disability effects” (274). For people who rely on these devices for day-to-day assistance and wayfinding, such technologies are ubiquitous in the sense Weiser originally imagined—disappearing from view and becoming integrated into individual lifeworlds.

John Johnston ends the volume as strongly as N. Katherine Hayles’s short foreword opened it, describing the dynamics of “information events” in a world of viral media, big data, and, as he elaborates in an extended example, complex and high-speed financial instruments. Johnston describes how events like the 2010 “Flash Crash,” when the Dow fell nearly a thousand points, losing a trillion dollars in value, and rebounded within five minutes, are essentially uncontrollable and unpredictable. This narrative has been detailed before, Johnston points out, but he gives it a twist, arguing that such a financial system, in its totality, may be “fundamentally resistant to stability and controllability” (389). The reason for this fundamental instability and uncontrollability is that the financial market cannot be understood as a systematic, efficient system of exchange events that just happens to be problematically coded by high-frequency, automated, and limit-driven technologies today. Rather, the financial market is a “series of different layers of coded flows that are differentiated according to their relative power” (390). By understanding financialization as coded flows, of both power and information, we gain new insight into a critical technology that is both ubiquitous and complex.

    _____

    Quinn DuPont studies the roles of cryptography, cybersecurity, and code in society, and is an active researcher in digital studies, digital humanities, and media studies. He also writes on Bitcoin, cryptocurrencies, and blockchain technologies, and is currently involved in Canadian SCC/ISO blockchain standardization efforts. He has nearly a decade of industry experience as a Senior Information Specialist at IBM, IT consultant, and usability and experience designer.


  • Andrew Martino – Exhuming the Text: Alice Kaplan’s “Looking for the Stranger: Albert Camus and the Life of a Literary Classic”


    Alice Kaplan’s Looking for the Stranger: Albert Camus and the Life of a Literary Classic

    Reviewed by Andrew Martino

Albert Camus never considered himself an existentialist. In fact, Camus never exclusively believed in any school of thought. He was the consummate outsider, standing apart from those whose allegiances hardened into narrow ideology, especially when that ideology mixed with violence, something Camus steadfastly resisted. If we had to place Camus into any category, it would be that of the humanist caught in the absurd. Camus believed in life over death (without believing in an afterlife), yet this belief did not keep him from contemplating the question of suicide, the only serious philosophical problem confronting us, as he writes in The Myth of Sisyphus. Camus’ humble beginnings amid extreme poverty and illiteracy in his native Algeria testify to the power of the human spirit in the face of an indifferent world. When he was awarded the Nobel Prize for Literature in 1957 he expressed reservations and claimed that the prize should have gone to André Malraux, an early influence on his writing. Camus also realized that the Nobel would bring a certain celebrity that would complicate his life, perhaps even sabotage his art. Add to this his “silence” on the Algerian problem and his very public and acrimonious break with Sartre, and Camus becomes a figure trapped in a world where he is increasingly unable to control his own image. Camus is a problematic figure who is claimed by both the Right and the Left, leaving the man and his writing caught in a political vortex. Focusing on the postcolonial aspect of The Stranger, Edward W. Said writes that Camus “is a moral man in an immoral situation.”[i] When Camus died at the age of 46 in a car accident in 1960, he left the world with the image of the charismatic young man, Bogart-like in his coolness, and still with the promise of great things to come. But a saint he was not. His numerous affairs and constant womanizing, his reluctance to act or speak out against French imperialism in Algeria, and his disillusionment with and expulsion from the Communist Party render him more human than academics might be comfortable with. Camus’ life was full of contradictions, full of silences. Yet it was precisely from these contradictions and silences that Camus produced one of the most important and widely read books of the twentieth century.

Looking back over the seven decades since the publication of The Stranger, and despite Camus’ reluctance to situate (in the Sartrean sense of the term) himself within the bubble of existentialism, a bubble in which The Stranger and his relationship with Sartre placed him, we can see that the novel blazed a path that opened up fields where the absurd might be articulated, contemplated, and confronted from the inside (the modernist bent) rather than from above and beyond, as the canonical novels of the nineteenth century may have done. In her essay “French Existentialism,” Hannah Arendt briefly examines Sartre and Camus’ influence on the “new” movement in which novels carry the weight of philosophy. Throughout that essay she also comments on Camus’ reluctance to be labeled an existentialist. “Camus has probably protested against being called an Existentialist because for him the absurdity does not lie in man as such or in the world as such but only in their being thrown together.”[ii] Here we have what is perhaps the most concise and articulate formulation of absurdist philosophy to date. Camus’ definition of absurdity, painstakingly mapped out in Caligula, The Stranger, and The Myth of Sisyphus, is not quite existentialism, but it does contain existentialist DNA, especially that of Kierkegaard and Dostoevsky (two of Camus’ patron saints). As Camus remarks in The Myth of Sisyphus: “I can therefore say that the Absurd is not in man (if such a metaphor could have a meaning) nor in the world, but in their presence together.”[iii] Camus’ definition of the absurd is also the epistemological curve in the road separating him from Sartre’s thinking. If Sartre’s philosophy can be distilled into his phrase “Hell is other people,” a philosophy grounded in relationships among people, then Camus’ articulation of the absurd, as we have seen above, resides instead in the relationship of humans with their world.

Together, Sartre and Camus blazed a path where philosophy and art, in this case literature, met, thereby ushering in a new form of the novel, one that would examine existence from a philosophical perspective while using literary form to mold those philosophical perspectives. What emerges is a hybrid. According to Randall Collins, “What was identified was a tradition of literary-philosophical hybrids. Sartre and Camus were key formulators of the canon, and themselves archetypes of the career overlap between academic networks and the writers’ market. The phenomenon of existentialism in the 1940s and 1950s added another layer to this overlap.”[iv] But this hybridization was more than a heady cerebral new movement in fiction; it constituted a new way of thinking about the world, one that emerged primarily from a particular network of intellectuals at a particular time in Paris. Sartre and Camus were on the crest of this wave of existentialism, and their thinking would go on to change the world.

    Alice Kaplan’s extraordinary new book Looking for the Stranger: Albert Camus and the Life of a Literary Classic, is a careful and meticulously researched examination of Camus’ 1942 novel. Kaplan is one of the leading scholars of twentieth century French culture and history. She is currently the John M. Masser Professor of French at Yale University where she also received her Ph.D. in French in 1981. She has published seven books, including: French Lessons: A Memoir (1993), The Collaborator: The Trial and Execution of Robert Brasillach (2000), and Dreaming in French: The Paris Years of Jacqueline Bouvier Kennedy, Susan Sontag, and Angela Davis (2012). In 2013 Kaplan edited and provided the introduction to The Algerian Chronicles, a collection of articles and essays Camus wrote from 1939-1958. Kaplan’s edited edition is the first time these writings have appeared in English, so she is no stranger to Camus and his place in twentieth century French culture.

Early on, Kaplan claims that Looking for the Stranger is actually a biography of Camus’ best-known work, one of the most famous and widely read texts of the twentieth century. This does not mean, however, that Kaplan forgoes a glimpse into Camus’ life, thereby resurrecting the Barthesian “death of the author” debate. Instead, Kaplan goes looking for The Stranger in the author rather than the author in The Stranger; the difference is subtly stunning. In other words, her investigation is more preoccupied with the creative process and its cultural and social context than with getting at the author as a god-like figure. Camus always claimed that The Stranger was the second in a three-part series exploring the absurd from three different perspectives: a novel (The Stranger), a dramatization (Caligula), and a philosophical work (The Myth of Sisyphus). But The Stranger is hardly a book that needs rescuing from obscurity, nor does Kaplan claim that it does. To date the novel has sold over ten million copies and is still read in over forty languages. It remains on high school and college syllabi, making it required reading for young men and women; in fact, a student’s first encounter with existentialism and the absurd is likely to come from a reading of The Stranger. Kaplan instead offers us a more comprehensive look into the text, running down every lead, exploring every avenue that might expand our understanding of what makes The Stranger the text that it is.

Kaplan begins by acknowledging the spectacular success of The Stranger, a success that has made it one of the most popular and important texts of the twentieth century. She quickly glosses over the critical reaction to The Stranger by pointing out that readings of the novel map some of the most important theoretical lenses that have influenced twentieth-century thought. “In fact, you can construct a pretty accurate history of twentieth-century literary criticism by following the successive waves of analysis of The Stranger: existentialism, new criticism, deconstruction, feminism, postcolonial studies” (2). The Stranger, she claims, has influenced the thinking of a diverse population that spans generations. Indeed, the novel’s remarkable staying power (it is perhaps even more relevant now than when it was published) is a feat that its author and its critics at the time could not have foreseen. I am not sure that students continue to read The Stranger with the commitment that they once did, but it is undeniable that the novel still matters, that it still provokes us into thinking, especially in a time when fundamentalism and terrorism are on the rise, and Europe and the United States are flirting with a new form of fascism in the guise of a renewed interest in rigid nationalism. But Kaplan is not primarily interested in the public and academic reception of The Stranger. Instead, she claims that the novel’s readers and commentators have overlooked something essential since its publication: a biography of the novel. “Yet something essential is lacking in our understanding of the author and the book. By concentrating on themes and theories—esthetic, moral, political—critics have taken the very existence of The Stranger for granted” (2-3). She takes the unprecedented, and academically unpopular, path of looking into the life of the author and the circumstances that allowed him, at a particular place and time, to write one of the most powerful works of world literature. It is important to point out, however, that Kaplan sets out to write a biography of the novel, not of the author. In fact, Camus’ life becomes a part of the puzzle that is The Stranger.

Kaplan is not the first to comment on the unlikely success of The Stranger and its problematic birth. She is, however, the first to devote an entire book to an investigation, almost documentary-like in its approach, of the novel from conception to publication and beyond. And she accomplishes this brilliantly. Told in twenty-six short chapters bookended by a prologue and an epilogue, the book leads us into the depths of the novel in a highly engaging and thought-provoking fashion. Indeed, its structure presents readers with the “life” of the novel, a life that has continued long after the death of its creator. Drawing from a reservoir of sources, including Camus’ notebooks and her own trips to Algeria, Kaplan has written a scholarly adventure story. As she notes in her acknowledgements: “I looked for The Stranger in libraries, in archives, in neighborhoods on three continents” (219). Of course, the idea of The Stranger was with her all of the time, but what makes Kaplan’s book so provocative is precisely the lengths she goes to in search of the novel. Kaplan explores The Stranger in three parts: before, during, and after its publication.

In the first chapter Kaplan gives us the image of a young man in front of a bonfire, burning various papers that link him to a past that could be dangerous to him and to those who know him. But as Kaplan tells it, the young Camus could not bring himself to burn all of his letters and writings. What he saved would act as a cache of material, both physical and remembered, which he would extract from and rework into a slim, simply told tale of a man who fails to cry at his mother’s funeral and, by a series of circumstances, ends up shooting an unnamed Arab on a beach, only to be arrested, tried, convicted, and sentenced to death. Yet the reader is never quite sure whether the protagonist is condemned because of the murder or because of his refusal to conform to the rules of a society that demands that one cry at one’s mother’s funeral. The image of the bonfire given to us by Kaplan is a powerful one. As we travel with her deeper into her investigation, we learn that the bonfire was a kind of rite Camus needed to perform in order to purge his mind and soul so that he could go on to write what he felt needed to be written—unimpeded by ghosts, but still attentive to their silences, which spoke to and through him.

Throughout the spring of 1940, six years after the bonfire, Camus worked furiously on The Stranger, holed up in near-total isolation in his miserable hotel room in Montmartre, interrupted only by his five hours a day of work at Paris-Soir. The twenty-six-year-old was as cut off from the world as he had ever been. Alone in a foreign city, with German bombs exploding all over France, Camus fought his loneliness and misery by throwing himself into his writing. Because he was not yet divorced from his first wife, Simone Hié, his fiancée Francine Faure refused to accompany him to Paris. All he brought with him were the first chapter of The Stranger and a few of his press clippings. Kaplan: “His sense of separation from everyone he loved put him in a state of mind that was both painful and enabling” (71). Like Camus’ biographer Olivier Todd, Kaplan highlights the importance of Camus’ isolation when he first arrived in Paris. Camus believed that the failure of A Happy Death, his abandoned first novel, was due to his inability to write without interruption. His isolation in Paris enabled him, out of necessity, to devote all of his attention to The Stranger. Kaplan’s research offers us a marvelous glimpse into the creative process Camus used, or, perhaps more accurately, was host to, during his writing of the novel. Kaplan claims that Camus wrote The Stranger almost line for line, as if he were dictating a story he was seeing play out before his eyes. Where he struggled with the writing of A Happy Death, The Stranger seems to have emerged almost fully formed, complete.

The relative ease of its composition does not mean, however, that The Stranger was without its problems. In fact, the birth of the novel was long and fraught with difficulties both internal and external. Until his arrival in Paris, Camus struggled to get into the narrative, to create a new story, and to rework material from A Happy Death. Interestingly, most reviewers of Kaplan’s book, notably Robert Zaretsky (himself an accomplished Camus scholar) and John Williams[v], have devoted a majority of their reviews to the shortage of paper in France as the novel was set to go to press. “To say that the very existence of The Stranger was threatened by the material conditions of the war is no exaggeration, since paper supplies were becoming more and more precious. It looked at one point as if Camus would have to supply his own paper stock!” (136). Camus was in Oran with his family at the time, and was happy to help Gallimard with locating paper. The novel came very close to not being published, but paper stock was found at the last minute and Camus was not obliged to supply his own.

Once published, the novel met with immediate success. But perhaps its success was not so unusual after all. From the beginning Camus wanted the French publishing world, located in Paris, to represent him. In the chapter “A Jealous Teacher and a Generous Comrade,” Kaplan tells the story of Camus’ almost frantic correspondence with Jean Grenier and Pascal Pia, the teacher and the comrade, respectively, and their influence on The Stranger in its early stages. More importantly, if Camus were to move from being a provincial author to reaching a wider audience, one that would include the whole of Europe and possibly America, he would have to seek publication outside of Algeria. As Kaplan notes: “Yet Paris was still the center of book publishing in France, and if Camus wanted to publish outside Algeria, he’d eventually have to find a way to get his manuscript to the capital” (107). This, it seems to me, provides the necessary evidence that Camus was thinking bigger than his native land. He desired a world stage, a stage that would allow his work to be read by the widest possible public, and Gallimard was the publisher that could provide him with that opportunity. In his book The Existentialist Moment: The Rise of Sartre as a Public Intellectual, Patrick Baert illustrates the importance of publishing, especially of the publishing houses in Paris, in providing the necessary outlet for ideas. “Intellectual ideas spread mainly through publications. Whether through books, magazines, or articles, publishing is central to the rise of intellectual movements. For such movements to be successful, authors have to be well connected to the main publishers and need to have sufficient freedom and power to be able to write what they want to write.”[vi] The network Gallimard could provide would plug Camus into some of the most resonant writers and thinkers of the time. As mentioned above, The Stranger was not just a novel but also an important piece of a longer meditation on the absurd. Camus’ relationship with Gallimard, as Kaplan points out, was therefore a key component of his rise to international prominence. Quite frankly, without Gallimard, The Stranger might not have met with its tremendous success.

    Camus’ association with Gallimard was not the only key to his success, however. Gallimard’s star and existentialism’s major voice, Jean-Paul Sartre, also had a lot to do with the success of The Stranger. In his celebrated review of The Stranger, originally published in 1943, Sartre almost single handedly anoints Camus into the French intellectual network, thus solidifying his reputation as a resonant French intellectual. Still, early on in his review Sartre points out that, like its author, The Stranger is a book from “across the sea,” highlighting Camus’ Algerian heritage. Sartre’s generous and insightful review gives a certain intellectual legitimacy to the novel. Sartre: “The Stranger is a classical work, a work of order, written about the absurd and against the absurd.”[vii] This Apollonian form, in the Nietzschean sense, of the novel further reinforces the boundary lines that mark the absurd context, a context that we might fold into the Dionysian, again in the Nietzschean sense.

But it would be a mistake to consider The Stranger a French novel; it is, in almost every sense, an Algerian novel, a novel obsessed with the sun and the sea. What is perhaps closer to the novel’s intention is, at least in part, a Mediterranean world in a colonial context: the world of the pieds-noirs, who enjoy French citizenship and the protection it offers, as opposed to Arab subjects, who do not. The erasure of Arab subjectivity is one of the chief criticisms postcolonial scholars hurl at The Stranger and its author. Yet a purely postcolonial reading of The Stranger severely limits our understanding of the novel. As David Carroll points out, “I would even say that to judge and indict Camus [as Edward Said does] for his “colonialist ideology” is not to read him; it is not to treat his literary texts in terms of the specific questions they actually raise, the contradictions they confront, and the uncertainties and dilemmas they express. It is not to read them in terms of their narrative strategies and complexity. It is to bring everything back to the same political point and ignore or underplay everything that might complicate or refute such a judgment.”[viii] The postcolonial lens that has dominated readings of The Stranger has also relegated it and its creator to a graveyard for Eurocentric authors. Kaplan’s attention to detail, however, locates the nameless murdered Arab in The Stranger in a central, one might even say privileged, position. Almost from the beginning, Kaplan admits to being nearly obsessed with the figure of the nameless Arab. Indeed, the namelessness of this character is one of the pivotal points in her book. As Kaplan discovers, there was a nameless Arab in Camus’ life, one who would lead him straight to the central scene in The Stranger.

In 2015 Other Press published the English translation of Kamel Daoud’s The Meursault Investigation, a retelling of The Stranger from the point of view of the brother of the Arab killed on the beach by Meursault. Daoud, an Algerian journalist living in Oran, writes for Quotidien d’Oran, a French-language newspaper in Algeria. The Meursault Investigation is an interesting book that reads more in the style of Camus’ The Fall than The Stranger. The protagonist, speaking to us in the first person from a bar in Oran, informs us that there are other facts in the case that we did not hear, chief among them the name of his brother, Meursault’s victim, Musa: “Who was Musa? He was my brother. That’s what I’m getting at. I want to tell you the story Musa was never able to tell. When you opened the door of this bar, you opened a grave, my young friend” (4). Daoud’s text comes dangerously close to being fan fiction. However, there is something profoundly relevant in the novel. The Meursault Investigation demonstrates a deep understanding of The Stranger and of Camus’ style. Daoud clearly knows The Stranger intimately, and his contribution to the story is, indeed, worthy of consideration. The Meursault Investigation demands to be read, digested, and then read again in the context of the cultural as well as the literary conditions of Algeria before, during, and after its independence.

Kaplan devotes nearly an entire chapter (chapter 26) to Daoud’s novel and the figure of the unnamed Arab who appears in nearly spectral form in The Stranger. She tells us that she met Daoud in Oran in 2014, where he claimed “we don’t read The Stranger the same way as Americans, French, Algerians” (210). Kaplan’s reading of Daoud’s novel is a revelatory experience for her, and by association, for us. She strategically situates The Meursault Investigation both within and beyond the lens of postcolonial theory.

    Kaplan’s research into the source of the killing of the Arab scene in The Stranger is a remarkable piece of journalism. Her investigation led her through the towns and alleyways of Oran, to dusty archives, and populated streets, all despite an Algerian travel advisory for those holding a United States passport. “For two years, I had traveled to places in France and Algeria connected to The Stranger: I had walked down the former rue de Lyon in Algiers, past Camus’s childhood home. With photographer Kays Djilali, I climbed the steep Chemin Sidi Brahim, knocking on doors until we found the House Above the World, now the home of three generations of Kabyle women who speak neither French nor Arabic. With Father Guillaume Michel from Glycines Study Center in Algiers, I drove out to gold and blue vistas of Tipasa. In Paris, I stood in the dreary spot on the hill of Montmartre where Camus wrote in solitude” (211). At the end of the trail is a name: Kaddour Touil, and a story.

    Kaplan’s research demonstrates that it is not really Camus the author who haunts The Stranger, but rather it is the specter of Meursault who haunts Camus, both in life and after death. Meursault, as Olivier Todd informs us, is a combination of several people Camus knew. “The character of Meursault was inspired by Camus, Pascal Pia, Pierre Galindo, the Bensoussan brothers, Sauveur Galliero, and Yvonne herself. Marie was not Francine. Camus the writer mastered his novel in a way that Camus the man did not control in his life. Meursault never asked himself any questions, whereas Camus was always examining his actions and motivations.”[ix] Authors routinely use what and who they know for characters and their actions in books, but Camus’ relationship with Meursault seems to be as complicated as that character’s relationship with the reader. Kaplan’s book sheds a new light on the complexities of those relationships.

The Stranger is truly a work of world literature, in the sense that David Damrosch defines the concept.[x] With The Stranger we have an Algerian author who wrote in French, drew on Danish, Russian, and German thinking, and was stylistically influenced by American authors like Hemingway and James M. Cain. Alice Kaplan’s view of The Stranger joins a growing chorus of scholarship on this controversial book and its author, and she provides keen insight that opens up other avenues of thinking about both. Camus’ influence seems to be growing, not diminishing, as we move deeper into the twenty-first century, and this is needed, especially given the resurgence of nationalism and isolationist policies exemplified by Brexit and Trump. Perhaps it is only literature, and international fiction in particular, that can save us from ourselves. In this age of social media epitomized by the egotistical selfie, international fiction has become more important than ever. Kaplan’s book reminds us that nothing exists in a vacuum, that great works of art come about contextually and pan-culturally. The Stranger might never have been a success without the French existentialist network of the time.

    Andrew Martino is Professor of English at Southern New Hampshire University where he also directs the University Honors Program. He has published on contemporary literature and is currently finishing a manuscript on the concept of security in the work of Paul Bowles.

    Notes

    [i] Edward W. Said. Culture and Imperialism. (New York: Vintage Books, 1994), 174.

[ii] Hannah Arendt. “French Existentialism.” Essays in Understanding, 1930-1954. (New York: Schocken Books, 1994), 192.

[iii] Albert Camus. The Myth of Sisyphus. Trans. Justin O’Brien. (New York: Vintage Books, 1991), 30.

    [iv] Randall Collins. The Sociology of Philosophies: A Global Theory of Intellectual Change. (Cambridge, Massachusetts: The Belknap Press of Harvard University Press, 2002), 764.

    [v] See Zaretsky’s review in Los Angeles Review of Books (https://lareviewofbooks.org/article/biography-zaretsky-kaplan-camus/) and Williams’ review in the New York Times (Sept. 15, 2016).

    [vi] Patrick Baert. The Existentialist Moment: The Rise of Sartre as a Public Intellectual. (Cambridge, England: Polity Press, 2015), 138-139.

    [vii] Jean-Paul Sartre. “The Stranger Explained.” We Have Only This Life to Live: The Selected Essays of Jean-Paul Sartre 1939-1975. Ed. Ronald Aronson and Adrian Van Den Hoven. (New York: New York Review Books, 2013), 43.

    [viii] David Carroll. Albert Camus the Algerian: Colonialism, Terrorism, Justice. (New York: Columbia University Press, 2007), 15.

[ix] Olivier Todd. Albert Camus: A Life. (New York: Alfred A. Knopf, 1997), 107.

[x] Here I am thinking specifically of Damrosch’s theory of circulation. See David Damrosch’s What Is World Literature? (Princeton: Princeton University Press, 2003) for a full definition of the concept.

  • Daniel Greene – Digital Dark Matters


    a review of Simone Browne, Dark Matters: On the Surveillance of Blackness (Duke University Press, 2015)

    by Daniel Greene

    ~

    The Book of Negroes was the first census of black residents of North America. In it, the British military took down the names of some three thousand ex-slaves between April and November of 1783, alongside details of appearance and personality, destination and, if applicable, previous owner. The self-emancipated—some free, some indentured to English or German soldiers—were seeking passage to Canada or Europe, and lobbied the defeated British Loyalists fleeing New York City for their place in the Book. The Book of Negroes thus functioned as “the first government-issued document for state-regulated migration between the United States and Canada that explicitly linked corporeal markers to the right to travel” (67). An index of slave society in turmoil, its data fields were populated with careful gradations of labor power, denoting the value of black life within slave capitalism: “nearly worn out,” “healthy negress,” “stout labourer.”  Much of the data in The Book of Negroes was absorbed from so-called Birch Certificates, issued by a British Brigadier General of that name, which acted as passports certifying the freedom of ex-slaves and their right to travel abroad. The Certificates became evidence submitted by ex-slaves arguing for their inclusion in the Book of Negroes, and became sites of contention for those slave-owners looking to reclaim people they saw as property.

    If, as Simone Browne argues in Dark Matters: On the Surveillance of Blackness, “the Book of Negroes [was] a searchable database for the future tracking of those listed in it” (83), the details of preparing, editing, monitoring, sorting and circulating these data become direct matters of (black) life and death. Ex-slaves would fight for their legibility within the system through their use of Birch Certificates and the like; but they had often arrived in New York in the first place through a series of fights to remain illegible to the “many start-ups in slave-catching” that arose to do the work of distant slavers. Aliases, costumes, forged documents and the like were on the one hand used to remain invisible to the surveillance mechanisms geared towards capture, and on the other hand used to become visible to the surveillance mechanisms—like the Book—that could potentially offer freedom. Those ex-slaves who failed to appear as the right sort of data were effectively “put on a no-sail list” (68), and either held in New York City or re-rendered into property and delivered back to the slave-owner.

Start-ups, passports, no-sail lists, databases: these may appear anachronistic at first, modern technological thinking out of sync with colonial America. But Browne deploys these labels with care and precision, like much else in this remarkable book. Dark Matters reframes our contemporary thinking about surveillance, and digital media more broadly, through a simple question with challenging answers: What if our mental map of the global surveillance apparatus began not with 9/11 but with the slave ship? Surveillance is considered here not as a specific technological development but as a practice of tracking people and putting them into place. Browne demonstrates how certain people have long been imagined as out of place and how technologies of control and order were developed in order to diagnose, map, and correct these conditions: “Surveillance is nothing new to black folks. It is a fact of antiblackness” (10). That this “fact” is often invisible even in our studies of surveillance, and of digital media more broadly, speaks, perversely, to the power of white supremacy to structure our vision of the world. Browne’s apparent anachronisms make stranger the techniques of surveillance with which we are familiar, revealing the dark matter that has structured their use and development all along. This dark matter is difficult to visualize, so Browne shows us how to trace it through its effects: the ordering of people into place, and the escape from that order through “freedom acts” of obfuscation, sabotage, and trickery.

This, then, is a book about new (and very old) methods of research in surveillance studies in particular, and digital studies in general, centered in black studies, particularly the work of critical theorists of race such as Saidiya Hartman and Sylvia Wynter, who find in chattel slavery a prototypical modernity. More broadly, it is a book about new ways of engaging with our technocultural present, centered in the black diasporic experience of slavery and its afterlife. Frantz Fanon is a key figure throughout. Browne introduces us to her own approach through an early reflection on the revolutionary philosopher’s dying days in Washington, DC, overcome with paranoia over the very real surveillance to which he suspected he was subjected. Browne’s FOIA requests to the CIA regarding its tracking of Fanon during his time at the National Institutes of Health Clinical Center returned only a newspaper clipping, a book review, and a heavily redacted FBI memo reporting on Fanon’s travels. So she digs further into the archive, finding in Fanon’s lectures at the University of Tunis, delivered in the late 1950s after he was expelled from Algeria by French colonial authorities, a critical exploration of policing and surveillance. Fanon’s psychiatric imagination, which granted such visceral connection between white supremacist institutions and lived black experience in The Wretched of the Earth, here addresses the new techniques of ‘control by quantification’—punch clocks, time sheets, phone taps, and CCTV—in factories and department stores, and the alienation engendered in the surveilled.

    Browne’s recovery of this work grounds a creative extension of Fanon’s thinking into surveillance practices and surveillance studies. From his concept of “epidermalization”—“the imposition of race on the body” (7)—Browne builds a theory of racializing surveillance. Like many other key terms in Dark Matters, this names an empirical phenomenon—the crafting of racial boundaries through tracking and monitoring—and critiques the “absented presence” (13) of race in surveillance studies. Its opposition is found in dark sousveillance, a revision of Steve Mann’s term for watching the watchers that, again, describes both the freedom acts of black folks against a visual field saturated with racism, as well as an epistemology capable of perceiving, studying, and deconstructing apparatuses of racial surveillance.

Each chapter of Dark Matters presents a different archive of racializing surveillance paired with reflections on black cultural production that Browne reads as dark sousveillance. At each turn, Browne encourages us to see in slavery and its afterlife new modes of control, old ways of studying them, and potential paths of resistance. Her most direct critique of surveillance studies comes in Chapter 1’s precise exegesis of the key ideas that emerge from reading Jeremy Bentham’s plans for the Panopticon and Foucault’s study of it—the signal archive and theory of the field—against the plans for the slave ship Brookes. It turns out that Bentham travelled on a ship transporting slaves during the voyage on which he sketched out the Panopticon, a model penitentiary wherein, through the clever use of lights, mirrors, and partitions, prisoners are totally isolated from one another and never sure whether they are being monitored or not. The archetype for modern power as self-discipline was thus nurtured, counter to its own telling, alongside sovereign violence. Browne’s reading of archives from the slave ship, the auction block, and the plantation reveals the careful biopolitics that created “blackness as a saleable commodity in the Western Hemisphere” (42). She asks how “the view from ‘under the hatches’” of Bentham’s Turkish ship, transporting, in his words, “18 young negresses (slaves),” might change our narrative about the emergence of disciplinary power and the modern management of life as a resource. It becomes clear that the power to instill self-governance through surveillance did not subordinate, but rather partnered with, the brutal spectacle of sovereign power intended to educate enslaved people on the limits of their humanity. This correction to the Foucauldian narrative is sorely necessary in a field, and in a general political conversation about surveillance, that too often focuses on the technical novelty of drones, to give one example, without a connection to a generation learning to fear the skies.

“Stowage of the British slave ship Brookes under the regulated slave trade act of 1788.” Illustration, 1788. Library of Congress Rare Book and Special Collections Division, Washington, D.C.

These sorts of theoretical course corrections are among the most valuable lessons in Dark Matters. There is fastidious empirical work here, particularly in Chapter 2’s exploration of the Book of Negroes and colonial New York’s lantern laws requiring all black and indigenous people to bear lights after dark. But this empirical work is not the book’s focus, nor its main promise. That promise comes in prompting new empirical and political questions about how we see surveillance and what it means, and for whom, through an archaeology of black life under surveillance (indeed, Chapter 4, on airport surveillance, is the one I find weakest, largely because it abandons this archaeological technique and focuses wholly on the present). Chapter 1’s reading of Charles William Tait’s prescriptions for slave management, for example, is part of a broader turn in the study of the history of capitalism wherein the roots of modern business practices like data-driven human resource management are traced to the supposedly pre-modern slave economy. Chapter 3’s assertion that slave branding “was a biometric technology…a measure of slavery’s making, marking, and marketing of the black subject as commodity” (91) does similar work, making strange the contemporary security technologies that purport to reveal racial truths which unwilling subjects do not give up. Facial recognition technologies and other biometrics are calibrated based on what Browne calls a “prototypical whiteness…privileged in enrollment, measurement, and recognition processes…reliant upon dark matter for its own meaning” (162). Particularly in the context of border control, these default settings reveal the calculations built into our security technologies regarding who “counts” enough to be recognized: calculations grounded in an unceasing desire for new means with which to draw clear-cut racial boundaries.

The point here is not that a direct line of technological development can be drawn from brands to facial recognition or from lanterns to ankle bracelets. Rather, if racism, as Ruth Wilson Gilmore argues, is “the state-sanctioned or extralegal production and exploitation of group-differentiated vulnerability to premature death,” then what Browne points to are methods of group differentiation: the means by which the value of black lives is calculated, and how those calculations are stored, transmitted, and concretized in institutional life. If Browne’s cultural studies approach neglects a sustained empirical engagement with a particular mode of racializing surveillance—say, the uneven geography produced by the Fugitive Slave Act, mentioned in passing in relation to “start-ups in slave catching”—it is because she has taken on the unenviable task of shifting the focus of whole fields to dark matter previously ignored, opening a series of doors through which readers can glimpse the technologies that make race.

Here, then, is a space cleared for surveillance studies, and digital studies more broadly, in an historical moment when so many are loudly proclaiming that Black Lives Matter, when the dark sousveillance of smartphone recordings has made the violence of institutional racism impossible to ignore. Work in digital studies has readily and repeatedly unearthed the capitalist imperatives built into our phones, feeds, and friends lists. Shoshana Zuboff’s recent work on “surveillance capitalism” is perhaps a bellwether here: a rich theorization of the data accumulation imperative that transforms intra-capitalist competition, the nature of the contract, and the paths of everyday life. But her account of the growth of an extractive data economy that leads to a Big Other of behavior modification does not so far have a place for race.

    This is not a call on my part to sprinkle a missing ingredient atop a shoddy analysis in order to check a box. Zuboff is critiqued here precisely because she is among our most thoughtful, careful critics of contemporary capitalism. Rather, Browne’s account of surveillance capitalism—though she does not call it that—shows that race does not need to be introduced to the critical frame from outside. That dark matter has always been present, shaping what is visible even if it goes unseen itself. This manifests in at least two ways in Zuboff’s critique of the Big Other. First, her critique of Google’s accumulation of “data exhaust” is framed primarily as a ‘pull’ of ever more sites and sensors into Google’s maw, passively given up by users. But there is a great deal of “push” here as well. The accumulation of consumable data also occurs through the very human work of solving CAPTCHAs and scanning books. The latter is the subject of an iconic photo that shows the brown hand of a Google Books scanner—a low-wage subcontractor, index finger wrapped in plastic to avoid cuts from a day of page-turning—caught on a scanned page. Second, for Zuboff part of the frightening novelty of Google’s data extraction regime is its “formal indifference” to individual users, as well as to existing legal regimes that might impede the extraction of population-scale data. This, she argues, stands in marked contrast to the midcentury capitalist regimes which embraced a degree of democracy in order to prop up both political legitimacy and effective demand. But this was a democratic compromise limited in time and space. Extractive capitalist regimes of the past and present, including those producing the conflict minerals so necessary for hardware running Google services, have been marked by, at best, formal indifference in the North to conditions in the South. An analysis of surveillance capitalism’s struggle for hegemony would be greatly enriched by a consideration of how industrial capitalism legitimated itself in the metropole at the expense of the colony. Nor is this racial-economic dynamic and its political legitimation purely a cross-continental concern. US prisons have long extracted value from the incarcerated, racialized as second-class citizens. Today this practice continues, but surveillance technologies like ankle bracelets extend this extraction beyond prison walls, often at parolees’ expense.

    A Google Books scanner’s hand, caught working on W.E.B. Du Bois’ The Souls of Black Folk. Via The Art of Google Books.

    Capitalism has always, as Browne’s notes on plantation surveillance make clear, been racial capitalism. Capital enters the world arrayed in the blood of primitive accumulation, and reproduces itself in part through the violent differentiation of labor powers. While the accumulation imperative has long been accepted as a value shaping media’s design and use, it is unfortunate that race has largely entered the frame of digital studies, and particularly, as Jessie Daniels argues, internet studies, through a study of either racial variables (e.g., “race” inheres in the body of the nonwhite person and causes other social phenomena) or racial identities (e.g., race is represented through minority cultural production, racism is produced through individual prejudice). There are perhaps good institutional reasons for this framing, owing to disciplinary training and the like, beyond the colorblind political ethic of much contemporary liberalism. But it has left us without digital stories of race (although there are certainly exceptions, particularly in the work of writers like Lisa Nakamura and her collaborators) to stand on par with our digital stories of capitalism, race being perceived instead as a niche concern—much less digital stories of racial capitalism.

    Browne provides a path forward for a study of race and technology more attuned to institutions and structures, to the long shadows old violence casts on our daily, digital lives. This slim, rich book is ultimately a reflection on method, on learning new ways to see. “Technology is made of people!” is where so many of our critiques end, discovering, once again, the values we build into machines. This is where Dark Matters begins. And it proceeds through slave ships, databases, branding irons, iris scanners, airports, and fingerprints to map the built project of racism and the work it takes to pass unnoticed in those halls or steal the map and draw something else entirely.

    _____

    Daniel Greene holds a PhD in American Studies from the University of Maryland. He is currently a Postdoctoral Researcher with the Social Media Collective at Microsoft Research, studying the future of work and the future of unemployment. He lives online at dmgreene.net.

    Back to the essay

  • Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release

    Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release

    By Audrey Watters

    ~

    This talk was delivered at Virginia Commonwealth University today as part of a seminar co-sponsored by the Departments of English and Sociology and the Media, Art, and Text PhD Program. The slides are also available here.

    Thank you very much for inviting me here to speak today. I’m particularly pleased to be speaking to those from Sociology and those from the English and those from the Media, Art, and Text departments, and I hope my talk can walk the line between and among disciplines and methods – or piss everyone off in equal measure. Either way.

    This is the last public talk I’ll deliver in 2016, and I confess I am relieved (I am exhausted!) as well as honored to be here. But when I finish this talk, my work for the year isn’t done. No rest for the wicked – ever, but particularly in the freelance economy.

    As I have done for the past six years, I will spend the rest of November and December publishing my review of what I deem the “Top Ed-Tech Trends” of the year. It’s an intense research project that usually tops out at about 75,000 words, written over the course of four to six weeks. I pick ten trends and themes in order to look closely at the recent past, the near-term history of education technology. Because of the amount of information that is published about ed-tech – the amount of information, its irrelevance, its incoherence, its lack of context – it can be quite challenging to keep up with what is really happening in ed-tech. And just as importantly, what is not happening.

    So that’s what I try to do. And I’ll boast right here – no shame in that – no one else does as in-depth or thorough a job as I do, certainly no one who is entirely independent from venture capital, corporate or institutional backing, or philanthropic funding. (Of course, if you look for those education technology writers who are independent from venture capital, corporate or institutional backing, or philanthropic funding, there is pretty much only me.)

    The stories that I write about the “Top Ed-Tech Trends” are the antithesis of most articles you’ll see about education technology that invoke “top” and “trends.” For me, still framing my work that way – “top trends” – is a purposeful rhetorical move to shed light, to subvert, to offer a sly commentary of sorts on the shallowness of what passes as journalism, criticism, analysis. I’m not interested in making quickly thrown-together lists and bullet points. I’m not interested in publishing clickbait. I am interested nevertheless in the stories – shallow or sweeping – that we tell and spread about technology and education technology, about the future of education technology, about our technological future.

    Let me be clear, I am not a futurist – even though I’m often described as “ed-tech’s Cassandra.” The tagline of my website is “the history of the future of education,” and I’m much more interested in chronicling the predictions that others make, or have made, about the future of education than I am in writing predictions of my own.

    One of my favorites: “Books will soon be obsolete in schools,” Thomas Edison said in 1913. Any day now. Any day now.

    Here are a couple of more recent predictions:

    “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.” – that’s Sebastian Thrun, best known perhaps for his work at Google on the self-driving car and as a co-founder of the MOOC (massive open online course) startup Udacity. The quotation is from 2012.

    And from 2013, by Harvard Business School professor, author of the book The Innovator’s Dilemma, and popularizer of the phrase “disruptive innovation,” Clayton Christensen: “In fifteen years from now, half of US universities may be in bankruptcy. In the end I’m excited to see that happen. So pray for Harvard Business School if you wouldn’t mind.”

    Pray for Harvard Business School. No. I don’t think so.

    Both of these predictions are fantasy. Nightmarish, yes. But fantasy. Fantasy about a future of education. It’s a powerful story, but not a prediction made based on data or modeling or quantitative research into the growing (or shrinking) higher education sector. Indeed, according to the latest statistics from the Department of Education – now granted, this is from the 2012–2013 academic year – there are 4726 degree-granting postsecondary institutions in the United States. A 46% increase since 1980. There are, according to another source (non-governmental and less reliable, I think), over 25,000 universities in the world. This number is increasing year-over-year as well. So to predict that the vast vast majority of these schools (save Harvard, of course) will go away in the next decade or so or that they’ll be bankrupt or replaced by Silicon Valley’s version of online training is simply wishful thinking – dangerous, wishful thinking from two prominent figures who will benefit greatly if this particular fantasy comes true (and not just because they’ll get to claim that they predicted this future).

    Here’s my “take home” point: if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

    Fantasy. Fortune-telling. Or, as capitalism prefers to call it, “market research.”

    “Market research” involves fantastic stories of future markets. These predictions are often accompanied with a press release touting the size that this or that market will soon grow to – how many billions of dollars schools will spend on computers by 2020, how many billions of dollars of virtual reality gear schools will buy by 2025, how many billions of dollars schools will spend on robot tutors by 2030, how many billions of dollars companies will spend on online training by 2035, how big the coding bootcamp market will be by 2040, and so on. The markets, according to the press releases, are always growing. Fantasy.

    In 2011, the analyst firm Gartner predicted that annual tablet shipments would exceed 300 million units by 2015. Half of those, the firm said, would be iPads. IDC estimates that the total number of shipments in 2015 was actually around 207 million units. Apple sold just 50 million iPads. That’s not even the best worst Gartner prediction. In October of 2006, Gartner said that Apple’s “best bet for long-term success is to quit the hardware business and license the Mac to Dell.” Less than three months later, Apple introduced the iPhone. The very next day, Apple shares hit $97.80, an all-time high for the company. By 2012 – yes, thanks to its hardware business – Apple’s stock had risen to the point that the company was worth a record-breaking $624 billion.

    But somehow, folks – including many, many in education and education technology – still pay attention to Gartner. They still pay Gartner a lot of money for consulting and forecasting services.

    People find comfort in these predictions, in these fantasies. Why?

    Gartner is perhaps best known for its “Hype Cycle,” a proprietary graphic presentation that claims to show how emerging technologies will be adopted.

    According to Gartner, technologies go through five stages: first, there is a “technology trigger.” As the new technology emerges, a lot of attention is paid to it in the press. Eventually it reaches the second stage: the “peak of inflated expectations.” So many promises have been made about this technological breakthrough. Then, the third stage: the “trough of disillusionment.” Interest wanes. Experiments fail. Promises are broken. As the technology matures, the hype picks up again, more slowly – this is the “slope of enlightenment.” Eventually the new technology becomes mainstream – the “plateau of productivity.”
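
    To make the model’s shape concrete, here is a minimal sketch in Python – a toy with invented constants, not Gartner’s proprietary method, which is unpublished – of the five phases rendered as a single curve of “expectations” over time:

        # Toy sketch of the Hype Cycle as one curve of "expectations" over
        # time: a sharp early peak (inflated expectations), a trough
        # (disillusionment), then a slow rise to a "plateau of productivity."
        # Every constant here is invented for illustration.
        import math

        def hype_curve(t):
            peak = math.exp(-((t - 1.0) ** 2) / 0.15)     # peak of inflated expectations
            plateau = 0.6 / (1.0 + math.exp(-(t - 4.0)))  # slope of enlightenment -> plateau
            return peak + plateau

        ts = [i * 0.04 for i in range(201)]               # t from 0 to 8
        ys = [hype_curve(t) for t in ts]
        # Note what the model cannot express: failure, abandonment,
        # disappearance. Every technology moves forward along the curve.
        print(f"expectations peak at t={ts[ys.index(max(ys))]:.2f}, settling near {ys[-1]:.2f}")

    Even in toy form, one limit shows: nothing can fall off the curve.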

    It’s not that hard to identify significant problems with the Hype Cycle, not the least of which being that it’s not a cycle. It’s a curve. It’s not a particularly scientific model. It demands that technologies always move forward along it.

    Gartner says its methodology is proprietary – which is code for “hidden from scrutiny.” Gartner says, rather vaguely, that it relies on scenarios and surveys and pattern recognition to place technologies on the line. But most of the time when Gartner uses the word “methodology,” it is trying to signify “science,” and what it really means is “expensive reports you should buy to help you make better business decisions.”

    Can it really help you make better business decisions? It’s just a curve with some technologies plotted along it. The Hype Cycle doesn’t help explain why technologies move from one stage to another. It doesn’t account for technological precursors – new technologies rarely appear out of nowhere – or political or social changes that might prompt or preclude adoption. And, in the end, it is simply too optimistic, unreasonably so, I’d argue. No matter how dumb or useless a new technology is, according to the Hype Cycle at least, it will eventually become widely adopted. Where would you plot the Segway, for example? (In 2008, ever hopeful, Gartner insisted that “This thing certainly isn’t dead and maybe it will yet blossom.” Maybe it will, Gartner. Maybe it will.)

    And maybe this gets to the heart as to why I’m not a futurist. I don’t share this belief in an increasingly technological future; I don’t believe that more technology means the world gets “more better.” I don’t believe that more technology means that education gets “more better.”

    Every year since 2004, the New Media Consortium, a non-profit organization that advocates for new media and new technologies in education, has issued its own forecasting report, the Horizon Report, naming a handful of technologies that, as the name suggests, it contends are “on the horizon.”

    Unlike Gartner, the New Media Consortium is fairly transparent about how this process works. The organization invites various “experts” to participate in the advisory board that, throughout the course of each year, works on assembling its list of emerging technologies. The process relies on the Delphi method, whittling down a long list of trends and technologies by a process of ranking and voting until six key trends, six emerging technologies remain.

    Disclosure/disclaimer: I am a folklorist by training. The last time I took a class on “methods” was, like, 1998. And admittedly I never learned about the Delphi method – what the New Media Consortium uses for this research project – until I became a scholar of education technology looking into the Horizon Report. As a folklorist, of course, I did catch the reference to the Oracle of Delphi.

    Like so much of computer technology, the roots of the Delphi method are in the military, developed during the Cold War to forecast technological developments that the military might use and that the military might have to respond to. The military wanted better predictive capabilities. But – and here’s the catch – it wanted to identify technology trends without being caught up in theory. It wanted to identify technology trends without developing models. How do you do that? You gather experts. You get those experts to consensus.
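
    Here is a minimal sketch of what a Delphi-style whittling looks like mechanically – a panel, scores, iterative elimination – with hypothetical trend names and random numbers standing in for expert judgment; it is not the New Media Consortium’s actual instrument, which involves human deliberation rather than code:

        # Toy Delphi-style process: a panel scores a long list of candidate
        # trends, the low scorers are dropped, and scoring repeats until six
        # remain. All names and scores are invented for illustration.
        import random

        def delphi_round(candidates, experts, keep):
            """One ranking-and-voting round: average the panel's scores and
            keep only the top `keep` candidates."""
            scores = {c: sum(e(c) for e in experts) / len(experts) for c in candidates}
            return sorted(candidates, key=scores.get, reverse=True)[:keep]

        def run_delphi(candidates, experts, target=6):
            # Whittle the list roughly in half each round until `target` remain.
            while len(candidates) > target:
                candidates = delphi_round(candidates, experts,
                                          max(target, len(candidates) // 2))
            return candidates

        random.seed(0)
        trends = [f"trend-{i}" for i in range(40)]              # hypothetical long list
        panel = [lambda c: random.random() for _ in range(12)]  # hypothetical experts
        print(run_delphi(trends, panel))                        # six survivors

    Note what the procedure optimizes for: convergence. Six technologies will always emerge, whether or not six deserve to.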

    So here is the consensus from the past twelve years of the Horizon Report for higher education. These are the technologies it has identified that are between one and five years from mainstream adoption:

    It’s pretty easy, as with the Gartner Hype Cycle, to look at these predictions and note that they are almost all wrong in some way or another.

    Some are wrong because, say, the timeline is a bit off. The Horizon Report said in 2010 that “open content” was less than a year away from widespread adoption. I think we’re still inching towards that goal – admittedly “open textbooks” have seen a big push at the federal and at some state levels in the last year or so.

    Some of these predictions are just plain wrong. Virtual worlds in 2007, for example.

    And some are wrong because, to borrow a phrase from the theoretical physicist Wolfgang Pauli, they’re “not even wrong.” Take “collaborative learning,” for example, which this year’s K–12 report posits as a mid-term trend. Like, how would you argue against “collaborative learning” as occurring – now or some day – in classrooms? As a prediction about the future, it is not even wrong.

    But wrong or right – that’s not really the problem. Or rather, it’s not the only problem even if it is the easiest critique to make. I’m not terribly concerned about the accuracy of the predictions about the future of education technology that the Horizon Report has made over the last decade. But I do wonder how these stories influence decision-making across campuses.

    What might these predictions – this history of the future – tell us about the wishful thinking surrounding education technology and about the direction that the people the New Media Consortium views as “experts” want the future to take? What can we learn about the future by looking at the history of our imagining about education’s future? What role does powerful ed-tech storytelling (also known as marketing) play in shaping that future? Because remember: to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.

    It’s both convenient and troubling, then, that these forward-looking reports act as though they have no history of their own; they purposefully minimize or erase their own past. Each year – and I think this is what irks me most – the NMC fails to look back at what it had predicted just the year before. It never revisits older predictions. It never mentions that they even exist. Gartner too removes technologies from the Hype Cycle each year with no explanation for what happened, no explanation as to why trends suddenly appear and disappear and reappear. These reports only look forward, with no history to ground their direction in.

    I understand why these sorts of reports exist, I do. I recognize that they are rhetorically useful to certain people in certain positions making certain claims about “what to do” in the future. You can write in a proposal that, “According to Gartner… blah blah blah.” Or “The Horizon Report indicates that this is one of the most important trends in coming years, and that is why we need to commit significant resources – money and staff – to this initiative.” But then, let’s be honest, these reports aren’t about forecasting a future. They’re about justifying expenditures.

    “The best way to predict the future is to invent it,” computer scientist Alan Kay once famously said. I’d wager that the easiest way is just to make stuff up and issue a press release. I mean, really. You don’t even need the pretense of a methodology. Nobody is going to remember what you predicted. Nobody is going to remember if your prediction was right or wrong. Nobody – certainly not the technology press, which is often painfully unaware of any history, near-term or long ago – is going to take you to task. This is particularly true if you make your prediction vague – like “within our lifetime” – or set your target date just far enough in the future – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Let’s consider: is there something about the field of computer science in particular – and its ideological underpinnings – that makes it more prone to encourage, embrace, espouse these sorts of predictions? Is there something about Americans’ faith in science and technology, about our belief in technological progress as a signal of socio-economic or political progress, that makes us more susceptible to take these predictions at face value? Is there something about our fears and uncertainties – and not just now, days before this Presidential Election, when we are obsessed with polls, refreshing Nate Silver’s website again and again – that makes us prone to seek comfort, reassurance, certainty from those who can claim that they know what the future will hold?

    “Software is eating the world,” investor Marc Andreessen pronounced in a Wall Street Journal op-ed in 2011. “Over the next 10 years,” he wrote, “I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.” “Buy stock in technology companies” was really the underlying message of Andreessen’s op-ed; this isn’t another tech bubble, he wanted to reassure investors. But many in Silicon Valley have interpreted this pronouncement – “software is eating the world” – as an affirmation and an inevitability. I hear it repeated all the time – “software is eating the world” – as though, once again, repeating things makes them true or makes them profound.

    If we believe that, indeed, “software is eating the world,” that we are living in a moment of extraordinary technological change, that we must – according to Gartner or the Horizon Report – be ever-vigilant about emerging technologies, that these technologies are contributing to uncertainty, to disruption, then it seems likely that we will demand a change in turn to our educational institutions (to lots of institutions, but let’s just focus on education). This is why this sort of forecasting is so important for us to scrutinize – to do so quantitatively and qualitatively, to look at methods and at theory, to ask who’s telling the story and who’s spreading the story, to listen for counter-narratives.

    This technological change, according to some of the most popular stories, is happening faster than ever before. It is creating an unprecedented explosion in the production of information. New information technologies, so we’re told, must therefore change how we learn – change what we need to know, how we know, how we create and share knowledge. Because of the pace of change and the scale of change and the locus of change (that is, “Silicon Valley” not “The Ivory Tower”) – again, so we’re told – our institutions, our public institutions can no longer keep up. These institutions will soon be outmoded, irrelevant. Again – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    These forecasting reports, these predictions about the future make themselves necessary through this powerful refrain, insisting that technological change is creating so much uncertainty that decision-makers need to be ever vigilant, ever attentive to new products.

    As Neil Postman and others have cautioned us, technologies tend to become mythic – unassailable, God-given, natural, irrefutable, absolute. So it is predicted. So it is written. Techno-scripture, to which we hand over a certain level of control – to the technologies themselves, sure, but just as importantly to the industries and the ideologies behind them. Take, for example, the founding editor of the technology trade magazine Wired, Kevin Kelly. His 2010 book was called What Technology Wants, as though technology is a living being with desires and drives; the title of his 2016 book, The Inevitable. We humans, in this framework, have no choice. The future – a certain flavor of technological future – is pre-ordained. Inevitable.

    I’ll repeat: I am not a futurist. I don’t make predictions. But I can look at the past and at the present in order to dissect stories about the future.

    So is the pace of technological change accelerating? Is society adopting technologies faster than it’s ever done before? Perhaps it feels like it. It certainly makes for a good headline, a good stump speech, a good keynote, a good marketing claim, a good myth. But the claim starts to fall apart under scrutiny.

    This graph comes from an article in the online publication Vox that includes a couple of those darling made-to-go-viral videos of young children using “old” technologies like rotary phones and portable cassette players – highly clickable, highly sharable stuff. The visual argument in the graph: the number of years it takes for one quarter of the US population to adopt a new technology has been shrinking with each new innovation.

    But the data is flawed. Some of the dates given for these inventions are questionable at best, if not outright inaccurate. If nothing else, it’s not so easy to pinpoint the exact moment, the exact year when a new technology came into being. There often are competing claims as to who invented a technology and when, for example, and there are early prototypes that may or may not “count.” James Clerk Maxwell did publish A Treatise on Electricity and Magnetism in 1873. Alexander Graham Bell made his famous telephone call to his assistant in 1876. Guglielmo Marconi did file his patent for radio in 1897. John Logie Baird demonstrated a working television system in 1926. The MITS Altair 8800, an early personal computer that came as a kit you had to assemble, was released in 1975. But Martin Cooper, a Motorola exec, made the first mobile telephone call in 1973, not 1983. And the Internet? The first ARPANET link was established between UCLA and the Stanford Research Institute in 1969. The Internet was not invented in 1991.

    So we can reorganize the bar graph. But it’s still got problems.

    The Internet did become more privatized, more commercialized around that date – 1991 – and thanks to companies like AOL, a version of it became more accessible to more people. But if you’re looking at when technologies became accessible to people, you can’t use 1873 as your date for electricity, you can’t use 1876 as your year for the telephone, and you can’t use 1926 as your year for the television. It took years for the infrastructure of electricity and telephony to be built, for access to become widespread; and subsequent technologies, let’s remember, have simply piggy-backed on these existing networks. Our Internet service providers today are likely telephone and TV companies; our houses are already wired for new WiFi-enabled products and predictions.

    Economic historians who are interested in these sorts of comparisons of technologies and their effects typically set the threshold at 50% – that is, how long it takes after a technology is commercialized (not simply “invented”) for half the population to adopt it. This way, you’re not only looking at the economic behaviors of the wealthy, the early-adopters, the city-dwellers, and so on (but to be clear, you are still looking at a particular demographic – the privileged half).
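
    As a back-of-the-envelope illustration of that measure, here is a minimal sketch computing years-to-half-adoption from an adoption series; the gadget and its numbers below are invented for illustration, not real data:

        # Toy version of the economic historians' measure: years from
        # commercialization until half of households have adopted.
        # The adoption shares below are invented, not real data.

        def years_to_half(adoption_by_year, launch_year):
            """Return years elapsed from launch until adoption first
            reaches 50% of households, or None if it never does."""
            for year in sorted(adoption_by_year):
                if adoption_by_year[year] >= 0.5:
                    return year - launch_year
            return None

        # A hypothetical gadget commercialized in 2015.
        gadget = {2016: 0.05, 2018: 0.18, 2020: 0.37, 2022: 0.51, 2024: 0.62}
        print(years_to_half(gadget, launch_year=2015))  # -> 7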

    And that changes the graph again:

    How many years do you think it’ll be before half of US households have a smart watch? A drone? A 3D printer? Virtual reality goggles? A self-driving car? Will they? Will it be fewer years than 9? I mean, it would have to be if, indeed, “technology” is speeding up and we are adopting new technologies faster than ever before.

    Some of us might adopt technology products quickly, to be sure. Some of us might eagerly buy every new Apple gadget that’s released. But we can’t claim that the pace of technological change is speeding up just because we personally go out and buy a new iPhone every time Apple tells us the old model is obsolete. Removing the headphone jack from the latest iPhone does not mean “technology changing faster than ever,” nor does showing how headphones have changed since the 1970s. None of this is really a reflection of the pace of change; it’s a reflection of our disposable income and an ideology of obsolescence.

    Some economic historians like Robert J. Gordon actually contend that we’re not in a period of great technological innovation at all; instead, we find ourselves in a period of technological stagnation. The changes brought about by the development of information technologies in the last 40 years or so pale in comparison, Gordon argues (and this is from his recent book The Rise and Fall of American Growth: The US Standard of Living Since the Civil War), to those “great inventions” that powered massive economic growth and tremendous social change in the period from 1870 to 1970 – namely electricity, sanitation, chemicals and pharmaceuticals, the internal combustion engine, and mass communication. But that doesn’t jibe with “software is eating the world,” does it?

    Let’s return briefly to those Horizon Report predictions again. They certainly reflect this belief that technology must be speeding up. Every year, there’s something new. There has to be. That’s the purpose of the report. The horizon is always “out there,” off in the distance.

    But if you squint, you can see each year’s report also reflects a decided lack of technological change. Every year, something is repeated – perhaps rephrased. And look at the predictions about mobile computing:

    • 2006 – the phones in their pockets
    • 2007 – the phones in their pockets
    • 2008 – oh crap, we don’t have enough bandwidth for the phones in their pockets
    • 2009 – the phones in their pockets
    • 2010 – the phones in their pockets
    • 2011 – the phones in their pockets
    • 2012 – the phones too big for their pockets
    • 2013 – the apps on the phones too big for their pockets
    • 2015 – the phones in their pockets
    • 2016 – the phones in their pockets

    This hardly makes the case for technological speeding up, for technology changing faster than it’s ever changed before. But that’s the story that people tell nevertheless. Why?

    I pay attention to this story, as someone who studies education and education technology, because I think these sorts of predictions, these assessments about the present and the future, frequently serve to define, disrupt, destabilize our institutions. This is particularly pertinent to our schools, which are already caught between a boundedness to the past – replicating scholarship, cultural capital, for example – and the demands that they bend to the future – preparing students for civic, economic, social relations yet to be determined.

    But I also pay attention to these sorts of stories because there’s that part of me that is horrified at the stuff – predictions – that people pass off as true or as inevitable.

    “65% of today’s students will be employed in jobs that don’t exist yet.” I hear this statistic cited all the time. And it’s important, rhetorically, that it’s a statistic – that gives the appearance of being scientific. Why 65%? Why not 72% or 53%? How could we even know such a thing? Some people cite this as a figure from the Department of Labor. It is not. I can’t find its origin – but it must be true: a futurist said it in a keynote, and the video was posted to the Internet.

    The statistic is particularly amusing when quoted alongside one of the many predictions we’ve been inundated with lately about the coming automation of work. In 2014, The Economist asserted that “nearly half of American jobs could be automated in a decade or two.” “Before the end of this century,” Wired Magazine’s Kevin Kelly announced earlier this year, “70 percent of today’s occupations will be replaced by automation.”

    Therefore the task for schools – and I hope you can start to see where these different predictions start to converge – is to prepare students for a highly technological future, a future that has been almost entirely severed from the systems and processes and practices and institutions of the past. And if schools cannot conform to this particular future, then “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Now, I don’t believe that there’s anything inevitable about the future. I don’t believe that Moore’s Law – that the number of transistors on an integrated circuit doubles every two years and therefore computers are always exponentially smaller and faster – is actually a law. I don’t believe that robots will take, let alone need to take, all our jobs. I don’t believe that YouTube has rendered school irrevocably out-of-date. I don’t believe that technologies are changing so quickly that we should hand over our institutions to entrepreneurs, privatize our public sphere for techno-plutocrats.

    I don’t believe that we should cheer Elon Musk’s plans to abandon this planet and colonize Mars – he’s predicted he’ll do so by 2026. I believe we stay and we fight. I believe we need to recognize this as an ego-driven escapist evangelism.

    I believe we need to recognize that predicting the future is a form of evangelism as well. Sure, it gets couched in terms of science; it is underwritten by global capitalism. But it’s a story – a story that then takes on these mythic proportions, insisting that it is unassailable, unverifiable, but true.

    The best way to invent the future is to issue a press release. The best way to resist this future is to recognize that, once you poke at the methodology and the ideology that underpins it, a press release is all that it is.

    A special thanks to Tressie McMillan Cottom and David Golumbia for organizing this talk. And to Mike Caulfield for always helping me hash out these ideas.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

    Back to the essay

  • Vassilis Lambropoulos – A Review of Aamir Mufti’s “Forget English!”

    Vassilis Lambropoulos – A Review of Aamir Mufti’s “Forget English!”

    Aamir R. Mufti: Forget English! Orientalisms and World Literatures (Harvard University Press, 2016)

    Reviewed by Vassilis Lambropoulos

    This essay was peer-reviewed by the editorial board of b2o: an online journal

    Aamir Mufti’s Forget English! first exposes the regulatory operations of presumably borderless world literature.  Second, it questions the cultural control of presumably egalitarian global English.  Next, it traces the Orientalist administration of presumably universal colonial knowledge.  Readers may agree with all this despite the repeated warnings that these three systems remain closely implicated not only in the objects of study but also in epistemological critique.  Mufti’s most radical proposition comes last:  The basis of the modern national and global cultural field is the institution of literature, that is, the disciplinary literary regimen that includes the askeses of composition, the exercises of pleasure, the practices of interpretation, and the technologies of education.  Mufti’s critique of critique itself as an aesthetic ethics ought to be disturbing.  In what follows, I will repurpose his project, reshuffling its case studies, to foreground its ultimate target, literary ideology, namely, the constitutive antinomies of interpretive freedom, the self-imposed limits and controls of aesthetic understanding.  I will do that by narrating the institutional story of “literature” that underlies his anatomy of world literature.

    Mufti proposes that today, as a popular project of translation, circulation, criticism, and scholarship, “world literature” turns an opaque and unequal process of violent appropriation into a supposedly transparent and equal one of free communication.  Its inviting name occludes “the ways in which contemporary critical thinking unwittingly replicates logics of a longer provenance in the colonial and postcolonial eras” (248).  This is particularly evident in multicultural celebrations of the Global South.  Mufti warns against “the triumphalist ‘We are the World’ tone so clearly discernible in the self-staging of world literature in our times.  In many ways, the rubric ‘postcolonial literature’ as used in the Global North now serves as a means of domesticating those radical energies – and not just linguistic or cultural differences – [for example, the now defunct “Bandung” internationalism] into the space of (bourgeois) world literature as varieties of local practice – as Indian, African, or Middle Eastern literary practices, for instance” (92).  Instead of liberal appeals to “diversity” and its token-like selections, what is needed is “a concept of world literature (and practices of teaching it) that work to reveal the ways in which diversity itself (national, religious, civilizational, continental) is a colonial and Orientalist problematic, one that emerges precisely on the plane of equivalence that is literature” (250).  Sensitivity to diversity and respect for difference may express noble sentiments but do nothing to question the values dominating the literary and academic market.

    Studies by scholars of world literature often “are salutary in having emphasized inequality as the primary structural principle of world literary space rather than difference, which has been the dominant preoccupation in the discussion of world literature since the late eighteenth century, including in Goethe’s late-in-life elaboration of the idea of Weltliteratur.  But they give us no account whatsoever of the exact nature of these forms of inequality and the sociocultural logics through which they have historically been instituted, logics of the institution of inequality that incorporate notions and practices of ‘difference’ and proceed precisely through them” (33).  Whether they are describing a “world system” or a “republic of letters,” these scholars fail “to understand the mutual imbrication of inequality and difference” (33) in their operations, which is as shortsighted as studying autopoiesis in Niklas Luhmann but not Cornelius Castoriadis.  Mufti does not elaborate a new model of doing world literature.  Instead, he examines how this comprehensive approach to culture has been devised and institutionalized for some two hundred fifty years, starting with the observation that its current resurgence is “a post-1989 development, which has appeared against the background of the larger neoliberal attempt to monopolize all possibilities of the international into the global life of capital.  This mode of appearance of the literatures of the Global South in the literary sphere of the North is thus linked to the disappearance of those varieties of internationalism that had sought in various ways to bypass the circuits of interaction, transmission, and exchange of the emergent global bourgeois order in the postwar and early postcolonial decades in the interest of the decolonizing societies of the South” (91).  Mufti seeks “to unmask and to make available for criticism and analysis” (20) world literature in the twenty-first century as the main “field force” (199) of the project to subsume all centrifugal possibilities for an international literature under the monopoly of global cultural capital.  He treats it simultaneously as a “concept,” a “field of study,” and a set of “practices and institutional frameworks” (10), and uses a genealogical approach for a “critical-historical examination of a certain constellation of ideas and practices in its accretions and transformations over time” (19-20).  In what follows I discuss the numerous and wonderful cases much less than the larger historical trajectory produced by this approach.

    The genealogy of world literature begins with the role that “literature as national institution” (3) played “in the emergence of the hierarchies that structure relations between societies in the modern world” (97).  An international literary space first formed in Europe as a structure of rivalries among the traditions (58) emerging in the “intra-European ‘competitive’ vernacularization,” which was later followed by its “colonial absorption and transformation” (76).  The standardization of the vernaculars was a central part of “a project of ethnonational or civilizational nationalism in linguistically diverse and multicultural societies” (148).  This made possible the formation of “literature” as a separate domain of writing and reading out of diverse guild, church, local, and other traditions.  “The nationalization of languages over the past two centuries all over the world . . . transformed former extensive and dispersed cultures of writing . . . into narrowly conceived ethnonational spheres” (146).  Through an extensive philological and interpretive operation “often-overlapping bodies of writing came to acquire, through a process of historicization, distinct personalities as ‘literature’ along national lines” (97).  This is how literature achieves centrality in all constellations of national arts.  “The (now universal) category of literature itself . . . marks this process of assimilation of diverse cultures of writing” (80).  New practices of reading claim existing textual regimes for new purposes and milieus while new elites are also trained to curate them.  “In this process of the acquisition of literary history, the textual corpus acquires, first of all, the attributes of literariness.  That is to say, . . . it enters the world literary system as one among many other literatures, being subject henceforth to the requirements and measures of literariness, replacing the models and modes of evaluation internal to the textual corpus itself.  Furthermore, in the moment of its historicization, it undergoes a shift of orientation within the larger social formation, being reinscribed within a discursive system for the attribution of a literature to a language, understood as the unique possession and mode of expression of a people” (141).

    A foundational act of historicization produced for the first time the terms of a distinct and independent literary history, anchoring a regional tradition in a national logic (143).  When a premodern corpus of undifferentiated writing acquired such a prestigious history, its newly self-regulating “works” entered literary modernity (38-9).   The admission of a corpus “into world literary space as a distinct literary tradition has characteristically taken place since the nineteenth century through its acquisition of a narrative of (‘national’) historical development” (131).  A literary history proper legitimized the literary modernity of a writing tradition by granting it national authority.

    Thus the word “literature” in the term world literature “marks the plane of equivalence and compatibility between historically distinct and particular practices of writing” (240).  The word “world” in “world literature” is a world of nations, the new regimes of sovereignty.  “‘World’ and ‘nation’ are in a determinate relationship of mutual reinforcement here, rather than simply one of contradiction or negation” (77).  When world literature is invoked, it is important to keep in mind “the forms of nationalization of language, literature, and culture installed . . . precisely in and through the world-historical process that is the emergence of world literature” (130).  Literature and nation are mutually authenticating and reinforcing:  They confirm the antiquity and autonomy of one another. “The concept and practices of world literature, far from representing the superseding of national forms of identification of language, literature, and culture, emerged for the first time precisely along the forms of . . . nation-thinking” (97).  In addition, world literature played an important role in the orientation of national literatures toward the global space to which every nation could make its own “distinct national contribution” (112).  This role ought to be placed in an even broader global context since it is important to stress that “the emergence and modes of functioning of world literature, as the space of interaction between and articulation of the ‘national’ or regional literatures, are elements of the much-wider historical process of the emergence of the modern, bourgeois state and its dissemination worldwide, under colonial and semicolonial conditions, as the normative state-form of the modern era” (98).  Literature strengthened the claim of the national state against other state forms by giving voice to its organic character.

    It is in this broader context that Mufti introduces world literature as “the (bourgeois) understanding and experience of the world as an assemblage of ‘literary’ or expressive traditions, whose very ground of possibility was the Orientalist knowledge revolution” (90).  Tracing “the historical dialectic of Orientalism and/as world literature” (38) within literary studies since the late 18th century (99), he highlights the production of entirely new objects of study and insists on the central role “that philological Orientalism played in producing and establishing a method and a system for classifying and evaluating diverse forms of textuality, now all processed and codified uniformly as literature” (80).  If national literature was from the beginning world literature too, this was based on Orientalist assumptions.  Mufti’s strong thesis is that “a genealogy of world literature . . . leads to the classical phase of modern Orientalism in the late eighteenth and early nineteenth centuries, an enormous assemblage of projects and practices that was the ground for the emergence of the concept of world literature as for the literary and scholarly practices it originally referenced” (19).  The project of philological Orientalism, from the microscopic level of the text to the macroscopic one of the library, produces an entire hermeneutics, which “may be understood as a set of processes for the reorganization of language, literature, and culture on a planetary scale that effected the assimilation of heterogeneous and dispersed bodies of writing onto the plane of equivalence and evaluability that is (world) literature, fundamentally transforming in the process their internal distribution and coherence, their modes of authorization, and their relationship to the larger social order and social imaginaries in their place of origin” (145).  In a nutshell, this is how the colonial Orient was collected, archived, studied, and administered, and the regimes of the truth of the empire established and imposed.

    Orientalism should be understood not only as the apparatus that produced the Orient as a domain of interpretation and administration but additionally as “the cultural system that for the first time articulated a concept of the world as an assemblage of ‘nations’ with distinct expressive traditions, above all ‘literary’ ones.  Orientalism thus played a crucial role in the emergence of the cultural logics of the modern bourgeois world, an element of European self-making, first of all” (35).  In this respect, as in others, the author acknowledges his predecessor, Edward Said, whose  “entire effort in Orientalism was (at one level) to argue for the centrality of Orientalism, as cultural logic and enterprise, to the emergence of modern European culture, to Europe’s self-making” (75).  Mufti illustrates his argument with a fascinating example, proposing that the “lyricization of poetry in the West,” that is, the “gradual expansion of . . . ‘lyric’ norms of expression . . .  to encompass” all practices of reading and writing poetry, is “an intercultural and worldwide process” that can be traced back to the “Orientalist ‘discovery’ of the ‘ancient’ poetic traditions of the ‘Eastern nations’” (71).   By considering the Orient/Occident interplay, a genealogy of the early concepts and practices of world literature shows how a “’lyric’ sensibility emerged in Europe at the threshold of modernity in the encounter with ‘Oriental’ verse and, having taken over the universe of poetic expression in the West, became a benchmark and a test for ‘Oriental’ writing traditions themselves, erasing in the process all memory of its intercultural origins” (74).

    Together, philological Orientalism and (adopting a contrast of Erich Auerbach’s, Herder’s “Nordic” national rather than Vico’s “Latinate civilizatory”) philosophical historicism made the new concept of world literature possible.  The combined Orientalist and historicist thinking legitimized both the different manners of being human and “the same manner of being different” (77).  In addition to its contribution to European self-making, Orientalism contributed to world making as well and deserves to be studied “as an articulated and effective imperial system of cultural mapping, which produced for the first time a conception of the world as an assemblage of civilizational entities, each in possession of its own textual and/or expressive traditions” (20).  Oriental mapping structured “the cultural logic of the modern, bourgeois West in its outward orientation” (11) and facilitated the expansionist “transformation of societies on a world scale” (90).  In non-Western societies it fabricated “forms of cultural authority tied to the claim to authenticity of (religious, cultural, and national) ‘tradition’” (27).

    Orientalism was first activated in the production, periodization, and territorialization of India.  “What the early generation of Orientalists encountered on the subcontinent was not one single culture of writing but rather a loose articulation of different, often overlapping but also mutually exclusive, systems based variously on Persian, Sanskrit, and a large number of the vernacular registers, often more than one in a single language, properly speaking” (104-5).  To make sense of this variety and complexity, they re-structured it completely on the basis of the only model they knew and trusted, the historicist narrative of an evolutionary national history.  “The German and eventually pan-European discourse of world literature is thus fundamentally indebted to and predicated on” (104) the British colonial project of Indological philology, launched near the end of the 18th century.  “It is in this manner, by providing the materials and the practices of a new cosmopolitan (as well as indigenist or particularist) conception of the world as linguistic and cultural assemblage, that English began to supplant the neoclassical order on the continent in which above all others French and France had provided the norms for literary production” (109).  Non-Western textual traditions entered the literary space as “literature” through the revolution of the philological knowledge that included the “discovery” of classical languages in the East and the invention of their family tree (58).  Eastern writing practices were absorbed into “literature” when their ancient works were classicized, that is, established as the original tradition of a civilization and arranged as its core national canon.

    Mufti documents “that Orientalist theories of cultural difference are grounded in a notion of indigeneity as the condition of culture – a chronotope, properly speaking, of deep habitation in time – and that therefore nationalism is a fundamentally Orientalist cultural impulse” (37).  What he calls the “chronotope of the indigenous” (74) consists of “spatiotemporal figures of habitation” (74) deeply rooted in both place/territory and time/history (129).  Its territorially common ground validates “the authenticity of tradition” (112).  Consequently, the task of genealogical inquiry is “to give a historical account of the acquisition of literary history . . . by a vast, diffuse, and internally differentiated body of writing … a historical (and critical) account of the . . . ascription of historicality . . . structured around the chronotope of the indigenous” (143).  The Orientalist practice of indigenization standardized the pluralist logic of a pre-modern cultural space into a differentiated linguistic-literary field and ushered it into the colonial “world republic of letters.”

    The “dual process of indigenization” (116) of language, literature, and culture, which incorporates the intertwined strategies of historicism and Orientalism, consisted in classicizing (say, into Sanskrit) a civilization (say, the Indo-Persian one) and vernacularizing (say, into Urdu and Hindi) its cosmopolitanism (say, the subcontinental one).  Thus, through indigenization, Indian writing essentialized itself into a national literature in order to be admitted to the Orientalist canon of world literature and join the global system of different and unique cultures.  The overlapping colonial cultural projects of indigenization “in the name of return to the origin” (173) and vernacularization as recovery of “authenticity” (251) are inseparable from bourgeois modernization (119).  “It is thus in English as cultural system, broadly conceived – namely, in the new Indology and its wider reception in the Euro-American world – that the subcontinent was first conceived of in the modern era as a single cultural entity, a unique civilization with its roots on the Sanskritic and more particularly Vedic texts of the Aryans. . . .  The idea that India is a unique national civilization in possession of a ‘classical’ culture was first postulated on the terrain of literature, that is, in the very invention of the idea of Indian literature in the course of the philological revolution” (109).  The encounter between Oriental philology and Occidental literature produced a national literary model that inspired the Indian national sentiment and identity (115) and created the “institution of Indian literature” (37, 73).

    I have constructed here the chronological genealogy of world literature that drives Mufti’s argument, the linear story that is plotted in his book through complex discussions of practices, notions, and texts.  The “world” of world literature consists of indigenous cultures using vernaculars to sustain literature as their national institution.  Their heterogeneity is predicated on standardized difference, their cosmopolitanism is based on the nation-state, their unity guaranteed by unequal power relations, and they can all be traced to the Orientalist construction of the colonial archive, be it registry, collection, or museum.  Mufti puts into practice with great integrity and virtuosity his conviction that “the task of criticism today is at the very least the untangling and rearranging of the various elements presently congealed into seemingly distinct and autonomous objects of divergent literary histories.  The critical task of overcoming the colonial logics persistently at work in the formation of literary and linguistic identities today is thus indistinguishable from the task of pushing against the multiple identarian assumptions, colonial and Orientalist in nature, of Hindi and Urdu’s mutual and religiously marked distinctness and autonomy.  A post-colonial philology of this literary and linguistic complex can never adequately claim to be produced from a position uncontaminated by the language polemic that now constitutes it and can only proceed by working through its terms.  This secular-critical task, furthermore, corresponds not to the erection of some image of a heterogeneous past but to the elaboration of the contradictory contemporary situation of language and literature itself” (128-9).  Forgetting English is possible only in English.

He advocates resistance both to the colonial gaze and to national authenticity, asking fellow scholars to “forget” (that is, learn to question by working with) not only English and the “world” in world literature but also the prefix in post-colonial.  “If, on the one hand, I urge world literature studies to take seriously the colonial origins of the very concept and practices they take as their object of study, on the other, I hope to question the more or less tacit nationalism of many contemporary attempts to champion the cultural products of the colonial and postcolonial world against the dominance of European and more broadly Western cultures and practices” (53).  This position exemplifies the notion of a contrapuntal criticism that takes into account intertwined perspectives and discourses. “No self-described attempt to ‘return’ to tradition, religious or secular, can sustain its claim to be autonomous of ‘the West’ as Other. . . . No attempt at self-definition and self-exploration can therefore bypass a historical critique of the West and its emergence into this particular position of dominance.  And, in this sense, the critique of the West and the logics of its imperial expansion from a postcolonial location is in fact a self-critique, since this location is at least partially a product of that historical process” (153-4).

While both Orientalism and Occidentalism/Anglicism seek to capture a “one-world” reality, they are caught between the local and the cosmopolitan, the particular and the universal (3).  By consciously operating within these tensions without being at home in either of their poles, the exilic perspective introduced by Auerbach and later advocated by Said can avoid both cosmopolitan detachment and communal narcissism.  An “exilic rethinking of the philology of world literature” (41) would become the basis for a radicalized “philology as homeless practice” (200), for a “historically engaged and linguistically attuned” (241) secular criticism with a “missing homeland” (202).  Supporting neither transnational nor autochthonous social imaginaries, it can provide a dialectically alert account of concrete cultural circumstances “because it captures simultaneously the violent exclusions of the national frame, the material reality of its (physical as well as symbolic) borders, the dire need to overcome its destructive fixations, and its inescapability in the present moment” (194).

In his conclusion, addressing the central case of the post-colonial subcontinent, Mufti supplements the exilic perspective with an additional one, also drawn from twentieth-century experience, which promises to offer intrinsic means of study by drawing explicitly on partition as condition and modality since the “politics of linguistic and literary indigenization is a distinct element in the larger historical process that culminated in the religio-political partition of India in 1947 and is thus at the same time an important element in the history of the worldwide institution of world literature” (38).  In a manner reminiscent of the ways in which post-Heideggerian thought puts metaphysics “under erasure,” Mufti puts the subcontinent under partition.  “In light of the historical analysis of the cultural logic of Orientalism-Anglicism operating in the long, fitful, and ongoing process of bourgeois modernization in the subcontinent that I have attempted here, the task of criticism with respect to the field of culture and society in the region is therefore to adopt partition as method, to enter this field and inhabit the processes of its bifurcation, partition not merely as event, result, or outcome but rather as the very modality of culture, a political logic that inheres in the core concepts and practices of the state” (200).  Not a closed part of the past or even its living memory, partition is “the very condition of possibility of nation-statehood and therefore the ever-renewed condition of national experience in the subcontinent” (201).  The political logic of partition is inherent in the normative majoritarianism of the modern nation-state, which by definition entails the minoritarization of certain groups and practices, a crisis of legitimacy leading to the partition of society (200-1).  “To argue for partition as method is, therefore, to argue for extracting submerged modes of thinking and feeling from the ongoing historical experience that is partition” (202).

Furthermore, in the twenty-first century this condition operates far beyond the subcontinent.  Ours is a time of proliferating boundaries where the traditional institution of the border of the nation-state is undergoing internal and external challenges and transformations, with some of its functions “redistributed throughout social space” (7) and others globalized, turning it into a “universalized institution” (201).  What is the meaning of world literature in a world where borders are traversing urban, regional, national, and transnational environments and literature often functions as a generalized cartography?  With this question I will proceed to indicate just a few of the many fields of inquiry where this book deserves to be studied and activated.

Mufti’s notion of “partition as method,” which enriches the problematic of books like Asia as Method:  Toward Deimperialization (2010) by Kuan-Hsing Chen and Border as Method (2013) by Sandro Mezzadra and Brett Neilson, should be of obvious interest to Border Studies, an interdisciplinary field that since the 1980s has been examining geographical, political, economic, cultural, and other boundaries primarily in Asia, Africa, and Latin America and with an emphasis on matters of migration and gender.  The field started by looking at legal, political, and lexical definitions but has been expanding to consider how borderscapes are narrated, performed, and de-legitimized in the Global South.  An anatomy of world literature would complement current studies of the ways in which, in addition to lands, borderings distribute languages, communities, stories, signs, and jurisdictions.  The order of literature since its national and Oriental origins shows borders working as epistemological devices and markers of relations rather than lines and locations.

An adjacent and even more interdisciplinary field is the study of territories and their flux in the integrated post-industrial world.  Influenced by the work of Deleuze & Guattari (with their interests from “minor literature” to plateaus to nomadology), it has radically shifted emphasis from the structure to the flow of capital and the dominant econo-semiotic system, a shift Mufti has likewise performed for literature.  The “assemblage of enunciation” might fit well with his notion of the writing corpus, and the “plane of immanence” with his “plane of equivalence.”  Most importantly, the Deleuzian “rhythm” of difference and repetition would resonate with the contrapuntal circulation of literature in the post-colonial milieu.

The sociology of culture would benefit greatly from attention to the emergence of the literary sphere and its citizenry, whose members often belong to the national intellectual aristocracy.  Given its interest in the ways in which Bourdieu’s habitus operates according to a logic of practice, it would examine the subfield of literature within the objects, norms, and practices of the cultural field.  Mufti’s work on production and appropriation, and above all on domination through symbolic power, provides numerous examples of the kind of capital gained and interest served by disinterested taste as competence and distinction as performance.

    The quest for cultural capital and symbolic power has been driven by the counter-political ideology of the aesthetic state, a milieu and habitus where aesthetic practices constitute the highest form of politics.  Mufti contributes greatly to an understanding of this regime, including the institutions it establishes and cherishes.  The bourgeois subject, who is the citizen of that ideal state, responds to the functional differentiation of society in distinct borderlands with the democratization of art and the sacralization of high culture. Through the proper literary education, fiction and poetry train readers to achieve a Kantian freedom of aesthetic autonomy by giving the interpretive law to themselves above the constraints of any internal or external partition.

    The path from the sociology of culture to its ideology may lead next to its ethics, namely, art as a spiritual ascesis.  Mufti has discussed the political rationality of the humanities and the aesthetically administered university.  His rigorous genealogical approach may be supplemented by Ian Hunter’s interest in humanism and the pre-national state of the sixteenth and seventeenth centuries as well as the aesthetic discipline of literary cultivation that emerged with Romantic literature and philosophy.  The origins of the philological skills that mobilized Orientalism to create world literature may also lie in a combination of artistic pleasure as worldly ethical competence with literary criticism as a moral practice of the self, that is, in the aesthetico-ethical training of the self in interpretive (self-)problematization which first produced the reader of literature.

In addition to chronicling the emergence of world literature, Aamir Mufti’s Forget English! reflects on “just about the most encompassing cultural concept of our times, the notion of the systematic totality of the expressive productions of nothing less than humanity in its entirety” (252).  Through a genealogy of literary comparison it raises the question of doing comparative humanities on a global level.  That is why it ought to have a broad scholarly and pedagogical impact.  This is not a book that scholars can simply read with profit and then add to their bibliographies and syllabi.  It invites reflection on what it means to compare at a time of universal comparability, that is, when everything is comparable (and also appears contemporary) to everything else.  Rather than seeking to add unknown or neglected materials to our canons, it challenges us to reconfigure canon making itself as well as the way we put together panels, collective volumes, or institutes.  Ultimately, Mufti is proposing that, in addition to new critiques, World Humanities needs new ways of constituting the humanities as a common.

    Vassilis Lambropoulos is the C. P. Cavafy Professor of Modern Greek in the Departments of Classical Studies and Comparative Literature of the University of Michigan.  He is the author of Literature as National Institution (1988).

  • Elizabeth Losh — Hiding Inside the Magic Circle: Gamergate and the End of Safe Space

    Elizabeth Losh — Hiding Inside the Magic Circle: Gamergate and the End of Safe Space

    by Elizabeth Losh, The College of William and Mary

The Gamergate controversy of recent years has brought renewed public attention to issues around online misogyny, as feminist game developers, critics, scholars, and fans of independent video gaming have been targeted by intense campaigns of digital harassment that threaten their fundamental rights to personal privacy, bodily safety, and sexual agency. Feminists under attack by users of the hashtag #GamerGate complain of being silenced, as they report being disciplined for imagined infractions of supposed sexual, social, journalistic, and ludic norms in computational culture with punishing messages of censure, ridicule, exclusion, and violence. As noted by the mainstream news media, extremely aggressive tactics have been deployed, including leaking women’s sensitive private information – such as unlisted addresses and social security numbers – to the web (a practice known as “doxxing”), placing false reports with law enforcement or emergency first responders (a practice known as “swatting”), and highly personalized stalking with rapid escalations of threats of graphic violence that are often sexualized as rape or racialized as lynching. Although it may be important for the eloquent first-person testimony of the terrorized women themselves to be given priority as speech acts that command attention in resisting prevailing misogyny, the women’s antagonists often are allowed to remain invisible. Furthermore, allies presuming to advocate for the feminist victims of Gamergate may not adequately honor the wishes for peace, privacy, and closure that those experiencing online violence may express (Quinn 2015). This essay attempts to examine the larger discursive context of Gamergate and why hardcore gamers who were fans of AAA videogames – often with military storylines and first-person shooter game mechanics – constructed a seemingly illogical and paranoid explanatory theory about so-called “social justice warriors” (Bokhari and Yiannopoulos 2015) or “SJWs” pursuing unfair advantage to sway the game industry.

    How do we understand how Gamergaters’ claims for noninterference and sovereignty in game worlds and online forums function alongside their claims for no-holds-barred investigations and public debates? Common rhetorical tactics deployed by Gamergaters include using rights-based language to further this specific variant of the men’s rights movement (Esmay 2014) and making appeals to the values of a supposedly rational public sphere (MSMPlan 2015). As these hardcore gaming fans deny the materiality, affect, embodiment, labor, and situatedness of new media, they also affirm positive notions about the exceptionalism of a realm defined – in Nicholas Negroponte’s terms – by bits rather than atoms. Gamergaters are particularly vehement in denying that “online violence” is a possibility with tweets such as “>violence >online pick one” and “will you please point me to the online killing fields where all the bodies from violence online are kept?” (Wernimont 2015). The Gamergate vision of digital culture is one of disembodied and immaterial interactions in which emotional harm is considered to be nonviolent.

According to Gamergate accounts, the assumption that hardcore gamers representing masculine white privilege were under attack was also apparently buttressed by a number of online articles by game journalists suggesting that the species was endangered and soon to be extinct. Gamers were declared “over” (Alexander 2014), at their “end” (Golding 2014), or facing the “death” of their collective identity (Plunkett 2014). The arguments made for years by feminist game collectives for pursuing the large market share in lower-status “casual” games, often played by women, had finally seemed to create inroads for independent developers. At the same time Gamergaters described their defensive position as a response to what they often characterized as a feminist “incursion” or “invasion” of gaming that was conceptualized as a substantive attack or threat to gamers. So-called “men’s rights” proponents – who may characterize themselves as “Men’s Human Rights Activists” – differentiated themselves from the distributed and heterogeneous population of gamers but also proclaimed that “the same people attacking Gamergate have been attacking us for years, using exactly the same tactics” (Esmay 2014). According to Breitbart columnist Yiannopoulos (2014a), “cultural warriors” arrived on the scene of gaming like “genocidal, psychopathic aliens in Independence Day;” these “social justice warriors” allegedly attempted to colonize a diverse community, but their “killjoy” advances were repelled and defenders declared them “not welcome in the gaming community.” According to this columnist, “politeness and persistence” had supposedly guaranteed victory in “the culture wars against guilt-mongerers, nannies, authoritarians and far-Left agitators.” While Sara Ahmed (2010) has explicitly called for self-identified “feminist killjoys” to disrupt the perpetuation of patriarchal false consciousness and the enforcement of positive affect in society, the perceived opponents of Gamergate are often cast as the aggressors despite what may be deep desires to participate in the gaming communities that exclude them.

Decades before Gamergate, the Dutch theorist of play Johan Huizinga (2014) described what he called the “magic circle” of the temporary world constituted by a game, which appears to function as an isolated “consecrated spot” within which “special rules obtain” for performances apart from everyday concerns (10). Gamergaters often use similar terminology to argue that game spaces are meant to serve as a refuge from real-world behavioral constraints and the restrictions of social roles, as in the case of one Breitbart blogger seeking to exclude “angry feminists” and “unethical journalists” from interference with game play.

    Gamers, as dozens of readers have told me in the relatively short time I have been covering the controversy now called #GamerGate, play games to escape the frustrations and absurdities of everyday life. That’s why they object so strongly to having those frustrations injected into their online worlds. The war in the gaming industry isn’t about right versus left, or tolerance versus bigotry: it’s between those who leverage video games to fight proxy wars about other things, introducing unwanted and unwarranted tension and misery, and those who simply want to enjoy themselves. (Yiannopoulos 2014a)

Gamergate advocates claim that video games should be arenas where gamers can assert their sovereignty and self-determination, spaces that can’t be “leveraged” or annexed to “fight proxy wars” by non-gamer outsiders.

According to Huizinga (2014), the arena of game play is characterized by the freedom of voluntary participation, disinterested behavior, and an opposition to serious conduct. Similar criteria are also often presented as premises for action in the rhetoric of Gamergate enthusiasts in their comments on various sites for public debate. For example, feminist game developers and critics may be accused of coercing and manipulating potential allies among journalists through sexual liaisons, romantic promises, or appeals to social justice that invoke guilt and shame. Feminist opponents of Gamergaters are also characterized on sites such as Breitbart as “self-promoters” and “opportunists” and labeled as “egotistical” people who “beg for sympathy and cash” (Yiannopoulos 2014b). Thus, according to the logic of free choice, feminist “social justice-oriented art” in digital culture is aimed at “robbing players of agency and individualism” in every possible kind of engagement (Yiannopoulos 2014b).

Personal freedom and a separation from material interests or a profit motive are often cited as ethical values shared by Gamergate, although many of its tactics are not at all solemn or high-minded. Active Gamergaters on the Escapist and 8chan emphasize their own diverse and distributed structure, and these anarchic swarms of participants take action “for the lulz,” much as members of Anonymous and 4chan have engaged in outing and calling-out campaigns (Coleman 2014). Images of feminist gamers are altered with editing software, phrases like “online violence” are mocked, and fake identities are manufactured with puns and inside jokes. For example, in a crowd-funding effort to promote women in games who disavowed feminist “SJWs,” Gamergate forum members created an elaborate green-eyed and hoodie-wearing fictional persona intended to represent a pro-Gamergate libertarian “everywoman.” The avatar dubbed “Vivian James” wears the four-leafed clover of 4chan, “tough-loves video games,” and “loathes dishonesty and hypocrisy” (“The Birth of Vivian” 2015).

    While Gamergaters emphasize “personal responsibility” and “individual agency” (Yiannopoulos 2014b) as values, feminist critics tend to emphasize interdependence and states of being always-already subject to the coercions of others. In Huizinga’s (2014) terms, feminists inside the magic circle may be perceived as “spoil-sports” who must be “ejected” from the “community,” because they are attempting to break the magic world by failing to acknowledge its misogynistic conventions (11-12). As Anastasia Salter (2016) notes, in Huizinga’s analysis the spoil-sport is most visible in “boys’ games,” thereby establishing solidarity around youthful masculinity as the norm.

When feminists discuss misogyny in different venues for conversation among networked publics – in game forums, blogs, or vlogging communities, and even within live multi-player gaming itself – they are cast as a disruptive presence. In this logic, social justice warriors must be treated as aggressors to be repulsed from the magic circles of game worlds so that Gamergaters can reclaim these spaces, return them to their proper exceptional status, and thus maintain their security from real-world incursions.

Of course, the concept of “safe space” has been central to the history of the women’s liberation movement and its associated consciousness-raising efforts. After all, feminists have reasoned that safe space might be necessary to explore intimate issues about sexuality and reproductive health – which might even include techniques for gynecological self-examination championed by foundational texts like Our Bodies, Ourselves – and safe space would also be needed to share confidences about personal histories of rape, domestic violence, and other forms of gendered trauma. How safe space is constituted can be developed along a number of different axes. For example, as awareness about “microaggressions” – a term used to describe the automatic or unconscious utterance of subtle insults (Solorzano, Ceja, and Yosso 2000) – has proliferated, participants at feminist events may be asked to be mindful of their own assumptions, privileges, and power relations in social gatherings. The full sensorium of potential kinds of assault may also be invoked in defining safe spaces, so those who speak loudly or wear scent may be barred from these activities to protect those intolerant of, averse to, or allergic to certain stimuli.

Feminists themselves have been reevaluating the assumed need for safe space for a variety of reasons. While media outlets grappling with the concept of “trigger warnings” may characterize any special treatment of vulnerable individuals as coddling or “hiding from scary ideas” (Shulevitz 2015), many feminists are concerned about how the gestures of exclusion mandated by protective impulses enforce particular norms counter to the goal of empowerment. Some argue that “brave spaces” that encourage public acts of asserting identity or declaring solidarity may be more productive than private “safe spaces” (Fox and Fleischer 2004). Homogeneous safe spaces designed for the security of cisgendered whites may be criticized as excluding transgender people (Browne 2009) or people of color (Halberstam 2014). As Betty Sasaki (2002) observes, “safety” can become “the code word for the absence of conflict, a tacit and seductive invitation to collude with the unspoken ideological machinery of the institutional family” (47). And Anne Donadey (2009) points out the irony “that radical feminist pedagogy tends to replicate the assumptions of the bourgeois concept of the public sphere” (214).

In addition to using the #Gamergate and #SJW (for “social justice warrior”) hashtags on social media platforms such as Twitter, Gamergate adherents frequently use #NotYourShield, which asserts that feminists shouldn’t be shielded from criticism merely because they claim alliances with underrepresented groups such as women or minorities, since members of these groups might not identify with feminism or might not feel exploited, disenfranchised, or excluded from hardcore gaming communities. #NotYourShield allies of Gamergate may embrace the quintessential hardcore gamer identity of AAA titles with military themes, or may indicate that they are content with conventionally feminized casual games played on mobile devices and don’t want to interfere with so-called “real” games. While Gamergaters may protect the borders of their own magic circles, they criticize those who claim feminist discourse operates in safe spaces devoid of challenges from opponents. Affixing the #NotYourShield piece of metadata to a message supports Gamergaters’ contention that feminists use the victimization of women and people of color to shield themselves unfairly from rebuttals or tests of truth claims. In videos such as “#NotYourShield – We Are Gamers,” choruses of voices are carefully curated to emphasize “corruption” and “censorship” as features of feminism, and “transparency” and call-out culture as features of Gamergate.

    Although Huizinga’s (2014) magic circle may be more open to public spectatorship than the private sphere of feminist safe space, it is also a zone of exception that is marked off by “secrecy” and “disguise,” according to Homo Ludens (13). Even if the rules for the magic circle are assumed to be uncontested, and the space of play is accepted as apart from the everyday world, the exceptional territory of game play could be a space of less violence (if mockery of authoritarian rulers is tolerated in the case of the Bakhtinian carnivalesque) or more violence (if physical injuries from contact sports are permitted that would normally be prosecuted as assault). Nonetheless, according to Edward Castronova (2007), the membrane of the magic circle “can be considered a shield of sorts, protecting the fantasy world from the outside world. The inner world needs defining and protecting because it is necessary that everyone who goes there adhere to the different set of rules” (147).

Feminist game critics have begun to question Huizinga’s (2014) concept of a zone of exceptionalism, particularly as the legal, economic, and social consequences of game play are manifested in a variety of “real world” contexts. For example, Mia Consalvo (2009) challenges Castronova’s belief that “fantasy worlds” are a separate domain: “even as he might wish for such spaces, such worlds must inevitably leave the hands of their creators and are then taken up (and altered, bent, modified, extended) by players or users—indicating that the inviolability of the game space is a fiction, as is the magic circle, as pertaining to digital games” (411). Within game spaces of conflict and collaboration, players may bring different agendas into the magic circle, and thus it might be more difficult than Huizinga (or Castronova) imagines to reach consensus about the common rules of play. For example, when a guild of players in World of Warcraft decided to hold a funeral in an area for player-versus-player combat, other participants justified attacking the solemn ceremony in a coordinated raid on the grounds of asserting existing play conventions (Losh 2009). Consalvo further claims that the static, formalist vision of bounded play articulated by Huizinga and his disciples, grounded as it is in structuralist theory, ignores the fact that context is constantly being evaluated by players. Instead of the magic circle, she posits that players “exist or understand ‘reality’ through recourse to various frames” (415).

For women, queer and transgender persons, and people of color who identify as gamers, neither magic circle nor safe space often seems descriptive of the harsh settings of their game play experiences. As Lisa Nakamura (2012) observes, playing as a woman, a person of color, or a queer person requires extraordinary game skills and talent at a level of hyper-accomplishment because of the extremely rigorous “difficulty setting” of playing in an identity position other than straight white male. Unfortunately, being an exceptional individual in an exceptional space is often punished rather than rewarded. Moreover, as a woman of color, Shonte Daniels (2014) has insisted that “gaming never was a safe space for women” because “their identity makes them vulnerable to threats or harassment.” However, she also speculates that Gamergate may prove to be “both a blessing and a curse,” given how much attention to online misogyny has been generated by the intensity and egregiousness of Gamergate behavior.

    Many date the Gamergate controversy from fall 2014 – when harassment of dozens of feminists in the videogame industry, including game developers Zoë Quinn and Brianna Wu and cultural critic Anita Sarkeesian, made headlines. However, online misogyny and gender-based aggression have had a long history in digital culture that goes back to bulletin boards, MOOs, and MUDs and the existence of virtual rape in early forms of cyberspace (Dibbell 1998). To coordinate the current campaign of harassment, IRC channels and online forums such as Reddit, 4chan, and 8chan were used by an anonymous and amorphous group that came to be represented by the Twitter hashtag #GamerGate after actor Adam Baldwin deployed a familiar suffix associated with prominent political cover-ups. According to the Wikipedia entry, Gamergate “has been described as a manifestation of a culture war over gaming culture diversification, artistic recognition and social criticism of video games, and the gamer social identity. Some of the people using the Gamergate hashtag allege collusion among feminists, progressives, journalists and social critics, which they believe is the cause of increasing social criticism in video game reviews” (“Gamergate Controversy” 2015).

It is worth noting that Wikipedia’s handling of its own distributed labor practices in defining Gamergate has had a contentious history that included a personal invitation to Gamergaters from Wikipedia founder Jimmy Wales to contribute to improving the Gamergate article (Wales 2014), a pointed rejection of financial contributions to Wikipedia from Gamergaters (“So I Decided to Email Jimbo” 2015), and a defense of banning Wikipedia editors perceived as biased against Gamergate (Beaudette 2015). Ironically, during this intense period of engagement with the “toxic” participants of Gamergate eventually dismissed by Wales, Wikipedia often deployed a rhetoric about volunteerism, disinterested conduct, and playing by a neutral set of rules that paralleled similar rhetorical appeals from Gamergaters.

    Attention to this recent controversy – about who is a gamer and what is a game – has already generated a literature of scholarly response that focuses, as this essay does, on Gamergate rhetoric itself. Shira Chess and Adrienne Shaw’s (2015) essay, “A Conspiracy of Fishes,” analyzes how a particular cultural moment in which “masculine gaming culture became aware of and began responding to feminist game scholars” produced conspiratorial discourses with a specific internal logic that shouldn’t be dismissed as nonsensical:

    It is less useful to consider the validity of a conspiracy in terms of actual persecution, and is more potent if we look at it in terms of a combination of perceived persecution and an examination of the anxieties that the conspiracy is articulating. From this perspective, we can look at gaming culture as a somewhat marginalized group: For years those who have participated in gaming culture have defended their interests in spite of claims by popular media and (some) academics blaming it for violence, racism, and sexism. A perceived threat opens a venue for those who feel their culture has been misunderstood—regardless of whether they are the oppressors or the ones being oppressed. It is easy to negate and mark the claims of this group as inconsequential, but it is more powerful to consider the cultural realities that underline those claims. (217)

As Chess and Shaw point out, the gamer identity may function in the context of other kinds of intersectional identities, in which subjects for whom the personal is political can be imagined as oppressors in one context and as the oppressed in another.

In addition to deploying the primary strategy of constructing a persecution narrative aimed at a marginalized group, Gamergate is also concerned with the secondary strategy of mapping supposed networks of influence across publication venues, media genres, knowledge domains, political spheres, and economic sectors. The resulting Gamergate infographics seem to have begun with visualizations that were often reminiscent of Wanted posters, in which names and photographs of individual offenders were clustered in particular interest areas. For example, 4chan assembled a list of “SJW Game Journalists” that was republished on Reddit and that goes far beyond the initial allegations of impropriety about game reviewing at Kotaku to target writers at over a dozen other publications.

As Gamergaters go down the “rabbit hole” of exploring possible connections and exposing hidden networks, they eventually identify political and educational institutions as agents in the conspiracy, with a particular focus on DiGRA, the Digital Games Research Association, which was founded in 2003 and holds an international conference each year. One diagram shows the tentacles of DiGRA extending into online venues for gaming news and reviews, such as Kotaku, Gamasutra, and Polygon, as well as mainstream publications with a print tradition, such as The Guardian and TIME, and conference venues for many AAA games, such as the annual Game Developers Conference (GDC), which was founded in 1988 with a focus on fostering more creativity in the industry. Pictures of offender/participants in the network continued to be featured in this denser and more recursive form of network mapping, as though facial recognition would be a key literacy for Gamergaters.

It is worth noting that many feminists would describe DiGRA as far from being a haven from misogyny, given existing biases in game studies that may privilege academics with ties to computer science, corporate start-ups, or other male-dominated fields. Members of the feminist game collective Ludica have described strong reactions of denial when they declared at DiGRA in 2007 that the “power elite of the game industry is a predominately white, and secondarily Asian, male-dominated corporate and creative elite that represents a select group of large, global publishing companies in conjunction with a handful of massive chain retail distributors” and thus constitutes a “hegemonic” power that “determines which technologies will be deployed, and which will not; which games will be made, and by which designers; which players are important to design for, and which play styles will be supported” (Fron et al. 2007). The rhetoric of the Ludica manifestos about how games and gamers were being defined too rigidly by an industry enamored of AAA titles often ran counter to the origin stories of organizations such as GDC and SIGGRAPH.

The third key strategy of Gamergaters – in addition to fabricating the persecution narrative and the influence maps – is formulating threats of financial retaliation. If liberal members of the press and academic and professional associations in game studies and game development benefit from a supposed flow of money, social capital, and privileged access to career advancement, libertarian Gamergaters will thwart them with economic threats. This creates a paradoxical dynamic in which Gamergaters both assert an ethos of economic disinterest – because gaming is supposed to be a non-profit/non-wage activity that is separate from accumulation of capital in the real world – and seek to exercise their collective power to crowdfund sympathizers and to boycott, divest from, and freeze the assets of feminist allies and ally organizations. Advertisers are besieged with consumer complaints about the ethics of reporting in game publications, university employees are reported to administrators with accusations about frittering away public funds, and even donations to Wikipedia are withdrawn by indignant Gamergaters.

Because feminists supposedly use financial interest as a lever, Gamergaters must also use financial interest as a way to assert the fairness, neutrality, and civility of a rational public sphere, which is tied to their fourth strategy: policing discourse. In regulating language in order to keep it freely flowing in a neoliberal marketplace of ideas, so that the best notions will be the most valued, Gamergaters very explicitly refuse to tolerate hyperbolic and hysterical feminist “strawmanning” and “insulting.” Even as Gamergaters insist, counterfactually, that harassers are a statistically insignificant fraction of their movement given its power to terrorize targets and dominate channels of communication, language reminiscent of Robert’s Rules of Order can be as commonly encountered in Gamergate discourses as more stereotypical forms of trolling.

    This does not mean that the campaigns of Gamergate to construct us-and-them narratives, to make explicit and to visualize connections in social networks, to block some financial transactions and facilitate others, and to regulate discourse with structures of rational dialogue, leveling effects, and tone policing are not misogynistic. They defend and enable doxxing, swatting, and stalking behaviors that undermine the very barriers between virtual reality and material existence that are central to their contradictory ideologies of exceptionalism and common jurisdiction.

Nurturing diversity among game players and developers (Fron et al. 2007) has been a work in progress for the better part of a decade, but in the wake of Gamergate, hundreds of prominent signatories who asserted the “right to play games, criticize games and make games without getting harassed or threatened” published an “open letter to the gaming community” (IGDA 2014). The fact that this pointed defense of feminist gamers, critics, and designers also used rights-based language might be instructive for better understanding the discursive context of Gamergate as well.

    The Italian biopolitical philosopher Roberto Esposito (2010, 2011) has theorized that two conflicting modalities of “community” and “immunity” operate when members either accept or resist the obligations of the social contract. Looking at the rhetoric of Gamergaters about the magic circle and how they caricature the rhetoric of feminists about safe space, we see how these oppositions are underexamined, and we can ask why opportunities for reflection and reflexive thinking about intersectionality are being foreclosed.

    Works Cited

    • Ahmed, Sara. 2010. The Promise of Happiness. Durham: Duke University Press.
    • Alexander, Leigh. 2014. “‘Gamers’ Don’t Have to Be Your Audience. ‘Gamers’ Are Over.” Gamasutra, August 28. http://www.gamasutra.com/view/news/224400/Gamers_dont_have_to_be_your_audience_Gamers_are_over.php.
    • Bailey, Moya. 2015. “#transform(ing)DH Writing and Research: An Autoethnography of Digital Humanities and Feminist Ethics.” Digital Humanities Quarterly 9, no. 2.
    • Beaudette, Philippe. 2015. “Civility, Wikipedia, and the Conversation on Gamergate.” Wikimedia Blog. January 27. http://blog.wikimedia.org/2015/01/27/civility-wikipedia-Gamergate/.
    • Bokhari, Allum, and Milo Yiannopoulos. 2015. “Entertainment Industry Says ‘No More’ to Social Justice Warriors.” Breitbart. July 20. http://www.breitbart.com/big-hollywood/2015/07/20/enough-entire-entertainment-industry-says-no-more-to-social-justice-warriors/.
    • Browne, Kath. 2009. “Womyn’s Separatist Spaces: Rethinking Spaces of Difference and Exclusion.” Transactions of the Institute of British Geographers, New Series, 34 (4): 541–56.
    • Castronova, Edward. 2007. Synthetic Worlds: The Business and Culture of Online Games. Chicago: University of Chicago Press.
    • Chess, Shira, and Adrienne Shaw. 2015. “A Conspiracy of Fishes, Or, How We Learned to Stop Worrying About #Gamergate and Embrace Hegemonic Masculinity.” Journal of Broadcasting & Electronic Media 59, no. 1: 208–20.
    • Coleman, Beth. 2011. Hello Avatar: Rise of the Networked Generation. Cambridge, MA: MIT Press.
    • Coleman, E. Gabriella. 2014. Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous. Brooklyn, NY: Verso.
    • Consalvo, Mia. 2009. “There Is No Magic Circle.” Games and Culture 4, no. 4: 408–17.
    • Daniels, Shonte. 2014. “Gaming Was Never a Safe Space for Women.” RH Reality Check. November 4. http://rhrealitycheck.org/article/2014/11/10/gaming-never-safe-space-women/.
    • Dibbell, Julian. 1998. “A Rape in Cyberspace.” http://www.juliandibbell.com/articles/a-rape-in-cyberspace/.
    • Donadey, Anne. 2009. “Negotiating Tensions: Teaching about Race in a Graduate Feminist Classroom.” In Feminist Pedagogy: Looking back to Move Forward, edited by Robbin Crabtree, David Alan Sapp, and Adela C. Licona, 209–29. Baltimore, MD: Johns Hopkins University Press.
    • Esmay, Dean. 2014. “Keeping up with #Gamergate.” A Voice for Men. October 16. https://lockerdome.com/7754206970916417.
• Esposito, Roberto. 2010. Communitas: The Origin and Destiny of Community. Stanford, CA: Stanford University Press.
• ———. 2011. Immunitas: The Protection and Negation of Life. Cambridge and Malden, MA: Polity.
• Fox, D. L., and C. Fleischer. 2004. “Beginning Words: Toward ‘Brave Spaces’ in English Education.” English Education 37, no. 1: 3–4.
    • Fron, Janine, Tracy Fullerton, Jacquelyn Ford Morie, and Celia Pearce. 2007. “The Hegemony of Play.” In Proceedings, DiGRA: Situated Play, Tokyo, September 24-27, 2007, 309–18. Tokyo, Japan. http://www.digra.org/dl/db/07312.31224.pdf.
    • “Gamergate Controversy.” 2015. Wikipedia, the Free Encyclopedia. https://en.wikipedia.org/w/index.php?title=Gamergate_controversy&oldid=682713753.
    • Golding, Dan. 2014. “The End of Gamers.” Dan Golding. August 28. http://dangolding.tumblr.com/post/95985875943/the-end-of-gamers.
    • Halberstam, Jack. 2014. “You Are Triggering Me! The Neo-Liberal Rhetoric of Harm, Danger and Trauma.” Bully Bloggers. July 5. https://bullybloggers.wordpress.com/2014/07/05/you-are-triggering-me-the-neo-liberal-rhetoric-of-harm-danger-and-trauma/.
    • Huizinga, Johan. 2014. Homo Ludens: A Study of the Play-Element in Culture. Mansfield Centre, CT: Martino Fine Books.
    • “IGDA Developer Satisfaction Survey Summary Report Available – International Game Developers Association (IGDA).” 2015. https://www.igda.org/news/179436/IGDA-Developer-Satisfaction-Survey-Summary-Report-Available.htm (accessed September 23, 2015).
• Jacobs-Huey, Lanita. 2006. From the Kitchen to the Parlor: Language and Becoming in African American Women’s Hair Care. Oxford, UK, and New York, NY: Oxford University Press.
    • Koebler, Jason. 2015. “Dear Gamergate: Please Stop Stealing Our Shit.” Motherboard. http://motherboard.vice.com/read/dear-Gamergate-please-stop-stealing-our-shit (accessed September 24, 2015).
    • Levmore, Saul, and Martha Craven Nussbaum. 2010. The Offensive Internet: Speech, Privacy, and Reputation. Cambridge, MA: Harvard University Press.
    • Losh, Elizabeth. 2009. “Regulating Violence in Virtual Worlds: Theorizing Just War and Defining War Crimes in World of Warcraft.” Pacific Coast Philology 44, no. 2: 159–72.
    • MSMPlan. 2015. “The Flaws in Adrienne Shaw’s Paper on Gamergate and Conspiracy Theories.” Medium. March 18. https://medium.com/@MSMPlan/the-flaws-in-adrienne-shaw-s-paper-on-Gamergate-and-conspiracy-theories-7fc91df43bc.
• Nakamura, Lisa. 2012. “Queer Female of Color: The Highest Difficulty Setting There Is? Gaming Rhetoric as Gender Capital.” Ada: A Journal of Gender, New Media & Technology 1, no. 1. http://adanewmedia.org/2012/11/issue1-nakamura/.
    • Negroponte, Nicholas. 1995. Being Digital. New York: Knopf.
    • Plunkett, Luke. 2014. “We Might Be Witnessing The ‘Death of An Identity.’” Kotaku, August 28. http://kotaku.com/we-might-be-witnessing-the-death-of-an-identity-1628203079.
    • Quinn, Zoe. 2015. “August Never Ends.” Quinnspiracy Blog. January 11. http://ohdeargodbees.tumblr.com/post/107838639074/august-never-ends.
• Salter, Anastasia. 2016. “Code before Content? Brogrammer Culture in Games and Electronic Literature.” Presented at the Electronic Literature Organization Conference, University of Victoria, June 10.
    • Sargon of Akkad. 2014. A Conspiracy Within Gaming #Gamergate #NotYourShield. https://www.youtube.com/watch?v=yJyU7RSvs_s.
    • Sasaki, Betty. 2002. “Toward a Pedagogy of Coalition.” In Twenty-First-Century Feminist Classrooms: Pedagogies of Identity and Difference, edited by Amie A. Macdonald and Susan Sánchez-Casal, 31–57. New York, NY: Palgrave Macmillan.
    • Shield Project. 2014. #NotYourShield – We Are Gamers. https://www.youtube.com/watch?v=SYqBdCmDR0M#t=81.
    • Shulevitz, Judith. 2015. “In College and Hiding From Scary Ideas.” The New York Times, March 21. http://www.nytimes.com/2015/03/22/opinion/sunday/judith-shulevitz-hiding-from-scary-ideas.html.
    • “So I Decided to Email Jimbo…” 2015. Reddit. Accessed September 25. https://www.reddit.com/r/KotakuInAction/comments/2pphuo/so_i_decided_to_email_jimbo/cmyzva7?context=3.
    • Solorzano, Daniel, Miguel Ceja, and Tara Yosso. 2000. “Critical Race Theory, Racial Microaggressions, and Campus Racial Climate: The Experiences of African American College Students.” The Journal of Negro Education 69, no. 1/2: 60–73.
    • “The Birth of Vivian.” 2015. http://i.imgur.com/FdqKFwu.jpg (accessed September 27, 2015).
    • Wales, Jimmy. 2014. “I Have an Idea for pro #Gamergate Folks of Good Will. Go to http://Gamergate.wikia.com/Proposed_Wikipedia_Entry … and Write What You Think Is an Appropriate Article.” Microblog. @jimmy_wales. November 12. https://twitter.com/jimmy_wales/status/532624325694992385?ref_src=twsrc%5Etfw.
    • Wernimont, Jacqueline. 2015. “A ‘Conversation’ about Violence against Women Online (with Images, Tweets) · Jwernimo.” Storify. https://storify.com/jwernimo/a-conversation-about-violence-against-women-online (accessed September 23, 2015).
    • Yiannopoulos, Milo. 2014a. “Gamergate: Angry Feminists, Unethical Journalists Are the Ones Not Welcome in the Gaming Community.” Breitbart. September 14. http://www.breitbart.com/big-hollywood/2014/09/15/the-Gamergate-movement-is-making-terrific-progress-don-t-stop-now/.
    • ———. 2014b. “The Authoritarian Left Was on Course to Win the Culture Wars… Then Along Came #Gamergate.” Breitbart. November 12. http://www.breitbart.com/london/2014/11/12/the-authoritarian-left-was-on-course-to-win-the-culture-wars-then-along-came-Gamergate/.
• Zachary Loeb — What Technology Do We Really Need? — A Critique of the 2016 Personal Democracy Forum

Zachary Loeb — What Technology Do We Really Need? — A Critique of the 2016 Personal Democracy Forum

    by Zachary Loeb

    ~

    Technological optimism is a dish best served from a stage. Particularly if it’s a bright stage in front of a receptive and comfortably seated audience, especially if the person standing before the assembled group is delivering carefully rehearsed comments paired with compelling visuals, and most importantly if the stage is home to a revolving set of speakers who take turns outdoing each other in inspirational aplomb. At such an event, even occasional moments of mild pessimism – or a rogue speaker who uses their fifteen minutes to frown more than smile – serve to only heighten the overall buoyant tenor of the gathering. From TED talks to the launching of the latest gizmo by a major company, the person on a stage singing the praises of technology has become a familiar cultural motif. And it is a trope that was alive and drawing from that well at the 2016 Personal Democracy Forum, the theme of which was “The Tech We Need.”

Over the course of two days some three-dozen speakers and a similar number of panelists gathered before a rapt and appreciative audience to opine on the ways in which technology is changing democracy. The commentary largely aligned with the sanguine spirit animating the founding manifesto of the Personal Democracy Forum (PDF) – which frames the Internet as a potent force set to dramatically remake and revitalize democratic society. As the manifesto boldly decrees, “the realization of ‘Personal Democracy,’ where everyone is a full participant, is coming” and it is coming thanks to the Internet. The two days of PDF 2016 consisted of a steady flow of intelligent, highly renowned, well-meaning speakers expounding on the conference’s theme to an audience largely made up of bright, caring individuals committed to answering that call. To attend an event like PDF and not feel moved, uplifted or inspired by the speakers would be a testament to an empathic failing. How can one not be moved? But when one’s eyes are glistening and when one’s heart is pounding it is worth being wary of the ideology in which one is being baptized.

To critique an event like the Personal Democracy Forum – particularly after having actually attended it – is something of a challenge. After all, the event is truly filled with genuine people delivering (mostly) inspiring talks. There is something contagious about optimism, especially when it presents itself as measured optimism. And besides, who wants to be the jerk grousing and grumbling after an activist has just earned a standing ovation? Who wants to cross their arms and scoff that the criticism being offered is precisely the type that serves to shore up the system being criticized? Pessimists don’t often find themselves invited to the after party. Thus, insofar as the following comments – and those that have already been made – may seem prickly and pessimistic, they are not meant as an attack upon any particular speaker or attendee. Many of those speakers truly were inspiring (and that is meant sincerely), many speakers really did deliver important comments (that is also meant sincerely), and the goal here is not to question the intentions of PDF’s founders or organizers. Yet prominent events like PDF are integral to shaping the societal discussions surrounding technology – and therefore it is essential to be willing to go beyond the inspirational moments and ask: what is really being said here?

    For events like PDF do serve to advance an ideology, whether they like it or not. And it is worth considering what that ideology means, even if it forces one to wipe the smile from one’s lips. And when it comes to PDF much of its ideology can be discovered simply by dissecting the theme for the 2016 conference: “The Tech We Need.”

    “The Tech”

What do you (yes, you) think of when you hear the word technology? After all, it is a term that encompasses a great deal, which is one of the reasons why Leo Marx (1997) was compelled to describe technology as a “hazardous concept.” Eyeglasses are technology, but so too is Google Glass. A hammer is technology, and so too is a smart phone. In other words, when somebody says “technology is X” or “technology does Q” or “technology will result in R” it is worth pondering whether technology really is, does, or results in those things, or if what is being discussed is really a particular type of technology in a particular context. Granted, technology remains a useful term; it is certainly a convenient shorthand (one which very many people [including me] are guilty of occasionally deploying), but in throwing the term technology about so casually it is easy to obfuscate as much as one clarifies. At PDF it seemed as though a sentence was not complete unless it included a noun, a verb and the word technology – or “tech.” Yet what was meant by “tech” at PDF almost always meant the Internet or a device linked to the Internet – and qualifying this by saying “almost” is perhaps overly generous.

Thus the Internet (as such), web browsers, smart phones, VR, social networks, server farms, encryption, other social networks, apps, and websites all wound up being pleasantly melted together into “technology.” When “technology” encompasses so much a funny thing begins to happen – people speak effusively about “technology” and only name specific elements when they want to single something out for criticism. When technology is so all encompassing who can possibly criticize technology? And what would it mean to criticize technology when it isn’t clear what is actually meant by the term? Yes, yes, Facebook may be worthy of mockery and smart phones can be used for surveillance, but insofar as the discussion is not about the Internet but “technology” on what grounds can one say: “this stuff is rubbish”? For even if it is clear that the term “technology” is being used in a way that focuses on the Internet, anyone who starts to seriously go after technology will inevitably be confronted with the question “but aren’t hammers also technology?” In short, when a group talks about “the tech” but by “the tech” only means the Internet and the variety of devices tethered to it, what happens is that the Internet appears as being synonymous with technology. It isn’t just a branch or an example of technology, it is technology! Or to put this in sharper relief: at a conference about “the tech we need” held in the US in 2016 how can one avoid talking about the technology that is needed in the form of water pipes that don’t poison people? The answer: by making it so that the term “technology” does not apply to such things.

The problem is that when “technology” is used to only mean one set of things it muddles the boundaries of what those things are, and what exists outside of them. And while it does this it allows people to confidently place trust in a big category, “technology,” whereas they would probably have been more circumspect if they were just being asked to place trust in smart phones. After all, “the Internet will save us” doesn’t have quite the same seductive sway as “technology will save us” – even if the belief is usually put more eloquently than that. When somebody says “technology will save us” people can think of things like solar panels and vaccines – even if the only technology actually being discussed is the Internet. Here, though, it is also vital to approach the question of “the tech” with some historically grounded modesty in mind. For the belief that technology is changing the world and fundamentally altering democracy is nothing new. The history of technology (as an academic field) is filled with texts describing how a new tool was perceived as changing everything – from the compass to the telegraph to the phonograph to the locomotive to the [insert whatever piece of technology you (the reader) can think of]. And such inventions were often accompanied by an often earnest belief that they would change everything for the better! Claims that the Internet will save us invoke déjà vu for those with a familiarity with the history of technology. Carolyn Marvin’s masterful study When Old Technologies Were New (1988) examines the way in which early electrical communications methods were seen at the time of their introduction, and near the book’s end she writes:

    Predictions that strife would cease in a world of plenty created by electrical technology were clichés breathed by the influential with conviction. For impatient experts, centuries of war and struggle testified to the failure of political efforts to solve human problems. The cycle of resentment that fueled political history could perhaps be halted only in a world of electrical abundance, where greed could not impede distributive justice. (206)

Switch out the words “electrical technology” for “Internet technology” and the above sentences could apply to the present (and the PDF forum) without further alterations. After all, PDF was certainly a gathering of “the influential” and of “impatient experts.”

    And whenever “tech” and democracy are invoked in the same sentence it is worth pondering whether the tech is itself democratic, or whether it is simply being claimed that the tech can be used for democratic purposes. Lewis Mumford wrote at length about the difference between what he termed “democratic” and “authoritarian” technics – in his estimation “democratic” systems were small scale and manageable by individuals, whereas “authoritarian” technics represented massive systems of interlocking elements where no individual could truly assert control. While Mumford did not live to write about the Internet, his work makes it very clear that he did not consider computer technologies to belong to the “democratic” lineage. Thus, to follow from Mumford, the Internet appears as a wonderful example of an “authoritarian” technic (it is massive, environmentally destructive, turns users into cogs, runs on surveillance, cannot be controlled locally, etc…) – what PDF argues for is that this authoritarian technology can be used democratically. There is an interesting argument there, and it is one with some merit. Yet such a discussion cannot even occur in the confusing morass that one finds oneself in when “the tech” just means the Internet.

Indeed, by meaning “the Internet” but saying “the tech,” groups like PDF (consciously or not) pull a bait and switch whereby a genuine consideration of “the tech we need” simply becomes a consideration of “the Internet we need.”

    “We”

    Attendees to the PDF conference received a conference booklet upon registration; it featured introductory remarks, a code of conduct, advertisements from sponsors, and a schedule. It also featured a fantastically jarring joke created through the wonders of, perhaps accidental, juxtaposition; however, to appreciate the joke one needed to open the booklet so as to be able to see the front and back cover simultaneously. Here is what that looked like:

[Image: the booklet’s front and back covers viewed side by side (Personal Democracy Forum, 2016)]

    Get it?

    Hilarious.

The cover says “The Tech We Need,” emblazoned in blue over the faces of the conference speakers, and the back is an advertisement for Microsoft stating: “the future is what we make it.” One almost hopes that the layout was intentional. For who the heck is the “we” being discussed? Is it the same “we”? Are you included in that “we”? And this is a question that can be asked of each cover independently of the other: when PDF says “we,” who is included and who is excluded? When Microsoft says “we,” who is included and who is excluded? Of course, this gets muddled even more when you consider that Microsoft was the “presenting sponsor” for PDF and that many of the speakers at PDF have funding ties to Microsoft. The reason this is so darkly humorous is that there is certainly an argument to be made that “the tech we need” has no place for mega-corporations like Microsoft, while at the same time the booklet assures that “the future is what we [Microsoft] make it.” In short: the future is what corporations like Microsoft will make it…which might be very different from the kind of tech we need.

In considering the “we” of PDF it is worth restating that this is a gathering of well-meaning individuals who largely seem to want to approach the idea of “we” with as much inclusivity as possible. Yet defining a “we” is always fraught, and speaking for a “we” is always dangerous; insofar as one can think of PDF with any kind of “we” (or “us”) in mind, the only version of the group that really emerges is one describing the group actually present at the event. And while one can certainly speak about the level (or lack) of diversity at the PDF event, the “we” who came together at PDF is not particularly representative of the world. This was also brought into interesting relief in some other amusing ways: throughout the event one heard numerous variations of the comment “we all have smart phones” – but this did not even really capture the “we” of PDF. While walking down the stairs to a session one day I clearly saw a man (wearing a conference attendee badge) fiddling with a flip-phone – I suppose he wasn’t included in the “we” of “we all have smart phones.” But I digress.

One encountered further issues with the “we” when it came to the political content of the forum. While the booklet stated, and the hosts repeated over and over, that the event was “non-partisan,” such a descriptor is pretty laughable. Those taking to the stage were a procession of people who had cut their teeth working for MoveOn, and the activists represented continually self-identified as hailing from the progressive end of the spectrum. The token conservative speaker who stepped onto the stage even made a self-deprecating joke in which she recognized that she was one of only a handful (if that) of Republicans present. So, again, who is missing from this “we”? One can be a committed leftist and genuinely believe that a figure like Donald Trump is a xenophobic demagogue – and still recognize that some of his supporters might have offered a very interesting perspective to the PDF conversation. After all, the Internet (“the tech”) has certainly been used by movements on the right as well – and used quite effectively at that. But this part of a national “we” was conspicuously absent from the forum, even though they are not nearly so absent from Twitter, Facebook, or the population of people owning smart phones. Again, it is in no way, shape, or form an endorsement of anything that Trump has said to point out that when a forum is held to discuss the Internet and democracy, it is worth having the people you disagree with present.

Another question of the “we” that is worth wrestling with revolves around the way in which events like PDF involve those who offer critical viewpoints. If, as is being argued here, PDF’s basic ideology is that the Internet (“the tech”) is improving people’s lives and will continue to do so (leading towards “personal democracy”) – it is important to note that PDF welcomed several speakers who offered accounts of some of the shortcomings of the Internet. Figures including Sherry Turkle, Kentaro Toyama, Safiya Noble, Kate Crawford, danah boyd, and Douglas Rushkoff all took the stage to deliver some critical points of view – and yet in incorporating such voices into the “we” what occurs is that these critiques function less as genuine retorts and more as safety valves that just blow off a bit of steam. Having Sherry Turkle (not to pick on her) vocally doubt the empathetic potential of the Internet just allows the next speaker (and countless conference attendees) to say “well, I certainly don’t agree with Sherry Turkle.” After all, one of the best ways to inoculate yourself against the charge of unthinking optimism is to periodically turn the microphone over to a critic. But perhaps the most important things such critics say are the qualifications they attach to their comments – thus Turkle says “I’m not anti-technology,” Toyama disparages Facebook only to immediately add “I love Facebook,” and fears regarding the threat posed by AI get laughed off as the paranoia of today’s “apex predators” (rich white men) being concerned that they will lose their spot at the top of the food chain. The environmental costs of the cloud are raised, the biased nature of algorithms is exposed – but these points are couched against a backdrop that says to the assembled technologists “do better,” not “the Internet is a corporately controlled surveillance mall, and it’s overrated.” The heresies that are permitted are those that point out the rough edges that need to be rounded so that the pill can be swallowed. To return to the previous paragraph, this is not to say that PDF needs to invite John Zerzan or Chellis Glendinning to speak…but one thing that would certainly expose the weaknesses of the PDF “we” would be to solicit viewpoints that genuinely come from outside of it. Granted, PDF is more TED talk than FRED talk.

And of course, and most importantly, one must think of the “we” that goes totally unheard. Yes, comments were made about the environmental cost of the cloud, and passing phrases recognized mining – but PDF’s “we” seems mainly to refer to those who use the Internet and Internet-connected devices. Miners, those assembling high-tech devices, e-waste recyclers, and the other victims of those processes are only a hazy phantom presence. They are mentioned in passing, but never fully included in the “we.” PDF’s “the tech we need” is for a “we” that loves the Internet and just wants it to be even better and perhaps a bit nicer, while Microsoft’s “we” in “the future is what we make it” is a “we” that is committed to staying profitable. But amidst such statements there is an even larger group saying: “we are not being included.” That unheard “we” is the same “we” from the classic IWW song “we have fed you all for a thousand years” (Green et al 2016). And, as the second line of that song rings out, “and you hail us still unfed.”

    “Need”

When one looks out upon the world it is almost impossible not to be struck by how much is needed. People need homes, people need not just to be tolerated but to be accepted, people need food, people need peace, people need stability, people need the ability to love without being subject to oppression, people need to be free from bigotry and xenophobia, people need…this list could continue with a litany of despair until we all don sackcloth. But do people need VR headsets? Do people need Facebook or Twitter? Do those in possession of still-functioning high-tech devices need to trade them in every eighteen months? Of course, technology does have a role to play in meeting people’s needs – after all, “shelter” refers to all sorts of technology. Yet when PDF talks about “the tech we need,” the “need” is shaded by what is meant by “the tech,” and as was previously discussed that really means “the Internet.” Therefore it is fair to ask: do people really “need” an iPhone with a slightly larger screen? Do people really need Uber? Do people really need to be able to download five million songs in thirty seconds? While human history is a tale of horror, it requires a funny kind of simplistic hubris to think that World War II could have been prevented if only everybody had been connected on Facebook (to be fair, nobody at PDF was making this argument). Are today’s “needs” (and they are great) really a result of a lack of technology? It seems that we already have much of the tech that is required to meet today’s needs, and we don’t even require new ways to distribute it. Or, to put it clearly at the risk of being grotesque: people in your city are not currently going hungry because they lack the proper app.

The question of “need” flows from both the notion of “the tech” and the notion of “we” – and as was previously mentioned, it would be easy to put forth a compelling argument that “the tech we need” involves water pipes that don’t poison people with lead, but such an argument is not made when “the tech” means the Internet and when the “we” has already reached the top of Maslow’s hierarchy of needs. If one takes a more expansive view of “the tech” and “we,” then the range of what is needed changes accordingly. This issue – the way “tech,” “we,” and “need” intersect – is hardly a new concern. It is what prompted Ivan Illich (1973) to write, in Tools for Conviviality, that:

    People need new tools to work with rather than tools that ‘work’ for them. They need technology to make the most of the energy and imagination each has, rather than more well-programmed energy slaves. (10)

Granted, it is certainly fair to retort “but who is the ‘we’ referred to by Illich” or “why can’t the Internet be the type of tool that Illich is writing about” – but here Illich’s response would be in line with the earlier reference to Mumford. Namely: accusations of technological determinism aside, maybe it’s fair to say that some technologies are oversold, and maybe the occasional emphasis on the way that the Internet helps activists serves as a patina that distracts from what is ultimately an environmentally destructive surveillance system. Is the person tethered to their smart phone being served by that device – or are they serving it? Or, to allow Illich to reply with his own words:

    As the power of machines increases, the role of persons more and more decreases to that of mere consumers. (11)

Mindfulness apps, cameras on phones that can be used to film oppression, new ways of downloading music, programs for raising money online, platforms for connecting people on a political campaign – the user is empowered as a citizen, but this empowerment tends to involve needing the proper apps. And therefore that citizen needs the proper device to run that app, and a good wi-fi connection, and…the list goes on. Under the ideology captured in PDF’s “the tech we need,” to participate in democracy becomes bound up with “to consume the latest in Internet innovation.” Every need can be met, provided that it is the type of need that the Internet can meet. Thus the old canard “to the person with a hammer every problem looks like a nail” finds its modern equivalent in “to the person with a smart phone and a good wi-fi connection, every problem looks like one that can be solved by using the Internet.” But as for needs? Freedom from xenophobia and oppression are real needs – undoubtedly – but the Internet has done a great deal to disseminate xenophobia and prop up oppressive regimes. Continuing to double down on the Internet seems like doing the same thing “we” have been doing and expecting different results because finally there’s an “app for that!”

    It is, again, quite clear that those assembled at PDF came together with well-meaning attitudes, but as Simone Weil (2010) put it:

    Intentions, by themselves, are not of any great importance, save when their aim is directly evil, for to do evil the necessary means are always within easy reach. But good intentions only count when accompanied by the corresponding means for putting them into effect. (180)

The ideology present at PDF emphasizes that the Internet is precisely “the means” for the realization of its attendees’ good intentions. And those who took to the stage spoke rousingly of using Facebook, Twitter, smart phones, and new apps for all manner of positive effects – but hanging in the background (sometimes more clearly than at other times) is the fact that these systems also track their users’ every move and can be used just as easily by those with very different ideas as to what “positive effects” look like. The issue of “need” is therefore ultimately a matter not simply of need but of “ends” – yet in framing things in terms of “the tech we need,” what is missed is the more difficult question of what “ends” we seek. Instead “the tech we need” subtly shifts the discussion towards one of “means.” But, as Jacques Ellul recognized, the emphasis on means – especially technological ones – can just serve to confuse the discussion of ends. As he wrote:

    It must always be stressed that our civilization is one of means…the means determine the ends, by assigning us ends that can be attained and eliminating those considered unrealistic because our means do not correspond to them. At the same time, the means corrupt the ends. We live at the opposite end of the formula that ‘the ends justify the means.’ We should understand that our enormous present means shape the ends we pursue. (Ellul 2004, 238)

The Internet and the raft of devices and platforms associated with it are a set of “enormous present means” – and in celebrating these “means” the ends begin to vanish. It ceases to be a situation where the Internet is the means to a particular end; instead the Internet becomes the means by which one continues to use the Internet so as to correct the current problems with the Internet so that the Internet can finally achieve the…it is a snake eating its own tail.

    And its own tale.

    Conclusion: The New York Ideology

In 1995, Richard Barbrook and Andy Cameron penned an influential article describing what they called “The Californian Ideology,” which they characterized as

    promiscuously combin[ing] the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies. This amalgamation of opposites has been achieved through a profound faith in the emancipatory potential of the new information technologies. In the digital utopia, everybody will be both hip and rich. (Barbrook and Cameron 2001, 364)

As the placing of a state’s name in the title of the ideology suggests, Barbrook and Cameron were setting out to describe the worldview underpinning the firms that were (at that time) nascent in Silicon Valley. They sought to describe the mixture of hip futurism and libertarian politics that worked wonderfully in the boardroom, even if there was now somebody in the boardroom wearing a Hawaiian print shirt – or perhaps jeans and a hoodie. As companies like Google and Facebook have grown, the “Californian Ideology” has been disseminated widely, and though such companies periodically issued proclamations about not being evil and claimed that connecting the world was their goal, they maintained their utopian confidence in the “independence of cyberspace” while directing a distasteful gaze towards the “dinosaurs” of representative democracy that would dare to question their zeal. And though it is a more recent player in the game, one is hard-pressed to find a better example than Uber of the fact that this ideology is alive and well.

The Personal Democracy Forum is not advancing the Californian Ideology. And though the event may have featured a speaker who suggested that the assembled “we” think of the “founding fathers” as start-up founders, the forum continually returned to the questions of democracy. While the Personal Democracy Forum shares the “faith in the emancipatory potential of the new information technologies” with Silicon Valley startups, it seems less “free-wheeling” and more skeptical of “entrepreneurial zeal.” In other words, whereas Barbrook and Cameron spoke of “The Californian Ideology,” what PDF makes clear is that there is also a “New York Ideology,” whose hallmark is an embrace of the positive potential of new information technologies tempered by the belief that such potential can best be reached by taming the excesses of unregulated capitalism. Where the Californian Ideology says “libertarian,” the New York Ideology says “liberation.” Where the Californian Ideology celebrates capital, the New York Ideology celebrates the power found in a high-tech enhanced capitol. The New York Ideology balances the excessive optimism of the Californian Ideology by acknowledging the existence of criticism, and proceeds to neutralize this criticism by making it part and parcel of the celebration of the Internet’s potential. The New York Ideology seeks to correct the hubris of the Californian Ideology by pointing out that it is precisely this hubris that turns many away from the faith in the “emancipatory potential.” If the Californian Ideology is broadcast from the stage at the newest product unveiling or celebratory conference, then the New York Ideology is disseminated from conferences like PDF and the occasional skeptical TED talk. The New York Ideology may be preferable to the Californian Ideology in a thousand ways – but ultimately it is the ideology that manifests itself in the “we” one encounters in the slogan “the tech we need.”

    Or, to put it simply, whereas the Californian Ideology is “wealth meaning,” the New York Ideology is “well-meaning.”

Of course, it is odd and unfair to speak of either ideology as “Californian” or “New York.” California is filled with Californians who do not share in that ideology, and New York is filled with New Yorkers who do not share in that ideology either. Yet to dub what one encounters at PDF “The New York Ideology” is to indicate the way in which current discussions around the Internet are not solely being framed by “The Californian Ideology” but also by a parallel position wherein faith in Internet-enabled solutions puts aside its libertarian sneer to adopt a democratic smile. One could just as easily call the New York Ideology the “Tech On Stage Ideology” or the “Civic Tech Ideology” – perhaps it would be better to refer to the Californian Ideology as the SV Ideology (Silicon Valley) and the New York Ideology as the CV Ideology (civic tech). But if the Californian Ideology refers to the tech campus in Silicon Valley, then the New York Ideology refers to the foundation based in New York – one that may very well be getting much of its funding from the corporations that call Silicon Valley home. While Uber sticks with the Californian Ideology, companies like Facebook have begun transitioning to the New York Ideology so that they can have their panoptic technology and their playgrounds too. Meanwhile, new tech companies emerging in New York (like Kickstarter and Etsy) make positive proclamations about ethics and democracy while making it seem that ethics and democracy are just more consumption choices that one picks from a list of downloadable apps.

    The Personal Democracy Forum is a fascinating event. It is filled with intelligent individuals who speak of democracy with unimpeachable sincerity, and activists who really have been able to use the Internet to advance their causes. But despite all of this, the ideological emphasis on “the tech we need” remains based upon a quizzical notion of “need,” a problematic concept of “we,” and a reductive definition of “tech.” For statements like “the tech we need” are not value neutral – and even if the surface ethics are moving and inspirational, sometimes a problematic ideology is most easily disseminated when it takes care to dispense with ideologues. And though the New York Ideology is much more subtle than the Californian Ideology – and makes space for some critical voices – it remains a vehicle for disseminating an optimistic faith that a technologically enhanced Moses shall lead us into the high-tech promised land.

    The 2016 Personal Democracy Forum put forth an inspirational and moving vision of “the tech we need.”

    But when it comes to promises of technological salvation, isn’t it about time that “we” stopped getting our hopes up?

    Coda

I confess, I am hardly free of my own ideological biases. And I recognize that everything written here may simply be dismissed by those who find it hypocritical that I composed such remarks on a computer and then posted them online. But I would say that the more we find ourselves using technology, the more careful we must be that we do not allow ourselves to be used by that technology.

    And thus, I shall simply conclude by once more citing a dead, but prescient, pessimist:

    I have no illusions that my arguments will convince anyone. (Ellul 1994, 248)

    _____

Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently working towards a PhD in the History and Sociology of Science department at the University of Pennsylvania. His research areas include media refusal and resistance to technology, ideologies that develop in response to technological change, and the ways in which technology factors into ethical philosophy – particularly with regard to the way in which Jewish philosophers have written about ethics and technology. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, where an earlier version of this post first appeared, and is a frequent contributor to The b2 Review Digital Studies section.

    _____

    Works Cited

    • Barbrook, Richard and Andy Cameron. 2001. “The Californian Ideology.” In Peter Ludlow, ed., Crypto Anarchy, Cyberstates and Pirate Utopias. Cambridge: MIT Press. 363-387.
    • Ellul, Jacques. 2004. The Political Illusion. Eugene, OR: Wipf and Stock.
    • Ellul, Jacques. 1994. A Critique of the New Commonplaces. Eugene, OR: Wipf and Stock.
    • Green, Archie, David Roediger, Franklin Rosemont, and Salvatore Salerno. 2016. The Big Red Songbook: 250+ IWW Songs! Oakland, CA: PM Press.
    • Illich, Ivan. 1973. Tools for Conviviality. New York: Harper and Row.
    • Marvin, Carolyn. 1988. When Old Technologies Were New: Thinking About Electric Communication in the Late Nineteenth Century. New York: Oxford University Press.
    • Marx, Leo. 1997. “‘Technology’: The Emergence of a Hazardous Concept.” Social Research 64:3 (Fall). 965-988.
    • Mumford, Lewis. 1964. “Authoritarian and Democratic Technics.” in Technology and Culture, 5:1 (Winter). 1-8.
    • Weil, Simone. 2010. The Need for Roots. London: Routledge.
  • Bradley J. Fest – The Function of Videogame Criticism

    Bradley J. Fest – The Function of Videogame Criticism

    a review of Ian Bogost, How to Talk about Videogames (University of Minnesota Press, 2015)

    by Bradley J. Fest

    ~

Over the past two decades or so, the study of videogames has emerged as a rigorous, exciting, and transforming field. During this time there have been a few notable trends in game studies (which is generally the name applied to the study of video and computer games). The first wave, beginning roughly in the mid-1990s, was characterized by wide-ranging debates between scholars and players about what they were actually studying, what aspects of videogames were most fundamental to the medium.[1] Like arguments about whether editing or mise-en-scène was more crucial to the meaning-making of film, the early, sometimes heated conversations in the field were primarily concerned with questions of form. Scholars debated between two perspectives known as narratology and ludology, and asked whether narrative or play was more theoretically important for understanding what makes videogames unique.[2] By the middle of the 2000s, however, this debate appeared to be settled (as perhaps ultimately unproductive and distracting—i.e., obviously both narrative and play are important). Over the past decade, a second wave of scholars has emerged who have moved on to more technical, theoretical concerns, on the one hand, and more social and political issues, on the other (frequently at the same time). Writers such as Patrick Crogan, Nick Dyer-Witheford, Alexander R. Galloway, Patrick Jagoda, Lisa Nakamura, Greig de Peuter, Adrienne Shaw, McKenzie Wark, and many, many others write about how issues such as control and empire, race and class, gender and sexuality, labor and gamification, networks and the national security state, action and procedure can pertain to videogames.[3] Indeed, from a wide sampling of contemporary writing about games, it appears that the old anxieties regarding the seriousness of its object have been put to rest. Of course games are important. They are becoming a dominant cultural medium; they make billions of dollars; they are important political allegories for life in the twenty-first century; they are transforming social space along with labor practices; and, after what many consider a renaissance in independent game development over the past decade, some of them are becoming quite good.

    Ian Bogost has been one of the most prominent voices in this second wave of game criticism. A media scholar, game designer, philosopher, historian, and professor of interactive computing at the Georgia Institute of Technology, Bogost has published a number of influential books. His first, Unit Operations: An Approach to Videogame Criticism (2006), places videogames within a broader theoretical framework of comparative media studies, emphasizing that games deserve to be approached on their own terms, not only because they are worthy of attention in and of themselves but also because of what they can show us about the ways other media operate. Bogost argues that “any medium—poetic, literary, cinematic, computational—can be read as a configurative system, an arrangement of discrete, interlocking units of expressive meaning. I call these general instances of procedural expression, unit operations” (2006, 9). His second book, Persuasive Games: The Expressive Power of Videogames (2007), extends his emphasis on the material, discrete processes of games, arguing that they can and do make arguments; that is, games are rhetorical, and they are rhetorical by virtue of what they and their operator can do, their procedures: games make arguments through “procedural rhetoric.”[4] The publication of Persuasive Games in particular—which he promoted with an appearance on The Colbert Report (2005–14)—saw Bogost emerge as a powerful voice in the broad cohort of second wave writers and scholars.

    But I feel that the publication of Bogost’s most recent book, How to Talk about Videogames (2015), might very well end up signaling the beginning of a third phase of videogame criticism. If the first task of game criticism was to formally define its object, and the second wave of game studies involved asking what games can and do say about the world, the third phase might see critics reflecting on their own processes and procedures, thinking, not necessarily about what videogames are and do, but about what videogame criticism is and does. How to Talk about Videogames is a book that frequently poses the (now quite old) question: what is the function of criticism at the present time? In an industry dominated by multinational media megaconglomerates, what should the role of (academic) game criticism be? What can a handful of researchers and scholars possibly do or say in the face of such a massive, implacable, profit-driven industry, where every announcement about future games further stokes its rabid fan base of slobbering, ravening hordes to spend hundreds of dollars and thousands of hours consuming a form known for its spectacular violence, ubiquitous misogyny, and myopic tribalism? What is the point of writing about games when the videogame industry appears to happily carry on as if nothing is being said at all, impervious to any conversation that people may be having about its products beyond what “fans” demand?

    To read the introduction and conclusion of Bogost’s most recent book, one might think that, suggestions about their viability aside, both the videogame industry and the critical writing surrounding it are in serious crisis, and the matter of the cultural status of the videogame has hardly been put to rest. As a scholar, critic, and designer who has been fairly consistent in positively exploring what digital games can do, what they can uniquely accomplish as a process-based medium, it is striking, at least to this reviewer, that Bogost begins by anxiously admitting,

    whenever I write criticism of videogames, someone strongly invested in games as a hobby always asks the question “is this parody?” as if only a miscreant or a comedian or a psychopath would bother to invest the time and deliberateness in even thinking, let alone writing about videogames with the seriousness that random, anonymous Internet users have already used to write about toasters, let alone deliberate intellectuals about film or literature! (Bogost 2015, xi–xii)

    Bogost calls this kind of attention to the status of his critical endeavor in a number of places in How to Talk about Videogames. The book shows him involved in that untimely activity of silently but implicitly assessing his body of work, reflectively approaching his critical task with cautious trepidation. In a variety of moments from the opening and closing of the book, games and criticism are put into serious question. Videogames are puerile, an “empty diversion” (182), and without value; “games are grotesque. . . . [they] are gross, revolting, heaps of arbitrary anguish” (1); “games are stupid” (9); “that there could be a game criticism [seems] unlikely and even preposterous” (181). In How to Talk about Videogames, Bogost, at least in some ways, is giving up his previous fight over whether or not videogames are serious aesthetic objects worthy of the same kind of hermeneutic attention given to more established art forms.[5] If games are predominantly treated as “perversion, excess” (183), a symptom of “permanent adolescence” (180), as unserious, wasteful, unproductive, violently sadistic entertainments—perhaps there is a reason. How to Talk about Videogames shows Bogost turning an intellectual corner toward a decidedly ironic sense of his role as a critic and the worthiness of his critical object.

    Compare Bogost’s current pessimism with the optimism of his previous volume, How to Do Things with Videogames (2011), to which How to Talk about Videogames functions as a kind of sequel or companion. In this earlier book, he is rather more affirmative about the future of the videogame industry (and, by proxy, videogame criticism):

    What if we allowed that videogames have many possible goals and purposes, each of which couples with many possible aesthetics and designs to create many possible player experiences, none of which bears any necessary relationship to the commercial videogame industry as we currently know it. The more games can do, the more the general public will become accepting of, and interested in, the medium in general. (Bogost 2011, 153)

    2011’s How to Do Things with Videogames aims to bring to the table things that previous popular and scholarly approaches to videogames had ignored in order to show all the other ways that videogames operate, what they are capable of beyond mere mimetic simulation or entertaining distraction, and how game criticism might allow their audiences to expand beyond the province of the “gamer” to mirror the diversified audiences of other media. Individual chapters are devoted to how videogames produce empathy and inspire reverence; they can be vehicles for electioneering and promotion; games can relax, titillate, and habituate; they can be work. Practicing what he calls “media microecology,” a critical method that “seeks to reveal the impact of a medium’s properties on society . . . through a more specialized, focused attention . . . digging deep into one dark, unexplored corner of a media ecosystem” (2011, 7), Bogost argues that game criticism should be attentive to more than simply narrative or play. The debates that dominated the early days of critical game studies, in this regard, only account for a rather limited view of what games can do. Appearing at a time when many were arguing that the medium was beginning to reach aesthetic maturity, Bogost’s 2011 book sounds a note of hope and promise for the future of game studies and the many unexplored possibilities for game design.

    How to Talk about Videogames

    I cannot really overstate, however, the ways in which How to Talk about Videogames, published four years later, shows Bogost reversing tack, questioning his entire enterprise.[6] Even with the appearance of such a serious, well-received game as Gone Home (2013)—to which he devotes a particularly scathing chapter about what the celebration of an ostensibly adolescent game tells us about contemporaneity—this is a book that repeatedly emphasizes the cultural ghetto in which videogames reside. Criticism devoted exclusively to this form risks being “subsistence criticism. . . . God save us from a future of game critics, gnawing on scraps like the zombies that fester in our objects of study” (188). Despite previous claims about videogames “[helping] us expose and interrogate the ways we engage the world in general, not just the ways that computational systems structure or limit that experience” (Bogost 2006, 40), How to Talk about Videogames is, at first glance, a book that raises the question of not only how videogames should be talked about, but whether they have anything to say in the first place.

    But it is difficult to gauge the seriousness of Bogost’s skepticism and reluctance given a book filled with twenty short essays of highly readable, informative, and often compelling criticism. (The disappointingly short essay, “The Blue Shell Is Everything That’s Wrong with America”—in which he writes: “This is the Blue Shell of collapse, the Blue Shell of financial hubris, the Blue Shell of the New Gilded Age” [26]—particularly stands out in the way that it reads an important if overlooked aspect of a popular game in terms of larger social issues.) For it is, really, somewhat unthinkable that someone who has written seven books on the subject would arrive at the conclusion that “videogames are a lot like toasters. . . . Like a toaster, a game is both appliance and hearth, both instrument and aesthetic, both gadget and fetish. It’s preposterous to do game criticism, like it’s preposterous to do toaster criticism” (ix and xii).[7] Bogost’s point here is rhetorical, erring on the side of hyperbole in order to emphasize how videogames are primarily process-based—that they work and function like toasters perhaps more than they affect and move like films or novels (a claim with which I imagine many would disagree), and that there is something preposterous in writing criticism about a process-based technology. A decade after emphasizing videogames’ procedurality in Unit Operations, this is a way for him to restate and reemphasize these important claims for the more popular audience intended for How to Talk about Videogames. Games involve actions, which make them different from other media that can be more passively absorbed. This is why videogames are often written about in reviews “full of technical details and thorough testing and final, definitive scores delivered on improbably precise numerical scales” (ix). Bogost is clear. He is not a reviewer. He is not assessing games’ ability to “satisfy our need for leisure [as] their only function.” He is a critic and the critic’s activity, even if his object resembles a toaster, is different.

But though it is apparent why games might require a different kind of criticism than other media, what remains unclear is what Bogost believes the role of the critic ought to be. He says, contradicting the conclusion of How to Do Things with Videogames, that “criticism is not conducted to improve the work or the medium, to win over those who otherwise would turn up their noses at it. . . . Rather, it is conducted to get to the bottom of something, to grasp its form, context, function, meaning, and capacities” (xii). This seems like somewhat of a mistake, and a mistake that ignores both the history of criticism and Bogost’s own practice as a critic. Yes, of course criticism should investigate its object, but even Matthew Arnold, who emphasized “disinterestedness . . . keeping aloof from . . . ‘the practical view of things,’” also understood that such an approach could establish “a current of fresh and true ideas” (Arnold 1993 [1864], 37 and 49). No matter how disinterested, criticism can change the ways that art and the world are conceived and thought about. Indeed, only a sentence later it is difficult to discern what precisely Bogost believes the function of videogame criticism to be if not for improving the work, the medium, the world, if not for establishing a current from which new ideas might emerge. He writes that criticism can “venture so far from ordinariness of a subject that the terrain underfoot gives way from manicured path to wilderness, so far that the words that we would spin tousle the hair of madness. And then, to preserve that wilderness and its madness, such that both the works and our reflections on them become imbricated with one another and carried forward into the future where others might find them anew” (xii; more on this in a moment). It is clear that Bogost understands the mode of the critic to be disinterested and objective, to answer the question “What is even going on here?” (x), but it remains unclear why such an activity would even be necessary or worthwhile, and indeed, there is enough in the book that points to criticism being a futile, unnecessary, parodic, parasitic, preposterous endeavor with no real purpose or outcome. In other words, he may say how to talk about videogames, but not why anyone would ever really want to do so.

    I have at least partially convinced myself that Bogost’s claims about videogames being more like toasters than other art forms, along with the statements above regarding the disreputable nature of videogames, are meant as rhetorical provocations, ironic salvos to inspire from others more interesting, rigorous, thoughtful, and complex critical writing, both of the popular and academic stripe. I also understand that, as he did in Unit Operations, Bogost balks at the idea of a critical practice wholly devoted to videogames alone: “the era of fields and disciplines ha[s] ended. The era of critical communities ha[s] ended. And the very idea of game criticism risks Balkanizing games writing from other writing, severing it from the rivers and fields that would sustain it” (187). But even given such an understanding, it is unclear who precisely is suggesting that videogame criticism should be a hermetically sealed niche cut off from the rest of the critical tradition. It is also unclear why videogame criticism is so preposterous, why writing it—even if a critic’s task is limited to getting “to the bottom of something”—is so divorced from the current of other works of cultural criticism. And finally, given what are, at the end of the day, some very good short essays on games that deserve a thoughtful readership, it is unclear why Bogost has framed his activity in such a negatively self-aware fashion.

    So, rather than pursue a discussion about the relative merits and faults of Bogost’s critical self-reflexivity, I think it worth asking what changed between his 2011 and 2015 books, what took him from being a cheerleader—albeit a reticent, tempered, and disinterested one—to questioning the very value of videogame criticism itself. Why does he change from thinking about the various possibilities for doing things with videogames to thinking that “entering a games retail outlet is a lot like entering a sex shop or a liquor store . . . game shops are still vaguely unseemly” (182)?[8] I suspect that such events as 2014’s Gamergate—when independent game designer Zoe Quinn, critic Anita Sarkeesian, and others were threatened and harassed for their feminist views—the generally execrable level of discourse found on internet comments pages, and the questionable cultural identity of the “gamer,” probably account for some of Bogost’s malaise.[9] Indeed, most of the essays found in How to Talk about Videogames initially appeared online, largely in The Atlantic (where he is an editor) and Gamasutra, and, I have to imagine, suffered for it in their comments sections. With this change in audience and platform, it seems to follow that the opening and closing of How to Talk about Videogames reflect a general exhaustion with the level of discourse from fans, companies, and internet trolls. How can criticism possibly thrive or have an impact in a community that so frequently demonstrates its intolerance and rage toward other modes of thinking and being that might upset its worldview and sense of cultural identity? How does one talk to those who will not listen?

    And if these questions perhaps sound particularly apt today—that the “gamer” might bear an awfully striking resemblance to other headline-grabbing individuals and groups dominating the public discussion in the months after the publication of Bogost’s book, namely Donald J. Trump and his supporters—they should. I agree with Bogost that it can be difficult to see the value of criticism at a time when many United States citizens appear, at least on the surface, to be actively choosing to be uncritical. (As Philip Mirowski argues, the promotion of “ignorance [is] the lynchpin in the neoliberal project” [2013, 96].) Given such a discursive landscape, what is the purpose of writing, even in Bogost’s admirably clear (yet at times maddeningly spare) prose, if no amount of stylistic precision or rhetorical complexity—let alone a mastery of basic facts—can influence one’s audience? How to Talk about Videogames is framed as a response to the anti-intellectual atmosphere of the middle of the second decade of the twenty-first century, and it is an understandably despairing one. As such, it is not surprising that Bogost concludes that criticism has no role to play in improving the medium (or perhaps the world) beyond mere phenomenological encounter and description given the social fabric of life in the 2010s. In a time of vocally racist demagoguery, an era witnessing a rising tide of reactionary nationalism in the US and around the world, a period during which it often seems like no words of any kind can have any rhetorical effect at all—procedurally or otherwise—perhaps the best response is to be quiet. But I also think that this is to misunderstand the function of critical thought, regardless of what its object might be.

To be sure, videogame creators have probably not yet produced a Citizen Kane (1941), and videogame criticism has not yet produced a work like Erich Auerbach’s Mimesis (1946). I am unconvinced, however, that such future accomplishments remain out of reach, that videogames are barred from profound aesthetic expression, and that writing about games is precluded from the heights attained by previous criticism simply because of some ill-defined aspect of the medium which prevents it from ever aspiring to anything beyond mere craft. Is a study of the Metal Gear series (1987–2015) similar to Roland Barthes’s S/Z (1970) really all that preposterous? Is Mario forever denied his own Samuel Johnson simply because he is composed of code rather than words? For if anything is unclear about Bogost’s book, it is what precisely prohibits videogames from having the effects and impacts of other art forms, why they are restricted to the realm of toasters, incapable of anything beyond adolescent poiesis. Indeed, Bogost’s informative and incisive discussion of Ms. Pac-Man (1981), his thought-provoking interpretation of Mountain (2014), and the many moments of accomplished criticism in his previous books—for example, his masterful discussion of the “figure of fascination” in Unit Operations—betray such claims.[10]

    Matthew Arnold once famously suggested that creativity and criticism were intimately linked, and I believe it might be worthwhile to remember this for the future of videogame criticism:

    It is the business of the critical power . . . “in all branches of knowledge, theology, philosophy, history, art, science, to see the object as in itself it really is.” Thus it tends, at last, to make an intellectual situation of which the creative power can profitably avail itself. It tends to establish an order of ideas, if not absolutely true, yet true by comparison with that which it displaces; to make the best ideas prevail. Presently these new ideas reach society, the touch of truth is the touch of life, and there is a stir and growth everywhere; out of this stir and growth come the creative epochs of literature. (Arnold 1993 [1864], 29)

In other words, criticism has a vital role to play in the development of an art form, especially if an art form is experiencing contraction or stagnation. Whatever disagreements I might have with Arnold, I too believe that criticism and creativity are indissolubly linked, and further, that criticism has the power to shape and transform the world. Bogost says that “being a critic is not an enjoyable job . . . criticism is not pleasurable” (x). But I suspect that there may still be many who share Arnold’s view of criticism as a creative activity, and maybe the problem is not that videogame criticism is akin to preposterous toaster criticism, but that the function of videogame criticism at the present time is to expand its own sense of what it is doing, of what it is capable, of how and why it is written. When Bogost says he wants “words that . . . would . . . tousle the hair of madness,” why not write in such a fashion (Bogost’s controlled style rarely approaches madness), expanding criticism beyond mere phenomenological summary at best or zombified parasitism at worst? Consider, for instance, Jonathan Arac: “Criticism is literary writing that begins from previous literary writing. . . . There need not be a literary avant-garde for criticism to flourish; in some cases criticism itself plays a leading cultural role” (1989, 7). If we are to take seriously Bogost’s point about how the overwhelmingly positive reaction to Gone Home reveals the aesthetic and political impoverishment of the medium, then it is disappointing to see someone so well-positioned to take a leading cultural role in shaping the conversation about how videogames might change or transform surrendering the field.

Forget analogies. What if videogame criticism were to begin not from comparing games to toasters but from previous writing, from the history of criticism, from literature and theory, from theories of art and architecture and music, from rhetoric and communication, from poetry? For, given the complex mediations present in even the simplest games—i.e., games not only involve play and narrative, but raise concerns about mimesis, music, sound, spatiality, sociality, procedurality, interface effects, et cetera—it makes less and less sense to divorce or sequester games from other forms of cultural study or to think that videogames are so unique that game studies requires its own critical modality. If Bogost implores game critics not to limit themselves to a strictly bound niche field uninformed by other spheres of social and cultural inquiry, if game studies is to go forward into a metacritical third wave where it can become interested in what makes videogames different from other forms while remaining self-reflexively aware of the variety of established and interconnecting modes of cultural criticism from which the field can only benefit, then thinking about the function of criticism historically should guide how and why games are written about at the present time.

    Before concluding, I should also note that something else perhaps changed between 2011 and 2015, namely, Bogost’s alignment with the philosophical movements of speculative realism and object-oriented ontology. In 2012, he published Alien Phenomenology, or What It’s Like to Be a Thing, a book that picks up some of the more theoretical aspects of Unit Operations and draws upon the work of Graham Harman and other anti-correlationists to pursue a flat ontology, arguing that the job of the philosopher “is to amplify the black noise of objects to make the resonant frequencies of the stuffs inside them hum in credibly satisfying ways. Our job is to write the speculative fictions of their processes, their unit operations” (Bogost 2012, 34). Rather than continue pursuing an anthropocentric, correlationist philosophy that can only think about objects in relation to human consciousness, Bogost claims that “the answer to correlationism is not the rejection of any correlate but the acknowledgment of endless ones, all self-absorbed, obsessed by givenness rather than by turpitude” (78). He suggests that philosophy should extend the possibility of phenomenological encounter to all objects, to all units, in his parlance; let phenomenology be alien and weird; let toasters encounter tables, refrigerators, books, climate change, Pittsburgh, Higgs boson particles, the 2016 Electronic Entertainment Expo, bagels, et cetera.[11]

    Though this is not the venue to pursue a broader discussion of Bogost’s philosophical writing, I mention his speculative turn because it seems important for understanding his changing attitudes about criticism. That is, as Graham Harman’s 2012 essay, “The Well-Wrought Broken Hammer,” negatively demonstrates, it is unclear what a flat ontology has to say, if anything, about art, what such a philosophy can bring to critical, hermeneutic activity.[12] Indeed, regardless of where one stands with regard to object-oriented ontology and other speculative realisms, what these philosophies might offer to critics seems to be one of the more vexing and polarizing intellectual questions of our time. Hermeneutics may very well prove inescapably “correlationist,” and, indeed, no matter how disinterested, historical. It is an open question whether or not one can ground a coherent and worthwhile critical practice upon a flat ontology. I am tempted to suspect not. I also suspect that the current trends in continental philosophy, at the end of the day, may not be really interested in criticism as such, and perhaps that is not really such a big deal. Criticism, theory, and philosophy are not synonymous activities nor must they be. (The question about criticism vis-à-vis alien phenomenology also appears to have motivated the Object Lessons series that Bogost edits.) This is all to say, rather than ground videogame criticism in what may very well turn out to be an intellectual fad whose possibilities for writing worthwhile criticism remain somewhat dubious, perhaps there may be more ripe currents and streams—namely, the history of criticism—that can inform how we write about videogames. Criticism may be steered by keeping in view many polestars; let us not be overly swayed by what, for now, burns brightest. For an area of humanistic inquiry that is still very much emerging, it seems a mistake to assume it can and should be nothing more than toaster criticism.

In this review I have purposefully made few claims about the state of videogames. This is partly because I do not feel that any more work needs to be done to justify writing about the medium. It is also partly because I feel that any broad statement about the form would be an overgeneralization at this point. There are too many games being made in too many places by too many different people for any all-encompassing statement about the state of videogame art to be all that coherent. (In this, I think Bogost’s sense of the need for a media microecology of videogames is still apropos.) But I will say that the state of videogame criticism—and, strangely enough, particularly the academic kind—is one of the few places where humanistic inquiry seems, at least to me, to be growing and expanding rather than contracting or ossifying. Such a generally positive and optimistic statement about a field of the humanities may not accord with present conceptions of academic activity (indeed, it might even be unfashionable!), which tend more generally to despair about the humanities, and rightfully so. Admitting that some modes of criticism might be, at least in some ways, exhausted would be an important caveat, especially given how the past few years have seen a considerable amount of reflection about contemporary modes of academic criticism—e.g., Rita Felski’s The Limits of Critique (2015) or Eric Hayot’s “Academic Writing, I Love You. Really, I Do” (2014). But I think that, given how the anti-intellectual miasma that has long been present in US life has intensified in recent years, creeping into seemingly every discourse, one of the really useful functions of videogame criticism may very well be its potential ability to allow reflection on the function of criticism itself in the twenty-first century. If one of the most prominent videogame critics is calling his activity “preposterous” and his object “adolescent,” this should be a cause for alarm, for such claims cannot help but perpetuate present views about the worthlessness of the humanities. So, I would like to modestly suggest that, rather than look to toasters and widgets to inform how we talk about videogames, let us look to critics and what they have written. Edward W. Said once wrote: “for in its essence the intellectual life—and I speak here mainly about the social sciences and the humanities—is about the freedom to be critical: criticism is intellectual life and, while the academic precinct contains a great deal in it, its spirit is intellectual and critical, and neither reverential nor patriotic” (1994, 11). If one can approach videogames—of all things!—in such a spirit, perhaps other spheres of human activity can rediscover their critical spirit as well.

    _____

Bradley J. Fest will begin teaching writing this fall at Carnegie Mellon University. His work has appeared or is forthcoming in boundary 2 (interviews here and here), Critical Quarterly, Critique, David Foster Wallace and “The Long Thing” (Bloomsbury, 2014), First Person Scholar, The Silence of Fallout (Cambridge Scholars, 2013), Studies in the Novel, and Wide Screen. He is also the author of a volume of poetry, The Rocking Chair (Blue Sketch, 2015), and a chapbook, “The Shape of Things,” which was selected as a finalist for the 2015 Tomaž Šalamun Prize and is forthcoming in Verse. Recent poems have appeared in Empty Mirror, PELT, PLINTH, TXTOBJX, and Small Po(r)tions. He previously reviewed Alexander R. Galloway’s The Interface Effect for The b2 Review “Digital Studies.”

    _____

    NOTES

    [1] On some of the first wave controversies, see Aarseth (2001).

    [2] For a representative sample of essays and books in the narratology versus ludology debate from the early days of academic videogame criticism, see Murray (1997 and 2004), Aarseth (1997, 2003, and 2004), Juul (2001), and Frasca (2003).

[3] For representative texts, see Crogan (2011), Dyer-Witheford and de Peuter (2009), Galloway (2006a and 2006b), Jagoda (2013 and 2016), Nakamura (2009), Shaw (2014), and Wark (2007). My claims about the vitality of the field of game studies are largely a result of having read these and other critics. There have also been a handful of interesting “videogame memoirs” published recently. See Bissell (2010) and Clune (2015).

    [4] Bogost defines procedurality as follows: “Procedural representation takes a different form than written or spoken representation. Procedural representation explains processes with other processes. . . . [It] is a form of symbolic expression that uses process rather than language” (2007, 9). For my own discussion of proceduralism, particularly with regard to The Stanley Parable (2013) and postmodern metafiction, see Fest (forthcoming 2016).

    [5] For instance, in the concluding chapter of Unit Operations, Bogost writes powerfully and convincingly about the need for a comparative videogame criticism in conversation with other forms of cultural criticism, arguing that “a structural change in our thinking must take place for videogames to thrive, both commercially and culturally” (2006, 179). It appears that the lack of any structural change in the nonetheless wildly thriving—at least financially—videogame industry has given Bogost serious pause.

    [6] Indeed, at one point he even questions the justification for the book in the first place: “The truth is, a book like this one is doomed to relatively modest sales and an even more modest readership, despite the generous support of the university press that publishes it and despite the fact that I am fortunate enough to have a greater reach than the average game critic” (Bogost 2015, 185). It is unclear why the limited reach of his writing might be so worrisome to Bogost given that, historically, the audience for, say, poetry criticism has never been all that large.

    [7] In addition to those previously mentioned, Bogost has also published Racing the Beam: The Atari Video Computer System (2009) and, with Simon Ferrari and Bobby Schweizer, Newsgames: Journalism at Play (2010). Also forthcoming is Play Anything: The Pleasure of Limits, the Uses of Boredom, and the Secret of Games (2016).

    [8] This is, to be sure, a somewhat confusing point. Are not record stores, book stores, and video stores (if such things still exist), along with tea shops, shoe stores, and clothing stores “retail establishment[s] devoted to a singular practice” (Bogost 2015, 182–83)? Are all such establishments unseemly because of the same logic? What makes a game store any different?

    [9] For a brief overview of Gamergate, see Wingfield (2014). For a more detailed discussion of both the cultural and technological underpinnings of Gamergate, with a particular emphasis on the relationship between the algorithmic governance of sites such as Reddit or 4chan and online misogyny and harassment, see Massanari’s (2015) important essay. For links to a number of other articles and essays on gaming and feminism, see Ligman (2014) and The New Inquiry (2014). For essays about contemporary “gamer” culture, see Williams (2014) and Frase (2014). On gamers, Bogost writes in a chapter titled “The End of Gamers” from his previous book: “as videogames broaden in appeal, being a ‘gamer’ will actually become less common, if being a gamer means consuming games as one’s primary media diet or identifying with videogames as a primary part of one’s identity” (2011, 154).

    [10] See Bogost (2006, 73–89). Also, to be fair, Bogost devotes a paragraph of the introduction of How to Talk about Videogames to the considerable affective properties of videogames, but concludes the paragraph by saying that games are “Wagnerian Gesamtkunstwerk-flavored chewing gum” (Bogost 2015, ix), which, I feel, considerably undercuts whatever aesthetic value he had just ascribed to them.

    [11] In Alien Phenomenology Bogost calls such lists “Latour litanies” (2012, 38) and discusses this stylistic aspect of object-oriented ontology at some length in the chapter, “Ontography” (35–59).

    [12] See Harman (2012). Bogost addresses such concerns in the conclusion of Alien Phenomenology, responding to criticism about his study of the Atari 2600: “The platform studies project is an example of alien phenomenology. Yet our efforts to draw attention to hardware and software objects have been met with myriad accusations of human erasure: technological determinism most frequently, but many other fears and outrages about ‘ignoring’ or ‘conflating’ or ‘reducing,’ or otherwise doing violence to ‘the cultural aspects’ of things. This is a myth” (2012, 132).


    WORKS CITED

    • Aarseth, Espen. 1997. Cybertext: Perspectives on Ergodic Literature. Baltimore: Johns Hopkins University Press.
    • ———. 2001. “Computer Game Studies, Year One.” Game Studies 1, no. 1. http://gamestudies.org/0101/editorial.html.
    • ———. 2003. “Playing Research: Methodological Approaches to Game Analysis.” Game Approaches: Papers from spilforskning.dk Conference, August 28–29. http://hypertext.rmit.edu.au/dac/papers/Aarseth.pdf.
    • ———. 2004. “Genre Trouble: Narrativism and the Art of Simulation.” In First Person: New Media as Story, Performance, and Game, edited by Noah Wardrip-Fruin and Pat Harrigan, 45–55. Cambridge, MA: MIT Press.
    • Arac, Jonathan. 1989. Critical Genealogies: Historical Situations for Postmodern Literary Studies. New York: Columbia University Press.
    • Arnold, Matthew. 1993 (1864). “The Function of Criticism at the Present Time.” In Culture and Anarchy and Other Writings, edited by Stefan Collini, 26–51. New York: Cambridge University Press.
    • Bissell, Tom. 2010. Extra Lives: Why Video Games Matter. New York: Pantheon.
    • Bogost, Ian. 2006. Unit Operations: An Approach to Videogame Criticism. Cambridge, MA: MIT Press.
    • ———. 2007. Persuasive Games: The Expressive Power of Videogames. Cambridge, MA: MIT Press.
    • ———. 2009. Racing the Beam: The Atari Video Computer System. Cambridge, MA: MIT Press.
    • ———. 2011. How to Do Things with Videogames. Minneapolis: University of Minnesota Press.
    • ———. 2012. Alien Phenomenology, or What It’s Like to Be a Thing. Minneapolis: University of Minnesota Press.
    • ———. 2015. How to Talk about Videogames. Minneapolis: University of Minnesota Press.
    • ———. Forthcoming 2016. Play Anything: The Pleasure of Limits, the Uses of Boredom, and the Secret of Games. New York: Basic Books.
    • Bogost, Ian, Simon Ferrari, and Bobby Schweizer. 2010. Newsgames: Journalism at Play. Cambridge, MA: MIT Press.
    • Clune, Michael W. 2015. Gamelife: A Memoir. New York: Farrar, Straus and Giroux.
    • Crogan, Patrick. 2011. Gameplay Mode: War, Simulation, and Technoculture. Minneapolis: University of Minnesota Press.
    • Dyer-Witheford, Nick, and Greig de Peuter. 2009. Games of Empire: Global Capitalism and Video Games. Minneapolis: University of Minnesota Press.
    • Felski, Rita. 2015. The Limits of Critique. Chicago: University of Chicago Press.
    • Fest, Bradley J. Forthcoming 2016. “Metaproceduralism: The Stanley Parable and the Legacies of Postmodern Metafiction.” “Videogame Adaptation,” edited by Kevin M. Flanagan, special issue, Wide Screen.
    • Frasca, Gonzalo. 2003. “Simulation versus Narrative: Introduction to Ludology.” In The Video Game Theory Reader, edited by Mark J. P. Wolf and Bernard Perron, 221–36. New York: Routledge.
    • Frase, Peter. 2014. “Gamer’s Revanche.” Peter Frase (blog), September 3. http://www.peterfrase.com/2014/09/gamers-revanche/.
    • Galloway, Alexander R. 2006a. “Warcraft and Utopia.” Ctheory.net, February 16. http://www.ctheory.net/articles.aspx?id=507.
    • ———. 2006b. Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press.
    • Harman, Graham. 2012. “The Well-Wrought Broken Hammer: Object-Oriented Literary Criticism.” New Literary History 43, no. 2: 183–203.
    • Hayot, Eric. 2014. “Academic Writing, I Love You. Really, I Do.” Critical Inquiry 41, no. 1: 53–77.
    • Jagoda, Patrick. 2013. “Gamification and Other Forms of Play.” boundary 2 40, no. 2: 113–44.
    • ———. 2016. Network Aesthetics. Chicago: University of Chicago Press.
    • Juul, Jesper. 2001. “Games Telling Stories? A Brief Note on Games and Narratives.” Game Studies 1, no. 1. http://www.gamestudies.org/0101/juul-gts/.
    • Ligman, Kris. 2014. “August 31st.” Critical Distance, August 31. http://www.critical-distance.com/2014/08/31/august-31st/.
    • Massanari, Adrienne. 2015. “#Gamergate and The Fappening: How Reddit’s Algorithm, Governance, and Culture Support Toxic Technocultures.” New Media & Society, OnlineFirst, October 9.
    • Mirowski, Philip. 2013. Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown. New York: Verso.
    • Murray, Janet. 1997. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. Cambridge, MA: MIT Press.
    • ———. 2004. “From Game-Story to Cyberdrama.” In First Person: New Media as Story, Performance, and Game, edited by Noah Wardrip-Fruin and Pat Harrigan, 1–11. Cambridge, MA: MIT Press.
    • Nakamura, Lisa. 2009. “Don’t Hate the Player, Hate the Game: The Racialization of Labor in World of Warcraft.” Critical Studies in Media Communication 26, no. 2: 128–44.
    • The New Inquiry. 2014. “TNI Syllabus: Gaming and Feminism.” New Inquiry, September 2. http://thenewinquiry.com/features/tni-syllabus-gaming-and-feminism/.
    • Said, Edward W. 1994. “Identity, Authority, and Freedom: The Potentate and the Traveler.” boundary 2 21, no. 3: 1–18.
    • Shaw, Adrienne. 2014. Gaming at the Edge: Sexuality and Gender at the Margins of Gamer Culture. Minneapolis: University of Minnesota Press.
    • Wark, McKenzie. 2007. Gamer Theory. Cambridge, MA: Harvard University Press.
    • Williams, Ian. 2014. “Death to the Gamer.” Jacobin, September 9. https://www.jacobinmag.com/2014/09/death-to-the-gamer/.
    • Wingfield, Nick. 2014. “Feminist Critics of Video Games Facing Threats in ‘GamerGate’ Campaign.” New York Times, October 15. http://www.nytimes.com/2014/10/16/technology/gamergate-women-video-game-threats-anita-sarkeesian.html.
