b2o

Reviews and analysis of scholarly books about digital technology and culture, as well as of articles, legal proceedings, videos, social media, digital humanities projects, and other emerging digital forms, offered from a humanist perspective, in which our primary intellectual commitment is to the deeply embedded texts, figures, themes, and politics that constitute human culture, regardless of the medium in which they occur.

  • Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release

    By Audrey Watters

    ~

    This talk was delivered at Virginia Commonwealth University today as part of a seminar co-sponsored by the Departments of English and Sociology and the Media, Art, and Text PhD Program. The slides are also available here.

    Thank you very much for inviting me here to speak today. I’m particularly pleased to be speaking to those from Sociology and those from the English and those from the Media, Art, and Text departments, and I hope my talk can walk the line between and among disciplines and methods – or piss everyone off in equal measure. Either way.

    This is the last public talk I’ll deliver in 2016, and I confess I am relieved (I am exhausted!) as well as honored to be here. But when I finish this talk, my work for the year isn’t done. No rest for the wicked – ever, but particularly in the freelance economy.

    As I have done for the past six years, I will spend the rest of November and December publishing my review of what I deem the “Top Ed-Tech Trends” of the year. It’s an intense research project that usually tops out at about 75,000 words, written over the course of four to six weeks. I pick ten trends and themes in order to look closely at the recent past, the near-term history of education technology. Because of the amount of information that is published about ed-tech – the amount of information, its irrelevance, its incoherence, its lack of context – it can be quite challenging to keep up with what is really happening in ed-tech. And just as importantly, what is not happening.

    So that’s what I try to do. And I’ll boast right here – no shame in that – no one else does as in-depth or as thorough a job as I do, certainly no one who is entirely independent from venture capital, corporate or institutional backing, or philanthropic funding. (Of course, if you look for those education technology writers who are independent from venture capital, corporate or institutional backing, or philanthropic funding, there is pretty much only me.)

    The stories that I write about the “Top Ed-Tech Trends” are the antithesis of most articles you’ll see about education technology that invoke “top” and “trends.” For me, still framing my work that way – “top trends” – is a purposeful rhetorical move to shed light, to subvert, to offer a sly commentary of sorts on the shallowness of what passes as journalism, criticism, analysis. I’m not interested in making quickly thrown-together lists and bullet points. I’m not interested in publishing clickbait. I am interested nevertheless in the stories – shallow or sweeping – that we tell and spread about technology and education technology, about the future of education technology, about our technological future.

    Let me be clear, I am not a futurist – even though I’m often described as “ed-tech’s Cassandra.” The tagline of my website is “the history of the future of education,” and I’m much more interested in chronicling the predictions that others make, have made about the future of education than I am in writing predictions of my own.

    One of my favorites: “Books will soon be obsolete in schools,” Thomas Edison said in 1913. Any day now. Any day now.

    Here are a couple of more recent predictions:

    “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.” – that’s Sebastian Thrun, best known perhaps for his work at Google on the self-driving car and as a co-founder of the MOOC (massive open online course) startup Udacity. The quotation is from 2012.

    And from 2013, by Harvard Business School professor, author of the book The Innovator’s Dilemma, and popularizer of the phrase “disruptive innovation,” Clayton Christensen: “In fifteen years from now, half of US universities may be in bankruptcy. In the end I’m excited to see that happen. So pray for Harvard Business School if you wouldn’t mind.”

    Pray for Harvard Business School. No. I don’t think so.

    Both of these predictions are fantasy. Nightmarish, yes. But fantasy. Fantasy about a future of education. It’s a powerful story, but not a prediction made based on data or modeling or quantitative research into the growing (or shrinking) higher education sector. Indeed, according to the latest statistics from the Department of Education – now granted, this is from the 2012–2013 academic year – there are 4726 degree-granting postsecondary institutions in the United States. A 46% increase since 1980. There are, according to another source (non-governmental and less reliable, I think), over 25,000 universities in the world. This number is increasing year-over-year as well. So to predict that the vast vast majority of these schools (save Harvard, of course) will go away in the next decade or so or that they’ll be bankrupt or replaced by Silicon Valley’s version of online training is simply wishful thinking – dangerous, wishful thinking from two prominent figures who will benefit greatly if this particular fantasy comes true (and not just because they’ll get to claim that they predicted this future).

    Here’s my “take home” point: if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

    Fantasy. Fortune-telling. Or, as capitalism prefers to call it, “market research.”

    “Market research” involves fantastic stories of future markets. These predictions are often accompanied with a press release touting the size that this or that market will soon grow to – how many billions of dollars schools will spend on computers by 2020, how many billions of dollars of virtual reality gear schools will buy by 2025, how many billions of dollars schools will spend on robot tutors by 2030, how many billions of dollars companies will spend on online training by 2035, how big the coding bootcamp market will be by 2040, and so on. The markets, according to the press releases, are always growing. Fantasy.

    In 2011, the analyst firm Gartner predicted that annual tablet shipments would exceed 300 million units by 2015. Half of those, the firm said, would be iPads. IDC estimates that the total number of shipments in 2015 was actually around 207 million units. Apple sold just 50 million iPads. That’s not even the best worst Gartner prediction. In October of 2006, Gartner said that Apple’s “best bet for long-term success is to quit the hardware business and license the Mac to Dell.” Less than three months later, Apple introduced the iPhone. The very next day, Apple shares hit $97.80, an all-time high for the company. By 2012 – yes, thanks to its hardware business – Apple’s stock had risen to the point that the company was worth a record-breaking $624 billion.

    But somehow, folks – including many, many in education and education technology – still pay attention to Gartner. They still pay Gartner a lot of money for consulting and forecasting services.

    People find comfort in these predictions, in these fantasies. Why?

    Gartner is perhaps best known for its “Hype Cycle,” a proprietary graphic presentation that claims to show how emerging technologies will be adopted.

    According to Gartner, technologies go through five stages: first, there is a “technology trigger.” As the new technology emerges, a lot of attention is paid to it in the press. Eventually it reaches the second stage: the “peak of inflated expectations.” So many promises have been made about this technological breakthrough. Then, the third stage: the “trough of disillusionment.” Interest wanes. Experiments fail. Promises are broken. As the technology matures, the hype picks up again, more slowly – this is the “slope of enlightenment.” Eventually the new technology becomes mainstream – the “plateau of productivity.”

    It’s not that hard to identify significant problems with the Hype Cycle, not least of which is that it’s not a cycle. It’s a curve. It’s not a particularly scientific model. It demands that technologies always move forward along it.

    Gartner says its methodology is proprietary – which is code for “hidden from scrutiny.” Gartner says, rather vaguely, that it relies on scenarios and surveys and pattern recognition to place technologies on the line. But most of the time when Gartner uses the word “methodology,” it is trying to signify “science,” and what it really means is “expensive reports you should buy to help you make better business decisions.”

    Can it really help you make better business decisions? It’s just a curve with some technologies plotted along it. The Hype Cycle doesn’t help explain why technologies move from one stage to another. It doesn’t account for technological precursors – new technologies rarely appear out of nowhere – or political or social changes that might prompt or preclude adoption. And in the end it is simply too optimistic, unreasonably so, I’d argue. No matter how dumb or useless a new technology is, according to the Hype Cycle at least, it will eventually become widely adopted. Where would you plot the Segway, for example? (In 2008, ever hopeful, Gartner insisted that “This thing certainly isn’t dead and maybe it will yet blossom.” Maybe it will, Gartner. Maybe it will.)

    And maybe this gets to the heart of why I’m not a futurist. I don’t share this belief in an increasingly technological future; I don’t believe that more technology means the world gets “more better.” I don’t believe that more technology means that education gets “more better.”

    Every year since 2004, the New Media Consortium, a non-profit organization that advocates for new media and new technologies in education, has issued its own forecasting report, the Horizon Report, naming a handful of technologies that, as the name suggests, it contends are “on the horizon.”

    Unlike Gartner, the New Media Consortium is fairly transparent about how this process works. The organization invites various “experts” to participate in the advisory board that, throughout the course of each year, works on assembling its list of emerging technologies. The process relies on the Delphi method, whittling down a long list of trends and technologies by a process of ranking and voting until six key trends, six emerging technologies remain.

    Disclosure/disclaimer: I am a folklorist by training. The last time I took a class on “methods” was, like, 1998. And admittedly I never learned about the Delphi method – what the New Media Consortium uses for this research project – until I became a scholar of education technology looking into the Horizon Report. As a folklorist, of course, I did catch the reference to the Oracle of Delphi.

    Like so much of computer technology, the roots of the Delphi method are in the military, developed during the Cold War to forecast technological developments that the military might use and that the military might have to respond to. The military wanted better predictive capabilities. But – and here’s the catch – it wanted to identify technology trends without being caught up in theory. It wanted to identify technology trends without developing models. How do you do that? You gather experts. You get those experts to consensus.

    So here is the consensus from the past twelve years of the Horizon Report for higher education. These are the technologies it has identified that are between one and five years from mainstream adoption:

    It’s pretty easy, as with the Gartner Hype Cycle, to look at these predictions and note that they are almost all wrong in some way or another.

    Some are wrong because, say, the timeline is a bit off. The Horizon Report said in 2010 that “open content” was less than a year away from widespread adoption. I think we’re still inching towards that goal – admittedly “open textbooks” have seen a big push at the federal and at some state levels in the last year or so.

    Some of these predictions are just plain wrong. Virtual worlds in 2007, for example.

    And some are wrong because, to borrow a phrase from the theoretical physicist Wolfgang Pauli, they’re “not even wrong.” Take “collaborative learning,” for example, which this year’s K–12 report posits as a mid-term trend. Like, how would you argue against “collaborative learning” as occurring – now or some day – in classrooms? As a prediction about the future, it is not even wrong.

    But wrong or right – that’s not really the problem. Or rather, it’s not the only problem even if it is the easiest critique to make. I’m not terribly concerned about the accuracy of the predictions about the future of education technology that the Horizon Report has made over the last decade. But I do wonder how these stories influence decision-making across campuses.

    What might these predictions – this history of the future – tell us about the wishful thinking surrounding education technology and about the direction that the people the New Media Consortium views as “experts” want the future to take? What can we learn about the future by looking at the history of our imagining about education’s future? What role does powerful ed-tech storytelling (also known as marketing) play in shaping that future? Because remember: to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.

    It’s both convenient and troubling, then, that these forward-looking reports act as though they have no history of their own; they purposefully minimize or erase their own past. Each year – and I think this is what irks me most – the NMC fails to look back at what it had predicted just the year before. It never revisits older predictions. It never mentions that they even exist. Gartner too removes technologies from the Hype Cycle each year with no explanation for what happened, no explanation as to why trends suddenly appear and disappear and reappear. These reports only look forward, with no history to ground their direction in.

    I understand why these sorts of reports exist, I do. I recognize that they are rhetorically useful to certain people in certain positions making certain claims about “what to do” in the future. You can write in a proposal that, “According to Gartner… blah blah blah.” Or “The Horizon Report indicates that this is one of the most important trends in coming years, and that is why we need to commit significant resources – money and staff – to this initiative.” But then, let’s be honest, these reports aren’t about forecasting a future. They’re about justifying expenditures.

    “The best way to predict the future is to invent it,” computer scientist Alan Kay once famously said. I’d wager that the easiest way is just to make stuff up and issue a press release. I mean, really. You don’t even need the pretense of a methodology. Nobody is going to remember what you predicted. Nobody is going to remember if your prediction was right or wrong. Nobody – certainly not the technology press, which is often painfully unaware of any history, near-term or long ago – is going to take you to task. This is particularly true if you make your prediction vague – like “within our lifetime” – or set your target date just far enough in the future – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Let’s consider: is there something about the field of computer science in particular – and its ideological underpinnings – that makes it more prone to encourage, embrace, espouse these sorts of predictions? Is there something about Americans’ faith in science and technology, about our belief in technological progress as a signal of socio-economic or political progress, that makes us more susceptible to take these predictions at face value? Is there something about our fears and uncertainties – and not just now, days before this Presidential Election where we are obsessed with polls, refreshing Nate Silver’s website obsessively – that makes us prone to seek comfort, reassurance, certainty from those who can claim that they know what the future will hold?

    “Software is eating the world,” investor Marc Andreessen pronounced in a Wall Street Journal op-ed in 2011. “Over the next 10 years,” he wrote, “I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.” Buy stock in technology companies was really the underlying message of Andreessen’s op-ed; this isn’t another tech bubble, he wanted to reassure investors. But many in Silicon Valley have interpreted this pronouncement – “software is eating the world” – as an affirmation and an inevitability. I hear it repeated all the time – “software is eating the world” – as though, once again, repeating things makes them true or makes them profound.

    If we believe that, indeed, “software is eating the world,” that we are living in a moment of extraordinary technological change, that we must – according to Gartner or the Horizon Report – be ever-vigilant about emerging technologies, that these technologies are contributing to uncertainty, to disruption, then it seems likely that we will demand a change in turn to our educational institutions (to lots of institutions, but let’s just focus on education). This is why this sort of forecasting is so important for us to scrutinize – to do so quantitatively and qualitatively, to look at methods and at theory, to ask who’s telling the story and who’s spreading the story, to listen for counter-narratives.

    This technological change, according to some of the most popular stories, is happening faster than ever before. It is creating an unprecedented explosion in the production of information. New information technologies, so we’re told, must therefore change how we learn – change what we need to know, how we know, how we create and share knowledge. Because of the pace of change and the scale of change and the locus of change (that is, “Silicon Valley” not “The Ivory Tower”) – again, so we’re told – our institutions, our public institutions can no longer keep up. These institutions will soon be outmoded, irrelevant. Again – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    These forecasting reports, these predictions about the future make themselves necessary through this powerful refrain, insisting that technological change is creating so much uncertainty that decision-makers need to be ever vigilant, ever attentive to new products.

    As Neil Postman and others have cautioned us, technologies tend to become mythic – unassailable, God-given, natural, irrefutable, absolute. So it is predicted. So it is written. Techno-scripture, to which we hand over a certain level of control – to the technologies themselves, sure, but just as importantly to the industries and the ideologies behind them. Take, for example, the founding editor of the technology trade magazine Wired, Kevin Kelly. His 2010 book was called What Technology Wants, as though technology is a living being with desires and drives; the title of his 2016 book, The Inevitable. We humans, in this framework, have no choice. The future – a certain flavor of technological future – is pre-ordained. Inevitable.

    I’ll repeat: I am not a futurist. I don’t make predictions. But I can look at the past and at the present in order to dissect stories about the future.

    So is the pace of technological change accelerating? Is society adopting technologies faster than it’s ever done before? Perhaps it feels like it. It certainly makes for a good headline, a good stump speech, a good keynote, a good marketing claim, a good myth. But the claim starts to fall apart under scrutiny.

    This graph comes from an article in the online publication Vox that includes a couple of those darling made-to-go-viral videos of young children using “old” technologies like rotary phones and portable cassette players – highly clickable, highly sharable stuff. The visual argument in the graph: the number of years it takes for one quarter of the US population to adopt a new technology has been shrinking with each new innovation.

    But the data is flawed. Some of the dates given for these inventions are questionable at best, if not outright inaccurate. If nothing else, it’s not so easy to pinpoint the exact moment, the exact year when a new technology came into being. There often are competing claims as to who invented a technology and when, for example, and there are early prototypes that may or may not “count.” James Clerk Maxwell did publish A Treatise on Electricity and Magnetism in 1873. Alexander Graham Bell made his famous telephone call to his assistant in 1876. Guglielmo Marconi did file his patent for radio in 1897. John Logie Baird demonstrated a working television system in 1926. The MITS Altair 8800, an early personal computer that came as a kit you had to assemble, was released in 1975. But Martin Cooper, a Motorola exec, made the first mobile telephone call in 1973, not 1983. And the Internet? The first ARPANET link was established between UCLA and the Stanford Research Institute in 1969. The Internet was not invented in 1991.

    So we can reorganize the bar graph. But it’s still got problems.

    The Internet did become more privatized, more commercialized around that date – 1991 – and thanks to companies like AOL, a version of it became more accessible to more people. But if you’re looking at when technologies became accessible to people, you can’t use 1873 as your date for electricity, you can’t use 1876 as your year for the telephone, and you can’t use 1926 as your year for the television. It took years for the infrastructure of electricity and telephony to be built, for access to become widespread; and subsequent technologies, let’s remember, have simply piggy-backed on these existing networks. Our Internet service providers today are likely telephone and TV companies; our houses are already wired for new WiFi-enabled products and predictions.

    Economic historians who are interested in these sorts of comparisons of technologies and their effects typically set the threshold at 50% – that is, how long does it take after a technology is commercialized (not simply “invented”) for half the population to adopt it. This way, you’re not only looking at the economic behaviors of the wealthy, the early-adopters, the city-dwellers, and so on (but to be clear, you are still looking at a particular demographic – the privileged half).

    And that changes the graph again:

    How many years do you think it’ll be before half of US households have a smart watch? A drone? A 3D printer? Virtual reality goggles? A self-driving car? Will they? Will it be fewer years than 9? I mean, it would have to be if, indeed, “technology” is speeding up and we are adopting new technologies faster than ever before.

    Some of us might adopt technology products quickly, to be sure. Some of us might eagerly buy every new Apple gadget that’s released. But we can’t claim that the pace of technological change is speeding up just because we personally go out and buy a new iPhone every time Apple tells us the old model is obsolete. Removing the headphone jack from the latest iPhone does not mean “technology changing faster than ever,” nor does showing how headphones have changed since the 1970s. None of this is really a reflection of the pace of change; it’s a reflection of our disposable income and an ideology of obsolescence.

    Some economic historians like Robert J. Gordon actually contend that we’re not in a period of great technological innovation at all; instead, we find ourselves in a period of technological stagnation. The changes brought about by the development of information technologies in the last 40 years or so pale in comparison, Gordon argues (and this is from his recent book The Rise and Fall of American Growth: The US Standard of Living Since the Civil War), to those “great inventions” that powered massive economic growth and tremendous social change in the period from 1870 to 1970 – namely electricity, sanitation, chemicals and pharmaceuticals, the internal combustion engine, and mass communication. But that doesn’t jibe with “software is eating the world,” does it?

    Let’s return briefly to those Horizon Report predictions again. They certainly reflect this belief that technology must be speeding up. Every year, there’s something new. There has to be. That’s the purpose of the report. The horizon is always “out there,” off in the distance.

    But if you squint, you can see each year’s report also reflects a decided lack of technological change. Every year, something is repeated – perhaps rephrased. And look at the predictions about mobile computing:

    • 2006 – the phones in their pockets
    • 2007 – the phones in their pockets
    • 2008 – oh crap, we don’t have enough bandwidth for the phones in their pockets
    • 2009 – the phones in their pockets
    • 2010 – the phones in their pockets
    • 2011 – the phones in their pockets
    • 2012 – the phones too big for their pockets
    • 2013 – the apps on the phones too big for their pockets
    • 2015 – the phones in their pockets
    • 2016 – the phones in their pockets

    This hardly makes the case for technological speeding up, for technology changing faster than it’s ever changed before. But that’s the story that people tell nevertheless. Why?

    I pay attention to this story, as someone who studies education and education technology, because I think these sorts of predictions, these assessments about the present and the future, frequently serve to define, disrupt, destabilize our institutions. This is particularly pertinent to our schools which are already caught between a boundedness to the past – replicating scholarship, cultural capital, for example – and the demands they bend to the future – preparing students for civic, economic, social relations yet to be determined.

    But I also pay attention to these sorts of stories because there’s that part of me that is horrified at the stuff – predictions – that people pass off as true or as inevitable.

    “65% of today’s students will be employed in jobs that don’t exist yet.” I hear this statistic cited all the time. And it’s important, rhetorically, that it’s a statistic – that gives the appearance of being scientific. Why 65%? Why not 72% or 53%? How could we even know such a thing? Some people cite this as a figure from the Department of Labor. It is not. I can’t find its origin – but it must be true: a futurist said it in a keynote, and the video was posted to the Internet.

    The statistic is particularly amusing when quoted alongside one of the many predictions we’ve been inundated with lately about the coming automation of work. In 2014, The Economist asserted that “nearly half of American jobs could be automated in a decade or two.” “Before the end of this century,” Wired Magazine’s Kevin Kelly announced earlier this year, “70 percent of today’s occupations will be replaced by automation.”

    Therefore the task for schools – and I hope you can start to see where these different predictions start to converge – is to prepare students for a highly technological future, a future that has been almost entirely severed from the systems and processes and practices and institutions of the past. And if schools cannot conform to this particular future, then “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Now, I don’t believe that there’s anything inevitable about the future. I don’t believe that Moore’s Law – that the number of transistors on an integrated circuit doubles every two years and therefore computers are always exponentially smaller and faster – is actually a law. I don’t believe that robots will take, let alone need take, all our jobs. I don’t believe that YouTube has rendered school irrevocably out-of-date. I don’t believe that technologies are changing so quickly that we should hand over our institutions to entrepreneurs, privatize our public sphere for techno-plutocrats.

    I don’t believe that we should cheer Elon Musk’s plans to abandon this planet and colonize Mars – he’s predicted he’ll do so by 2026. I believe we stay and we fight. I believe we need to recognize this as an ego-driven escapist evangelism.

    I believe we need to recognize that predicting the future is a form of evangelism as well. Sure, it gets couched in terms of science, but it is underwritten by global capitalism. It’s a story – a story that then takes on these mythic proportions, insisting that it is unassailable, unverifiable, but true.

    The best way to invent the future is to issue a press release. The best way to resist this future is to recognize that, once you poke at the methodology and the ideology that underpins it, a press release is all that it is.

    A special thanks to Tressie McMillan Cottom and David Golumbia for organizing this talk. And to Mike Caulfield for always helping me hash out these ideas.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

  • Elizabeth Losh — Hiding Inside the Magic Circle: Gamergate and the End of Safe Space

    by Elizabeth Losh, The College of William and Mary

    The Gamergate controversy of recent years has brought renewed public attention to issues around online misogyny, as feminist game developers, critics, scholars, and fans of independent video gaming have been targeted by very intense campaigns of digital harassment that seem to threaten their fundamental rights to personal privacy, bodily safety, and sexual agency. Feminists under attack by users of the hashtag #GamerGate complain of being silenced, as they report being disciplined for imagined infractions of supposed sexual, social, journalistic, and ludic norms in computational culture with punishing messages of censure, ridicule, exclusion, and violence. As noted by the mainstream news media, extremely aggressive tactics have been deployed, including leaking women’s sensitive private information – such as unlisted addresses and social security numbers – to the web (a practice known as “doxxing”), placing false reports with law enforcement or emergency first responders (a practice known as “swatting”), and highly personalized stalking with rapid escalations of threats of graphic violence that are often sexualized as rape or racialized as lynching. Although it may be important for the eloquent first-person testimony of the terrorized women themselves to be given priority as speech acts that command attention in resisting prevailing misogyny, the women’s antagonists often are allowed to remain invisible. Furthermore, allies presuming to advocate for the feminist victims of Gamergate may not adequately honor their stated wishes for peace, privacy, and closure that those experiencing online violence may express (Quinn 2015). This essay attempts to examine the larger discursive context of Gamergate and why hardcore gamers who were fans of AAA videogames – often with military storylines and first-person shooter game mechanics – constructed a seemingly illogical and paranoid explanatory theory about so-called “social justice warriors” (Bokhari et al. 2015) or “SJWs,” pursuing unfair advantage to sway the game industry.

    How do we understand how Gamergaters’ claims for noninterference and sovereignty in game worlds and online forums function alongside their claims for no-holds-barred investigations and public debates? Common rhetorical tactics deployed by Gamergaters include using rights-based language to further this specific variant of the men’s rights movement (Esmay 2014) and making appeals to the values of a supposedly rational public sphere (MSMPlan 2015). As these hardcore gaming fans deny the materiality, affect, embodiment, labor, and situatedness of new media, they also affirm positive notions about the exceptionalism of a realm defined – in Nicholas Negroponte’s terms – by bits rather than atoms. Gamergaters are particularly vehement in denying that “online violence” is a possibility with tweets such as “>violence >online pick one” and “will you please point me to the online killing fields where all the bodies from violence online are kept?” (Wernimont 2015). The Gamergate vision of digital culture is one of disembodied and immaterial interactions in which emotional harm is considered to be nonviolent.

    According to Gamergate accounts, the assumption that hardcore gamers representing masculine white privilege were under attack was also apparently buttressed by a number of online articles by game journalists suggesting that the species was endangered and soon to be extinct. Gamers were declared “over” (Alexander 2014), at their “end” (Golding 2014), or facing the “death” of their collective identity (Plunkett 2014). The arguments made for years by feminist game collectives for pursuing the large market share in lower-status “casual” games, often played by women, had finally seemed to create inroads for independent developers. At the same time, Gamergaters described their defensive position as a response to what they characterized as a feminist “incursion” or “invasion” of gaming, conceptualized as a substantive attack on or threat to gamers. So-called “men’s rights” proponents – who may characterize themselves as “Men’s Human Rights Activists” – differentiated themselves from the distributed and heterogeneous population of gamers but also proclaimed that “the same people attacking Gamergate have been attacking us for years, using exactly the same tactics” (Esmay 2014). 
According to Breitbart columnist Yiannopoulos (2014a), “cultural warriors” arrived on the scene of gaming like “genocidal, psychopathic aliens in Independence Day;” these “social justice warriors” allegedly attempted to colonize a diverse community, but their “killjoy” advances were repelled and defenders declared them “not welcome in the gaming community.” According to this columnist, supposedly “politeness and persistence” had guaranteed victory in “the culture wars against guilt-mongerers, nannies, authoritarians and far-Left agitators.” While Sara Ahmed (2010) has explicitly called for self-identified “feminist killjoys” to disrupt the perpetuation of patriarchal false consciousness and the enforcement of positive affect in society, the perceived opponents of Gamergate are often cast as the aggressors despite what may be deep desires to participate in the gaming communities that exclude them.

    Decades before Gamergate, the Dutch historian and theorist of play Johan Huizinga (2014) described what he called the “magic circle” of the temporary world constituted by a game, which appears to function as an isolated “consecrated spot” within which “special rules obtain” for performances apart from everyday concerns (10). Gamergaters often use similar terminology to discuss how game spaces should serve as a refuge from real-world behavioral constraints and the restrictions of social roles, as in the case of one Breitbart blogger seeking to exclude “angry feminists” and “unethical journalists” from interference with game play.

    Gamers, as dozens of readers have told me in the relatively short time I have been covering the controversy now called #GamerGate, play games to escape the frustrations and absurdities of everyday life. That’s why they object so strongly to having those frustrations injected into their online worlds. The war in the gaming industry isn’t about right versus left, or tolerance versus bigotry: it’s between those who leverage video games to fight proxy wars about other things, introducing unwanted and unwarranted tension and misery, and those who simply want to enjoy themselves. (Yiannopoulos 2014a)

    Gamergate advocates thus claim video games as arenas where gamers can assert their sovereignty and self-determination, in spaces that can’t be “leveraged” or annexed to “fight proxy wars” by non-gamer outsiders.

    According to Huizinga (2014), the arena of game play is characterized by the freedom of voluntary participation, disinterested behavior, and an opposition to serious conduct. Similar criteria are also often presented as premises for action in the rhetoric of Gamergate enthusiasts in their comments on various sites for public debate. For example, feminist game developers and critics may be accused of coercing and manipulating potential allies who are journalists through sexual liaisons, romantic promises, or appeals to social justice that invoke guilt and shame. Feminist opponents of Gamergaters are also characterized on sites such as Breitbart as “self-promoters” and “opportunists” and labeled as “egotistical” people who “beg for sympathy and cash” (Yiannopoulos 2014b). Thus, according to the logic of free choice, feminist “social justice-oriented art” in digital culture is aimed at “robbing players of agency and individualism” in every possible kind of engagement (Yiannopoulos 2014b).

    Personal freedom and a separation from material interests or a profit motive are often cited as ethical values shared by Gamergate, although many of its tactics are not at all solemn or high-minded. Active Gamergaters on the Escapist and 8chan emphasize their own diverse and distributed structure, and these anarchic swarms of participants take action “for the lulz,” much as members of Anonymous and 4chan have engaged in outing and calling out campaigns (Coleman 2014). Images of feminist gamers are altered with editing software, phrases like “online violence” are mocked, and fake identities are manufactured with puns and inside jokes. For example, in a crowd-funding effort to promote women in games who disavowed feminist “SJWs,” Gamergate forum members created an elaborate green-eyed and hoodie-wearing fictional persona intended to represent a pro-Gamergate libertarian “everywoman.” The avatar dubbed “Vivian James” wears the four-leafed clover of 4chan, “tough-loves video games,” and “loathes dishonesty and hypocrisy” (“The Birth of Vivian” 2015).

    While Gamergaters emphasize “personal responsibility” and “individual agency” (Yiannopoulos 2014b) as values, feminist critics tend to emphasize interdependence and states of being always-already subject to the coercions of others. In Huizinga’s (2014) terms, feminists inside the magic circle may be perceived as “spoil-sports” who must be “ejected” from the “community,” because they are attempting to break the magic world by failing to acknowledge its misogynistic conventions (11-12). As Anastasia Salter (2016) notes, in Huizinga’s analysis the spoil-sport is most visible in “boys’ games,” thereby establishing solidarity around youthful masculinity as the norm.

    By discussing misogyny in different venues for conversation among networked publics in game forums, blogs, or vlogging communities, and even within live multi-player gaming itself, feminists are cast as a disruptive presence. For Gamergaters, social justice warriors are aggressors to be repulsed from the magic circles of game worlds in order to reclaim these spaces, restore their proper exceptional status, and secure them from real-world incursions.

    Of course, the concept of “safe space” has been central to the history of the women’s liberation movement and its associated consciousness-raising efforts. After all, feminists have reasoned that safe space might be necessary to explore intimate issues about sexuality and reproductive health – which might even include techniques for gynecological self-examination championed by foundational texts like Our Bodies, Ourselves – and safe space would also be needed to share confidences about personal histories of rape, domestic violence, and other forms of gendered trauma. How safe space is constituted can be developed along a number of different axes. For example, as awareness about “microaggressions” – a term used to describe the automatic or unconscious utterance of subtle insults (Solorzano, Ceja, & Yosso 2000) – has proliferated, participants at feminist events may be asked to be mindful of their own assumptions, privileges, and power relations in social gatherings. The full sensorium of potential kinds of assault may also be invoked in defining safe spaces, so those speaking loudly or wearing scent may be prohibited from these activities to protect those intolerant, averse, or allergic to certain stimuli.

    Feminists themselves have been reevaluating the assumed need for safe space for a variety of reasons. While media outlets grappling with the concept of “trigger warnings” may characterize any special treatment of vulnerable individuals as coddling or “hiding from scary ideas” (Shulevitz 2015), feminists themselves are often concerned about how the gestures of exclusion mandated by protective impulses enforce particular norms counter to the goal of empowerment. Some argue that “brave spaces” that encourage public acts of asserting identity or declaring solidarity may be more productive than private “safe spaces” (Fox and Fleischer 2004). Homogeneous safe spaces designed for the security of cisgendered whites may be criticized as excluding transgender people (Browne 2009) or people of color (Halberstam 2014). As Betty Sasaki (2002) observes, “safety” can become “the code word for the absence of conflict, a tacit and seductive invitation to collude with the unspoken ideological machinery of the institutional family” (47). And Donadey (2009) points out the irony “that radical feminist pedagogy tends to replicate the assumptions of the bourgeois concept of the public sphere” (214).

    In addition to using the #Gamergate and #SJW (for “social justice warrior”) hashtags on social media platforms such as Twitter, Gamergate adherents frequently use #NotYourShield, which signals that feminists shouldn’t be shielded from criticism merely because they claim alliances with underrepresented groups, such as women or minorities, since members of these groups might not identify with feminism or might not feel exploited, disenfranchised, or excluded from hardcore gaming communities. #NotYourShield allies of Gamergate may embrace the quintessential hardcore gamer identity of AAA titles with military themes, or may indicate that they are content with conventionally feminized casual games played on mobile devices and don’t want to interfere with so-called “real” games. While Gamergaters may protect the borders of their own magic circles, they criticize those who would claim safe spaces for feminist discourse devoid of challenges from opponents. Affixing the #NotYourShield piece of metadata to a message supports Gamergaters’ contention that feminists use the victimization of women and people of color to shield themselves unfairly from rebuttals or tests of truth claims. In videos such as “#NotYourShield – We Are Gamers,” choruses of voices are carefully curated to emphasize “corruption” and “censorship” as features of feminism, and “transparency” and call-out culture as features of Gamergate.

    Although Huizinga’s (2014) magic circle may be more open to public spectatorship than the private sphere of feminist safe space, it is also a zone of exception that is marked off by “secrecy” and “disguise,” according to Homo Ludens (13). Even if the rules for the magic circle are assumed to be uncontested, and the space of play is accepted as apart from the everyday world, the exceptional territory of game play could be a space of less violence (if mockery of authoritarian rulers is tolerated in the case of the Bakhtinian carnivalesque) or more violence (if physical injuries from contact sports are permitted that would normally be prosecuted as assault). Nonetheless, according to Edward Castronova (2007), the membrane of the magic circle “can be considered a shield of sorts, protecting the fantasy world from the outside world. The inner world needs defining and protecting because it is necessary that everyone who goes there adhere to the different set of rules” (147).

    Feminist game critics have begun to question Huizinga’s (2014) concept of a zone of exceptionalism, particularly as the legal, economic, and social consequences of game play are manifested in a variety of “real world” contexts. For example, Mia Consalvo (2009) challenges Castronova’s belief that “fantasy worlds” are a separate domain: “even as he might wish for such spaces, such worlds must inevitably leave the hands of their creators and are then taken up (and altered, bent, modified, extended) by players or users—indicating that the inviolability of the game space is a fiction, as is the magic circle, as pertaining to digital games” (411). Within game spaces of conflict and collaboration, players may bring different agendas into the magic circle, and thus it might be more difficult than Huizinga (or Castronova) imagines to reach consensus about the common rules of play. For example, when a guild of players in World of Warcraft decided to hold a funeral in an area for player-versus-player combat, other participants justified attacking the solemn ceremony in a coordinated raid on the grounds of asserting existing play conventions (Losh 2009). Consalvo further claims that the static, formalist vision of bounded play articulated by Huizinga and his disciples, grounded in structuralist theory, ignores the fact that context is constantly being evaluated by players. Instead of the magic circle, she posits that players “exist or understand ‘reality’ through recourse to various frames” (415).

    For women, queer and transgender persons, and people of color who identify as gamers, neither magic circle nor safe space often seem descriptive of the harsh settings of their game play experiences. As Lisa Nakamura (2012) observes, playing as a woman, a person of color, or a queer person requires extraordinary game skills and talent at a level of hyper-accomplishment because of the extremely rigorous “difficulty setting” of playing in an identity position other than straight white male. Unfortunately, to be an exceptional individual in an exceptional space is often punished rather than rewarded. Moreover, as a woman of color, Shonte Daniels (2014) has insisted that “gaming never was a safe space for women” because “their identity makes them vulnerable to threats or harassment.” However, she also speculates that Gamergate may prove to be “both a blessing and a curse,” given how much attention to online misogyny has been generated by the intensity and egregiousness of Gamergate behavior.

    Many date the Gamergate controversy from fall 2014 – when harassment of dozens of feminists in the videogame industry, including game developers Zoë Quinn and Brianna Wu and cultural critic Anita Sarkeesian, made headlines. However, online misogyny and gender-based aggression have had a long history in digital culture that goes back to bulletin boards, MOOs, and MUDs and the existence of virtual rape in early forms of cyberspace (Dibbell 1998). To coordinate the current campaign of harassment, IRC channels and online forums such as Reddit, 4chan, and 8chan were used by an anonymous and amorphous group that came to be represented by the Twitter hashtag #GamerGate after actor Adam Baldwin deployed a familiar suffix associated with prominent political cover-ups. According to the Wikipedia entry, Gamergate “has been described as a manifestation of a culture war over gaming culture diversification, artistic recognition and social criticism of video games, and the gamer social identity. Some of the people using the Gamergate hashtag allege collusion among feminists, progressives, journalists and social critics, which they believe is the cause of increasing social criticism in video game reviews” (“Gamergate Controversy” 2015).

    It is worth noting that Wikipedia’s handling of its own distributed labor practices defining Gamergate has had a contentious history that included a personal invitation to Gamergaters from Wikipedia founder Jimmy Wales to contribute to improving the Gamergate article (Wales 2014), a pointed rejection of financial contributions to Wikipedia from Gamergaters (“So I Decided to Email Jimbo” 2015), and a defense of banning Wikipedia editors perceived as biased against Gamergate (Beaudette 2015). Ironically, during this intense period of engagement with the “toxic” participants of Gamergate eventually dismissed by Wales, Wikipedia often deployed a rhetoric about volunteerism, disinterested conduct, and playing by a neutral set of rules that paralleled similar rhetorical appeals from Gamergaters.

    Attention to this recent controversy – about who is a gamer and what is a game – has already generated a literature of scholarly response that focuses, as this essay does, on Gamergate rhetoric itself. Shira Chess and Adrienne Shaw’s (2015) essay, “A Conspiracy of Fishes,” analyzes how a particular cultural moment in which “masculine gaming culture became aware of and began responding to feminist game scholars” produced conspiratorial discourses with a specific internal logic that shouldn’t be dismissed as nonsensical:

    It is less useful to consider the validity of a conspiracy in terms of actual persecution, and is more potent if we look at it in terms of a combination of perceived persecution and an examination of the anxieties that the conspiracy is articulating. From this perspective, we can look at gaming culture as a somewhat marginalized group: For years those who have participated in gaming culture have defended their interests in spite of claims by popular media and (some) academics blaming it for violence, racism, and sexism. A perceived threat opens a venue for those who feel their culture has been misunderstood—regardless of whether they are the oppressors or the ones being oppressed. It is easy to negate and mark the claims of this group as inconsequential, but it is more powerful to consider the cultural realities that underline those claims. (217)

    As Chess and Shaw point out, the gamer identity may function in the context of other kinds of intersectional identities, in which subjects for whom the personal is political can be imagined as oppressors in one context and the oppressed in another.

    In addition to its primary strategy of constructing a persecution narrative about a marginalized group, Gamergate is also concerned with the secondary strategy of mapping supposed networks of influence across publication venues, media genres, knowledge domains, political spheres, and economic sectors. Such Gamergate infographics seem to have begun with visualizations reminiscent of Wanted posters, in which names and photographs of individual offenders were clustered in particular interest areas. For example, 4chan assembled a list of “SJW Game Journalists” that was republished on Reddit and that goes far beyond the initial allegations of impropriety about game reviewing at Kotaku to target writers at over a dozen other publications.

    As Gamergaters go down the “rabbit hole” of exploring possible connections and exposing hidden networks, they eventually claim political and educational institutions as agents in the conspiracy with a particular focus on DiGRA, the Digital Games Research Association, which was founded in 2003 and holds an international conference each year. One diagram shows the tentacles of DiGRA extending into online venues for gaming news and reviews, such as Kotaku, Gamasutra, and Polygon, as well as mainstream publications with a print tradition, such as The Guardian and TIME, and conference venues for many AAA games, such as the annual Game Developers Conference (GDC), which was founded in 1988 with a focus on fostering more creativity in the industry. Pictures of offender/participants in the network continued to be featured in this denser and more recursive form of network mapping, as though facial recognition would be a key literacy for Gamergaters.

    It is worth noting that many feminists would describe DiGRA as far from being a haven from misogyny, given existing biases in game studies that may privilege academics with ties to computer science, corporate start-ups, or other male-dominated fields. Members of the feminist game collective Ludica have described strong reactions of denial when they declared at DiGRA in 2007 that the “power elite of the game industry is a predominately white, and secondarily Asian, male-dominated corporate and creative elite that represents a select group of large, global publishing companies in conjunction with a handful of massive chain retail distributors” and thus constitutes a “hegemonic” power that “determines which technologies will be deployed, and which will not; which games will be made, and by which designers; which players are important to design for, and which play styles will be supported” (Fron et al. 2007). The rhetoric of the Ludica manifestos about how games and gamers were being defined too rigidly by an industry enamored of AAA titles often ran counter to the origin stories of organizations such as GDC and SIGGRAPH.

    The third key strategy of Gamergaters – in addition to fabricating the persecution narrative and drawing the influence maps – is formulating threats of financial retaliation. If liberal members of the press and academic and professional associations in game studies and game development benefit from a supposed flow of money, social capital, and privileged access to career advancement, libertarian Gamergaters will thwart them with economic threats. This creates a paradoxical dynamic in which Gamergaters both assert an ethos of economic disinterest – because gaming is supposed to be a non-profit/non-wage activity that is separate from the accumulation of capital in the real world – and seek to exercise their collective power to crowdfund sympathizers and to boycott, divest from, and freeze the assets of feminist allies and allied organizations. Advertisers are besieged with consumer complaints about the ethics of reporting in game publications, university employees are reported to administrators with accusations about frittering away public funds, and even donations to Wikipedia are withdrawn by indignant Gamergaters.

    Because feminists supposedly use financial interest as a lever, Gamergaters must also use financial interest to assert the fairness, neutrality, and civility of a rational public sphere, which is tied to their fourth strategy: policing discourse. In regulating language in order to keep it flowing freely in a neoliberal marketplace of ideas, so that the best notions will be the most valued, Gamergaters very explicitly refuse to tolerate what they deem hyperbolic and hysterical feminist “strawmanning” and “insulting.” Insisting that harassers are a statistically insignificant fraction of their movement – a counterfactual account of their power to terrorize targets and dominate channels of communication – Gamergaters deploy language reminiscent of Robert’s Rules of Order as commonly as more stereotypical forms of trolling.

    This does not mean that the campaigns of Gamergate to construct us-and-them narratives, to make explicit and to visualize connections in social networks, to block some financial transactions and facilitate others, and to regulate discourse with structures of rational dialogue, leveling effects, and tone policing are not misogynistic. They defend and enable doxxing, swatting, and stalking behaviors that undermine the very barriers between virtual reality and material existence that are central to their contradictory ideologies of exceptionalism and common jurisdiction.

    The need for nurturing diversity among game players and developers (Fron et al. 2007) has been a work in progress for the better part of a decade, but in the wake of Gamergate, hundreds of prominent signatories who asserted the “right to play games, criticize games and make games without getting harassed or threatened” published an “open letter to the gaming community” (IGDA 2014). The fact that this pointed defense of feminist gamers, critics, and designers also used rights-based language might be instructive for better understanding the discursive context of Gamergate as well.

    The Italian biopolitical philosopher Roberto Esposito (2010, 2011) has theorized that two conflicting modalities of “community” and “immunity” operate when members either accept or resist the obligations of the social contract. Looking at the rhetoric of Gamergaters about the magic circle and how they caricature the rhetoric of feminists about safe space, we see how these oppositions are underexamined, and we can ask why opportunities for reflection and reflexive thinking about intersectionality are being foreclosed.

    Works Cited

    • Ahmed, Sara. 2010. The Promise of Happiness. Durham: Duke University Press.
    • Alexander, Leigh. 2014. “‘Gamers’ Don’t Have to Be Your Audience. ‘Gamers’ Are Over.” Gamasutra, August 28. http://www.gamasutra.com/view/news/224400/Gamers_dont_have_to_be_your_audience_Gamers_are_over.php.
    • Bailey, Moya. 2015. “#transform(ing)DH Writing and Research: An Autoethnography of Digital Humanities and Feminist Ethics.” Digital Humanities Quarterly 9, no. 2.
    • Beaudette, Philippe. 2015. “Civility, Wikipedia, and the Conversation on Gamergate.” Wikimedia Blog. January 27. http://blog.wikimedia.org/2015/01/27/civility-wikipedia-Gamergate/.
    • Bokhari, Allum, and Milo Yiannopoulos. 2015. “Entertainment Industry Says ‘No More’ to Social Justice Warriors.” Breitbart. July 20. http://www.breitbart.com/big-hollywood/2015/07/20/enough-entire-entertainment-industry-says-no-more-to-social-justice-warriors/.
    • Browne, Kath. 2009. “Womyn’s Separatist Spaces: Rethinking Spaces of Difference and Exclusion.” Transactions of the Institute of British Geographers, New Series, 34 (4): 541–56.
    • Castronova, Edward. 2007. Synthetic Worlds: The Business and Culture of Online Games. Chicago: University of Chicago Press.
    • Chess, Shira, and Adrienne Shaw. 2015. “A Conspiracy of Fishes, Or, How We Learned to Stop Worrying About #Gamergate and Embrace Hegemonic Masculinity.” Journal of Broadcasting & Electronic Media 59, no. 1: 208–20.
    • Coleman, Beth. 2011. Hello Avatar: Rise of the Networked Generation. Cambridge, MA: MIT Press.
    • Coleman, E. Gabriella. 2014. Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous. Brooklyn, NY: Verso.
    • Consalvo, Mia. 2009. “There Is No Magic Circle.” Games and Culture 4, no. 4: 408–17.
    • Daniels, Shonte. 2014. “Gaming Was Never a Safe Space for Women.” RH Reality Check. November 4. http://rhrealitycheck.org/article/2014/11/10/gaming-never-safe-space-women/.
    • Dibbell, Julian. 1998. “A Rape in Cyberspace.” http://www.juliandibbell.com/articles/a-rape-in-cyberspace/.
    • Donadey, Anne. 2009. “Negotiating Tensions: Teaching about Race in a Graduate Feminist Classroom.” In Feminist Pedagogy: Looking back to Move Forward, edited by Robbin Crabtree, David Alan Sapp, and Adela C. Licona, 209–29. Baltimore, MD: Johns Hopkins University Press.
    • Esmay, Dean. 2014. “Keeping up with #Gamergate.” A Voice for Men. October 16. https://lockerdome.com/7754206970916417.
    • Esposito, Roberto. 2010. Communitas: The Origin and Destiny of Community. Stanford, Calif.: Stanford University Press.
    • ———. 2011. Immunitas: The Protection and Negation of Life. Cambridge; Malden MA: Polity.
    • Fox, D. L., and C. Fleischer. 2004. “Beginning Words: Toward ‘Brave Spaces’ in English Education.” English Education 37, no. 1: 3–4.
    • Fron, Janine, Tracy Fullerton, Jacquelyn Ford Morie, and Celia Pearce. 2007. “The Hegemony of Play.” In Proceedings, DiGRA: Situated Play, Tokyo, September 24-27, 2007, 309–18. Tokyo, Japan. http://www.digra.org/dl/db/07312.31224.pdf.
    • “Gamergate Controversy.” 2015. Wikipedia, the Free Encyclopedia. https://en.wikipedia.org/w/index.php?title=Gamergate_controversy&oldid=682713753.
    • Golding, Dan. 2014. “The End of Gamers.” Dan Golding. August 28. http://dangolding.tumblr.com/post/95985875943/the-end-of-gamers.
    • Halberstam, Jack. 2014. “You Are Triggering Me! The Neo-Liberal Rhetoric of Harm, Danger and Trauma.” Bully Bloggers. July 5. https://bullybloggers.wordpress.com/2014/07/05/you-are-triggering-me-the-neo-liberal-rhetoric-of-harm-danger-and-trauma/.
    • Huizinga, Johan. 2014. Homo Ludens: A Study of the Play-Element in Culture. Mansfield Centre, CT: Martino Fine Books.
    • “IGDA Developer Satisfaction Survey Summary Report Available – International Game Developers Association (IGDA).” 2015. https://www.igda.org/news/179436/IGDA-Developer-Satisfaction-Survey-Summary-Report-Available.htm (accessed September 23, 2015).
    • Jacobs-Huey, Lanita. 2006. From the Kitchen to the Parlor Language and Becoming in African American Women’s Hair Care. Oxford, UK, and New York, NY: Oxford University Press.
    • Koebler, Jason. 2015. “Dear Gamergate: Please Stop Stealing Our Shit.” Motherboard. http://motherboard.vice.com/read/dear-Gamergate-please-stop-stealing-our-shit (accessed September 24, 2015).
    • Levmore, Saul, and Martha Craven Nussbaum. 2010. The Offensive Internet: Speech, Privacy, and Reputation. Cambridge, MA: Harvard University Press.
    • Losh, Elizabeth. 2009. “Regulating Violence in Virtual Worlds: Theorizing Just War and Defining War Crimes in World of Warcraft.” Pacific Coast Philology 44, no. 2: 159–72.
    • MSMPlan. 2015. “The Flaws in Adrienne Shaw’s Paper on Gamergate and Conspiracy Theories.” Medium. March 18. https://medium.com/@MSMPlan/the-flaws-in-adrienne-shaw-s-paper-on-Gamergate-and-conspiracy-theories-7fc91df43bc.
    • Nakamura, Lisa. 2012. “Queer Female of Color: The Highest Difficulty Setting There Is? Gaming Rhetoric as Gender Capital.” Ada: A Journal of Gender, New Media & Technology 1, no. 1. http://adanewmedia.org/2012/11/issue1-nakamura/.
    • Negroponte, Nicholas. 1995. Being Digital. New York: Knopf.
    • Plunkett, Luke. 2014. “We Might Be Witnessing The ‘Death of An Identity.’” Kotaku, August 28. http://kotaku.com/we-might-be-witnessing-the-death-of-an-identity-1628203079.
    • Quinn, Zoe. 2015. “August Never Ends.” Quinnspiracy Blog. January 11. http://ohdeargodbees.tumblr.com/post/107838639074/august-never-ends.
    • Salter, Anastasia. 2016. “Code before Content? Brogrammer Culture in Games and Electronic Literature.” Paper presented at the Electronic Literature Organization Conference, University of Victoria, June 10.
    • Sargon of Akkad. 2014. A Conspiracy Within Gaming #Gamergate #NotYourShield. https://www.youtube.com/watch?v=yJyU7RSvs_s.
    • Sasaki, Betty. 2002. “Toward a Pedagogy of Coalition.” In Twenty-First-Century Feminist Classrooms: Pedagogies of Identity and Difference, edited by Amie A. Macdonald and Susan Sánchez-Casal, 31–57. New York, NY: Palgrave Macmillan.
    • Shield Project. 2014. #NotYourShield – We Are Gamers. https://www.youtube.com/watch?v=SYqBdCmDR0M#t=81.
    • Shulevitz, Judith. 2015. “In College and Hiding From Scary Ideas.” The New York Times, March 21. http://www.nytimes.com/2015/03/22/opinion/sunday/judith-shulevitz-hiding-from-scary-ideas.html.
    • “So I Decided to Email Jimbo…” 2015. Reddit. Accessed September 25. https://www.reddit.com/r/KotakuInAction/comments/2pphuo/so_i_decided_to_email_jimbo/cmyzva7?context=3.
    • Solorzano, Daniel, Miguel Ceja, and Tara Yosso. 2000. “Critical Race Theory, Racial Microaggressions, and Campus Racial Climate: The Experiences of African American College Students.” The Journal of Negro Education 69, no. 1/2: 60–73.
    • “The Birth of Vivian.” 2015. http://i.imgur.com/FdqKFwu.jpg (accessed September 27, 2015).
    • Wales, Jimmy. 2014. “I Have an Idea for pro #Gamergate Folks of Good Will. Go to http://Gamergate.wikia.com/Proposed_Wikipedia_Entry … and Write What You Think Is an Appropriate Article.” Microblog. @jimmy_wales. November 12. https://twitter.com/jimmy_wales/status/532624325694992385?ref_src=twsrc%5Etfw.
    • Wernimont, Jacqueline. 2015. “A ‘Conversation’ about Violence against Women Online (with Images, Tweets) · Jwernimo.” Storify. https://storify.com/jwernimo/a-conversation-about-violence-against-women-online (accessed September 23, 2015).
    • Yiannopoulos, Milo. 2014a. “Gamergate: Angry Feminists, Unethical Journalists Are the Ones Not Welcome in the Gaming Community.” Breitbart. September 14. http://www.breitbart.com/big-hollywood/2014/09/15/the-Gamergate-movement-is-making-terrific-progress-don-t-stop-now/.
    • ———. 2014b. “The Authoritarian Left Was on Course to Win the Culture Wars… Then Along Came #Gamergate.” Breitbart. November 12. http://www.breitbart.com/london/2014/11/12/the-authoritarian-left-was-on-course-to-win-the-culture-wars-then-along-came-Gamergate/.
  • Zachary Loeb – What Technology Do We Really Need? – A Critique of the 2016 Personal Democracy Forum

    Zachary Loeb – What Technology Do We Really Need? – A Critique of the 2016 Personal Democracy Forum

    by Zachary Loeb

    ~

    Technological optimism is a dish best served from a stage. Particularly if it’s a bright stage in front of a receptive and comfortably seated audience, especially if the person standing before the assembled group is delivering carefully rehearsed comments paired with compelling visuals, and most importantly if the stage is home to a revolving set of speakers who take turns outdoing each other in inspirational aplomb. At such an event, even occasional moments of mild pessimism – or a rogue speaker who uses their fifteen minutes to frown more than smile – serve only to heighten the overall buoyant tenor of the gathering. From TED talks to the launching of the latest gizmo by a major company, the person on a stage singing the praises of technology has become a familiar cultural motif. And that trope was alive and well at the 2016 Personal Democracy Forum, the theme of which was “The Tech We Need.”

    Over the course of two days some three-dozen speakers and a similar number of panelists gathered before a rapt and appreciative audience to opine on the ways in which technology is changing democracy. The commentary largely aligned with the sanguine spirit animating the founding manifesto of the Personal Democracy Forum (PDF) – which frames the Internet as a potent force set to dramatically remake and revitalize democratic society. As the manifesto boldly decrees, “the realization of ‘Personal Democracy,’ where everyone is a full participant, is coming” – and it is coming thanks to the Internet. The two days of PDF 2016 consisted of a steady flow of intelligent, highly renowned, well-meaning speakers expounding on the conference’s theme to an audience largely made up of bright, caring individuals committed to answering that call. To attend an event like PDF and not feel moved, uplifted, or inspired by the speakers would be a testament to an empathic failing. How can one not be moved? But when one’s eyes are glistening and one’s heart is pounding it is worth being wary of the ideology in which one is being baptized.

    To critique an event like the Personal Democracy Forum – particularly after having actually attended it – is something of a challenge. After all, the event is truly filled with genuine people delivering (mostly) inspiring talks. There is something contagious about optimism, especially when it presents itself as measured optimism. And besides, who wants to be the jerk grousing and grumbling after an activist has just earned a standing ovation? Who wants to cross their arms and scoff that the criticism being offered is precisely the type that serves to shore up the system being criticized? Pessimists don’t often find themselves invited to the after party. Thus, insofar as the following comments – and those that have already been made – may seem prickly and pessimistic, they are not meant as an attack upon any particular speaker or attendee. Many of those speakers truly were inspiring (and that is meant sincerely), many speakers really did deliver important comments (that is also meant sincerely), and the goal here is not to question the intentions of PDF’s founders or organizers. Yet prominent events like PDF are integral to shaping the societal discussions surrounding technology – and therefore it is essential to be willing to go beyond the inspirational moments and ask: what is really being said here?

    For events like PDF do serve to advance an ideology, whether they like it or not. And it is worth considering what that ideology means, even if it forces one to wipe the smile from one’s lips. And when it comes to PDF much of its ideology can be discovered simply by dissecting the theme for the 2016 conference: “The Tech We Need.”

    “The Tech”

    What do you (yes, you) think of when you hear the word technology? After all, it is a term that encompasses a great deal, which is one of the reasons why Leo Marx (1997) was compelled to describe technology as a “hazardous concept.” Eyeglasses are technology, but so too is Google Glass. A hammer is technology, and so too is a smart phone. In other words, when somebody says “technology is X” or “technology does Q” or “technology will result in R” it is worth pondering whether technology really is, does, or results in those things, or whether what is being discussed is really a particular type of technology in a particular context. Granted, technology remains a useful term – it is certainly a convenient shorthand (one which very many people [including me] are guilty of occasionally deploying) – but in throwing the term about so casually it is easy to obfuscate as much as one clarifies. At PDF it seemed as though a sentence was not complete unless it included a noun, a verb, and the word technology – or “tech.” Yet “tech” at PDF almost always meant the Internet or a device linked to the Internet – and qualifying this by saying “almost” is perhaps overly generous.

    Thus the Internet (as such), web browsers, smart phones, VR, social networks, server farms, encryption, other social networks, apps, and websites all wound up being pleasantly melted together into “technology.” When “technology” encompasses so much, a funny thing begins to happen – people speak effusively about “technology” and only name specific elements when they want to single something out for criticism. When technology is so all-encompassing, who can possibly criticize technology? And what would it mean to criticize technology when it isn’t clear what is actually meant by the term? Yes, yes, Facebook may be worthy of mockery and smart phones can be used for surveillance, but insofar as the discussion is not about the Internet but “technology,” on what grounds can one say: “this stuff is rubbish”? For even when it is clear that the term “technology” is being used in a way that focuses on the Internet, anyone who starts to seriously go after technology will inevitably be confronted with the question “but aren’t hammers also technology?” In short, when a group talks about “the tech” but by “the tech” only means the Internet and the variety of devices tethered to it, what happens is that the Internet appears as being synonymous with technology. It isn’t just a branch or an example of technology, it is technology! Or to put this in sharper relief: at a conference about “the tech we need” held in the US in 2016, how can one avoid talking about the technology that is needed in the form of water pipes that don’t poison people? The answer: by making it so that the term “technology” does not apply to such things.

    The problem is that when “technology” is used to mean only one set of things it muddles the boundaries of what those things are, and what exists outside of them. And while it does this it allows people to confidently place trust in a big category, “technology,” whereas they would probably have been more circumspect if they were just being asked to place trust in smart phones. After all, “the Internet will save us” doesn’t have quite the same seductive sway as “technology will save us” – even if the belief is usually put more eloquently than that. When somebody says “technology will save us” people can think of things like solar panels and vaccines – even if the only technology actually being discussed is the Internet. Here, though, it is also vital to approach the question of “the tech” with some historically grounded modesty in mind. For the belief that technology is changing the world and fundamentally altering democracy is nothing new. The history of technology (as an academic field) is filled with texts describing how a new tool was perceived as changing everything – from the compass to the telegraph to the phonograph to the locomotive to the [insert whatever piece of technology you (the reader) can think of]. And such inventions were often accompanied by an often earnest belief that they would change everything for the better! Claims that the Internet will save us evoke déjà vu in those familiar with the history of technology. Carolyn Marvin’s masterful study When Old Technologies Were New (1988) examines the way in which early electrical communications methods were seen at the time of their introduction, and near the book’s end she writes:

    Predictions that strife would cease in a world of plenty created by electrical technology were clichés breathed by the influential with conviction. For impatient experts, centuries of war and struggle testified to the failure of political efforts to solve human problems. The cycle of resentment that fueled political history could perhaps be halted only in a world of electrical abundance, where greed could not impede distributive justice. (206)

    Switch out the words “electrical technology” for “Internet technology” and the above sentences could apply to the present (and the PDF forum) without further alterations. After all, PDF was certainly a gathering of “the influential” and of “impatient experts.”

    And whenever “tech” and democracy are invoked in the same sentence it is worth pondering whether the tech is itself democratic, or whether it is simply being claimed that the tech can be used for democratic purposes. Lewis Mumford wrote at length about the difference between what he termed “democratic” and “authoritarian” technics – in his estimation “democratic” systems were small scale and manageable by individuals, whereas “authoritarian” technics represented massive systems of interlocking elements where no individual could truly assert control. While Mumford did not live to write about the Internet, his work makes it very clear that he did not consider computer technologies to belong to the “democratic” lineage. Thus, to follow from Mumford, the Internet appears as a wonderful example of an “authoritarian” technic (it is massive, environmentally destructive, turns users into cogs, runs on surveillance, cannot be controlled locally, etc…) – what PDF argues is that this authoritarian technology can be used democratically. There is an interesting argument there, and it is one with some merit. Yet such a discussion cannot even occur in the confusing morass one finds oneself in when “the tech” just means the Internet.

    Indeed, by meaning “the Internet” but saying “the tech” groups like PDF (consciously or not) pull a bait and switch whereby a genuine consideration of what “the tech we need” simply becomes a consideration of “the Internet we need.”

    “We”

    Attendees to the PDF conference received a conference booklet upon registration; it featured introductory remarks, a code of conduct, advertisements from sponsors, and a schedule. It also featured a fantastically jarring joke created through the wonders of, perhaps accidental, juxtaposition; however, to appreciate the joke one needed to open the booklet so as to be able to see the front and back cover simultaneously. Here is what that looked like:

    Personal Democracy Forum (2016)

    Get it?

    Hilarious.

    The cover says “The Tech We Need” emblazoned in blue over the faces of the conference speakers, and the back is an advertisement for Microsoft stating: “the future is what we make it.” One almost hopes that the layout was intentional. For who the heck is the “we” being discussed? Is it the same “we”? Are you included in that “we”? And this is a question that can be asked of each of those covers independently of the other: when PDF says “we” who is included and who is excluded? When Microsoft says “we” who is included and who is excluded? Of course, this gets muddled even more when you consider that Microsoft was the “presenting sponsor” for PDF and that many of the speakers at PDF have funding ties to Microsoft. The reason this is so darkly humorous is that there is certainly an argument to be made that “the tech we need” has no place for mega-corporations like Microsoft, while at the same time the booklet assures us that “the future is what we [Microsoft] make it.” In short: the future is what corporations like Microsoft will make it…which might be very different from the kind of tech we need.

    In considering the “we” of PDF it is worth restating that this is a gathering of well-meaning individuals who largely seem to want to approach the idea of “we” with as much inclusivity as possible. Yet defining a “we” is always fraught, speaking for a “we” is always dangerous, and insofar as one can think of PDF with any kind of “we” (or “us”) in mind the only version of the group that really emerges is one that leans heavily towards describing the group actually present at the event. And while one can certainly speak about the level (or lack) of diversity at the PDF event – the “we” who came together at PDF is not particularly representative of the world. This was also brought into interesting relief in some other amusing ways: throughout the event one heard numerous variations of the comment “we all have smart phones” – but this did not even really capture the “we” of PDF. While walking down the stairs to a session one day I clearly saw a man (wearing a conference attendee badge) fiddling with a flip-phone – I suppose he wasn’t included in the “we” of “we all have smart phones.” But I digress.

    One encountered further issues with the “we” when it came to the political content of the forum. While the booklet states, and the hosts repeated over and over, that the event was “non-partisan,” such a descriptor is pretty laughable. Those taking to the stage were a procession of people who had cut their teeth working for MoveOn, and the activists represented continually self-identified as hailing from the progressive end of the spectrum. The token conservative speaker who stepped onto the stage even made a self-deprecating joke in which she recognized that she was one of only a handful (if that) of Republicans present. So, again, who is missing from this “we”? One can be a committed leftist and genuinely believe that a figure like Donald Trump is a xenophobic demagogue – and still recognize that some of his supporters might have offered a very interesting perspective to the PDF conversation. After all, the Internet (“the tech”) has certainly been used by movements on the right as well – and used quite effectively at that. But this part of a national “we” was conspicuously absent from the forum, even if they are not nearly so absent from Twitter, Facebook, or the population of people owning smart phones. Again, it is in no way, shape, or form an endorsement of anything that Trump has said to point out that when a forum is held to discuss the Internet and democracy, it is worth having the people you disagree with present.

    Another question of the “we” that is worth wrestling with revolves around the way in which events like PDF involve those who offer critical viewpoints. If, as is being argued here, PDF’s basic ideology is that the Internet (“the tech”) is improving people’s lives and will continue to do so (leading towards “personal democracy”) – it is important to note that PDF welcomed several speakers who offered accounts of some of the shortcomings of the Internet. Figures including Sherry Turkle, Kentaro Toyama, Safiya Noble, Kate Crawford, danah boyd, and Douglas Rushkoff all took the stage to deliver some critical points of view – and yet in incorporating such voices into the “we” what occurs is that these critiques function less as genuine retorts and more as safety valves that just blow off a bit of steam. Having Sherry Turkle (not to pick on her) vocally doubt the empathetic potential of the Internet just allows the next speaker (and countless conference attendees) to say “well, I certainly don’t agree with Sherry Turkle.” Indeed, one of the best ways to inoculate yourself against the charge of unthinking optimism is to periodically turn the microphone over to a critic. But perhaps the most telling thing about such critics is the way they wind up qualifying their comments – thus Turkle says “I’m not anti-technology,” Toyama disparages Facebook only to immediately add “I love Facebook,” and fears regarding the threat posed by AI get laughed off as the paranoia of today’s “apex predators” (rich white men) being concerned that they will lose their spot at the top of the food chain.
The environmental costs of the cloud are raised, the biased nature of algorithms is exposed – but these points are couched against a backdrop that says to the assembled technologists “do better” not “the Internet is a corporately controlled surveillance mall, and it’s overrated.” The heresies that are permitted are those that point out the rough edges that need to be rounded so that the pill can be swallowed. To return to the previous paragraph, this is not to say that PDF needs to invite John Zerzan or Chellis Glendinning to speak…but one thing that would certainly expose the weaknesses of the PDF “we” is to solicit viewpoints that genuinely come from outside of that “we.” Granted, PDF is more TED talk than FRED talk.

    And of course, and most importantly, one must think of the “we” that goes totally unheard. Yes, comments were made about the environmental cost of the cloud and passing phrases acknowledged mining – but PDF’s “we” seems to refer mainly to those who use the Internet and Internet-connected devices. Miners, those assembling high-tech devices, e-waste recyclers, and the other victims of those processes are only a hazy phantom presence. They are mentioned in passing, but never included fully in the “we.” PDF’s “the tech we need” is for a “we” that loves the Internet and just wants it to be even better and perhaps a bit nicer, while Microsoft’s “we” in “the future is what we make it” is a “we” that is committed to staying profitable. But amidst such statements there is an even larger group saying: “we are not being included.” That unheard “we” is the same “we” from the classic IWW song “we have fed you all for a thousand years” (Green et al 2016). And as the second line of that song rings out: “and you hail us still unfed.”

    “Need”

    When one looks out upon the world it is almost impossible not to be struck by how much is needed. People need homes, people need not just to be tolerated but to be accepted, people need food, people need peace, people need stability, people need the ability to love without being subject to oppression, people need to be free from bigotry and xenophobia, people need…this list could continue with a litany of despair until we all don sackcloth. But do people need VR headsets? Do people need Facebook or Twitter? Do those in possession of still-functioning high-tech devices need to trade them in every eighteen months? Of course it is important to note that technology does have an important role in meeting people’s needs – after all, “shelter” refers to all sorts of technology. Yet when PDF talks about “the tech we need” the “need” is shaded by what is meant by “the tech,” and as was previously discussed that really means “the Internet.” Therefore it is fair to ask: do people really “need” an iPhone with a slightly larger screen? Do people really need Uber? Do people really need to be able to download five million songs in thirty seconds? While human history is a tale of horror, it requires a funny kind of simplistic hubris to think that World War II could have been prevented if only everybody had been connected on Facebook (to be fair, nobody at PDF was making this argument). Are today’s “needs” (and they are great) really a result of a lack of technology? It seems that we already have much of the tech that is required to meet today’s needs, and we don’t even require new ways to distribute it. Or, to put it clearly at the risk of being grotesque: people in your city are not currently going hungry because they lack the proper app.

    The question of “need” flows from both the notion of “the tech” and “we” – and as was previously mentioned it would be easy to put forth a compelling argument that “the tech we need” involves water pipes that don’t poison people with lead, but such an argument is not made when “the tech” means the Internet and when the “we” has already reached the top of Maslow’s hierarchy of needs. If one takes a more expansive view of “the tech” and “we,” then the range of what is needed changes accordingly. This issue – the way “tech,” “we,” and “need” intersect – is hardly a new concern. It is what prompted Ivan Illich (1973) to write, in Tools for Conviviality, that:

    People need new tools to work with rather than tools that ‘work’ for them. They need technology to make the most of the energy and imagination each has, rather than more well-programmed energy slaves. (10)

    Granted, it is certainly fair to retort “but who is the ‘we’ referred to by Illich” or “why can’t the Internet be the type of tool that Illich is writing about” – but here Illich’s response would be in line with the earlier referral to Mumford. Namely: accusations of technological determinism aside, maybe it’s fair to say that some technologies are oversold, and maybe the occasional emphasis on the way that the Internet helps activists serves as a patina that distracts from what is ultimately an environmentally destructive surveillance system. Is the person tethered to their smart phone being served by that device – or are they serving it? Or, to allow Illich to reply with his own words:

    As the power of machines increases, the role of persons more and more decreases to that of mere consumers. (11)

    Mindfulness apps, cameras on phones that can be used to film oppression, new ways of downloading music, programs for raising money online, platforms for connecting people on a political campaign – the user is empowered as a citizen, but this empowerment tends to involve needing the proper apps. And therefore that citizen needs the proper device to run that app, and a good wi-fi connection, and… the list goes on. Under the ideology captured in PDF’s “the tech we need,” to participate in democracy becomes bound up with “to consume the latest in Internet innovation.” Every need can be met, provided that it is the type of need that the Internet can meet. Thus the old canard “to the person with a hammer every problem looks like a nail” finds its modern equivalent in “to the person with a smart phone and a good wi-fi connection, every problem looks like one that can be solved by using the Internet.” But as for needs? Freedom from xenophobia and oppression are real needs – undoubtedly – but the Internet has done a great deal to disseminate xenophobia and prop up oppressive regimes. Continuing to double down on the Internet seems like doing the same thing “we” have been doing and expecting different results because finally there’s an “app for that!”

    It is, again, quite clear that those assembled at PDF came together with well-meaning attitudes, but as Simone Weil (2010) put it:

    Intentions, by themselves, are not of any great importance, save when their aim is directly evil, for to do evil the necessary means are always within easy reach. But good intentions only count when accompanied by the corresponding means for putting them into effect. (180)

    The ideology present at PDF emphasizes that the Internet is precisely “the means” for the realization of its attendees’ good intentions. And those who took to the stage spoke rousingly of using Facebook, Twitter, smart phones, and new apps for all manner of positive effects – but hanging in the background (sometimes more clearly than at other times) is the fact that these systems also track their users’ every move and can be used just as easily by those with very different ideas as to what “positive effects” look like. The issue of “need” is therefore ultimately a matter not simply of need but of “ends” – but in framing things in terms of “the tech we need” what is missed is the more difficult question of what “ends” we seek. Instead “the tech we need” subtly shifts the discussion towards one of “means.” But, as Jacques Ellul recognized, the emphasis on means – especially technological ones – can just serve to confuse the discussion of ends. As he wrote:

    It must always be stressed that our civilization is one of means…the means determine the ends, by assigning us ends that can be attained and eliminating those considered unrealistic because our means do not correspond to them. At the same time, the means corrupt the ends. We live at the opposite end of the formula that ‘the ends justify the means.’ We should understand that our enormous present means shape the ends we pursue. (Ellul 2004, 238)

    The Internet and the raft of devices and platforms associated with it are a set of “enormous present means” – and in celebrating these “means” the ends begin to vanish. It ceases to be a situation where the Internet is the mean to a particular end, and instead the Internet becomes the means by which one continues to use the Internet so as to correct the current problems with the Internet so that the Internet can finally achieve the… it is a snake eating its own tail.

    And its own tale.

    Conclusion: The New York Ideology

    In 1995, Richard Barbrook and Andy Cameron penned an influential article describing what they called “The Californian Ideology,” which they characterized as

    promiscuously combin[ing] the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies. This amalgamation of opposites has been achieved through a profound faith in the emancipatory potential of the new information technologies. In the digital utopia, everybody will be both hip and rich. (Barbrook and Cameron 2001, 364)

    As the placing of a state’s name in the title of the ideology suggests, Barbrook and Cameron were setting out to describe the viewpoint underpinning the firms that were (at that time) nascent in Silicon Valley. They sought to describe the mixture of hip futurism and libertarian politics that worked wonderfully in the boardroom, even if somebody in that boardroom was now wearing a Hawaiian print shirt – or perhaps jeans and a hoodie. As companies like Google and Facebook have grown, the “Californian Ideology” has been disseminated widely, and though such companies periodically issued proclamations about not being evil and claimed that connecting the world was their goal, they maintained their utopian confidence in the “independence of cyberspace” while directing a distasteful gaze towards the “dinosaurs” of representative democracy that would dare to question their zeal. And though it is a more recent player in the game, one is hard-pressed to find a better example than Uber of the fact that this ideology is alive and well.

    The Personal Democracy Forum is not advancing the Californian Ideology. And though the event may have featured a speaker who suggested that the assembled “we” think of the “founding fathers” as start-up founders – the forum continually returned to the questions of democracy. While the Personal Democracy Forum shares the “faith in the emancipatory potential of the new information technologies” with Silicon Valley startups it seems less “free-wheeling” and more skeptical of “entrepreneurial zeal.” In other words, whereas Barbrook and Cameron spoke of “The Californian Ideology,” what PDF makes clear is that there is also a “New York Ideology,” whose hallmark is an embrace of the positive potential of new information technologies tempered by the belief that such potential can best be reached by taming the excesses of unregulated capitalism. Where the Californian Ideology says “libertarian” the New York Ideology says “liberation.” Where the Californian Ideology celebrates capital the New York Ideology celebrates the power found in a high-tech enhanced capitol. The New York Ideology balances the excessive optimism of the Californian Ideology by acknowledging the existence of criticism, and proceeds to neutralize this criticism by making it part and parcel of the celebration of the Internet’s potential. The New York Ideology seeks to correct the hubris of the Californian Ideology by pointing out that it is precisely this hubris that turns many away from the faith in the “emancipatory potential.” If the Californian Ideology is broadcast from the stage at the newest product unveiling or celebratory conference, then the New York Ideology is disseminated from conferences like PDF and the occasional skeptical TED talk. The New York Ideology may be preferable to the Californian Ideology in a thousand ways – but ultimately it is the ideology that manifests itself in the “we” one encounters in the slogan “the tech we need.”

    Or, to put it simply, whereas the Californian Ideology is “wealth meaning,” the New York Ideology is “well-meaning.”

    Of course, it is odd and unfair to speak of either ideology as “Californian” or “New York.” California is filled with Californians who do not share in that ideology, and New York is filled with New Yorkers who do not share in that ideology either. Yet to dub what one encounters at PDF “The New York Ideology” is to indicate the way in which current discussions around the Internet are not solely being framed by “The Californian Ideology” but also by a parallel position wherein faith in Internet-enabled solutions puts aside its libertarian sneer to adopt a democratic smile. One could just as easily call the New York Ideology the “Tech On Stage Ideology” or the “Civic Tech Ideology” – perhaps it would be better to refer to the Californian Ideology as the SV Ideology (silicon valley) and the New York Ideology as the CV ideology (civic tech). But if the Californian Ideology refers to the tech campus in Silicon Valley, then the New York Ideology refers to the foundation based in New York – one that may very well be getting much of its funding from the corporations that call Silicon Valley home. While Uber sticks with the Californian Ideology, companies like Facebook have begun transitioning to the New York Ideology so that they can have their panoptic technology and their playgrounds too. Meanwhile, new tech companies emerging in New York (like Kickstarter and Etsy) make positive proclamations about ethics and democracy, even as they make it seem that ethics and democracy are just more consumption choices one picks from a list of downloadable apps.

    The Personal Democracy Forum is a fascinating event. It is filled with intelligent individuals who speak of democracy with unimpeachable sincerity, and activists who really have been able to use the Internet to advance their causes. But despite all of this, the ideological emphasis on “the tech we need” remains based upon a quizzical notion of “need,” a problematic concept of “we,” and a reductive definition of “tech.” For statements like “the tech we need” are not value neutral – and even if the surface ethics are moving and inspirational, sometimes a problematic ideology is most easily disseminated when it takes care to dispense with ideologues. And though the New York Ideology is much more subtle than the Californian Ideology – and makes space for some critical voices – it remains a vehicle for disseminating an optimistic faith that a technologically enhanced Moses shall lead us into the high-tech promised land.

    The 2016 Personal Democracy Forum put forth an inspirational and moving vision of “the tech we need.”

    But when it comes to promises of technological salvation, isn’t it about time that “we” stopped getting our hopes up?

    Coda

    I confess, I am hardly free of my own ideological biases. And I recognize that everything written here may simply be dismissed by those who find it hypocritical that I composed such remarks on a computer and then posted them online. But I would say that the more we find ourselves using technology the more careful we must be that we do not allow ourselves to be used by that technology.

    And thus, I shall simply conclude by once more citing a dead, but prescient, pessimist:

    I have no illusions that my arguments will convince anyone. (Ellul 1994, 248)

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently working towards a PhD in the History and Sociology of Science department at the University of Pennsylvania. His research areas include media refusal and resistance to technology, ideologies that develop in response to technological change, and the ways in which technology factors into ethical philosophy – particularly in regard to the way in which Jewish philosophers have written about ethics and technology. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, where an earlier version of this post first appeared, and is a frequent contributor to The b2 Review Digital Studies section.

    _____

    Works Cited

    • Barbrook, Richard and Andy Cameron. 2001. “The Californian Ideology.” In Peter Ludlow, ed., Crypto Anarchy, Cyberstates and Pirate Utopias. Cambridge: MIT Press. 363-387.
    • Ellul, Jacques. 2004. The Political Illusion. Eugene, OR: Wipf and Stock.
    • Ellul, Jacques. 1994. A Critique of the New Commonplaces. Eugene, OR: Wipf and Stock.
    • Green, Archie, David Roediger, Franklin Rosemont, and Salvatore Salerno. 2016. The Big Red Songbook: 250+ IWW Songs! Oakland, CA: PM Press.
    • Illich, Ivan. 1973. Tools for Conviviality. New York: Harper and Row.
    • Marvin, Carolyn. 1988. When Old Technologies Were New: Thinking About Electric Communication in the Late Nineteenth Century. New York: Oxford University Press.
    • Marx, Leo. 1997. “‘Technology’: The Emergence of a Hazardous Concept.” Social Research 64:3 (Fall). 965-988.
    • Mumford, Lewis. 1964. “Authoritarian and Democratic Technics.” Technology and Culture 5:1 (Winter). 1-8.
    • Weil, Simone. 2010. The Need for Roots. London: Routledge.
  • Bradley J. Fest – The Function of Videogame Criticism

    Bradley J. Fest – The Function of Videogame Criticism

    a review of Ian Bogost, How to Talk about Videogames (University of Minnesota Press, 2015)

    by Bradley J. Fest

    ~

    Over the past two decades or so, the study of videogames has emerged as a rigorous, exciting, and transforming field. During this time there have been a few notable trends in game studies (which is generally the name applied to the study of video and computer games). The first wave, beginning roughly in the mid-1990s, was characterized by wide-ranging debates between scholars and players about what they were actually studying, what aspects of videogames were most fundamental to the medium.[1] Like arguments about whether editing or mise-en-scène was more crucial to the meaning-making of film, the early, sometimes heated conversations in the field were primarily concerned with questions of form. Scholars debated between two perspectives known as narratology and ludology, and asked whether narrative or play was more theoretically important for understanding what makes videogames unique.[2] By the middle of the 2000s, however, this debate appeared to be settled (as perhaps ultimately unproductive and distracting—i.e., obviously both narrative and play are important). Over the past decade, a second wave of scholars has emerged who have moved on to more technical, theoretical concerns, on the one hand, and more social and political issues, on the other (frequently at the same time). Writers such as Patrick Crogan, Nick Dyer-Witheford, Alexander R. Galloway, Patrick Jagoda, Lisa Nakamura, Greig de Peuter, Adrienne Shaw, McKenzie Wark, and many, many others write about how issues such as control and empire, race and class, gender and sexuality, labor and gamification, networks and the national security state, action and procedure can pertain to videogames.[3] Indeed, from a wide sampling of contemporary writing about games, it appears that the old anxieties regarding the seriousness of its object have been put to rest. Of course games are important. 
They are becoming a dominant cultural medium; they make billions of dollars; they are important political allegories for life in the twenty-first century; they are transforming social space along with labor practices; and, after what many consider a renaissance in independent game development over the past decade, some of them are becoming quite good.

    Ian Bogost has been one of the most prominent voices in this second wave of game criticism. A media scholar, game designer, philosopher, historian, and professor of interactive computing at the Georgia Institute of Technology, Bogost has published a number of influential books. His first, Unit Operations: An Approach to Videogame Criticism (2006), places videogames within a broader theoretical framework of comparative media studies, emphasizing that games deserve to be approached on their own terms, not only because they are worthy of attention in and of themselves but also because of what they can show us about the ways other media operate. Bogost argues that “any medium—poetic, literary, cinematic, computational—can be read as a configurative system, an arrangement of discrete, interlocking units of expressive meaning. I call these general instances of procedural expression, unit operations” (2006, 9). His second book, Persuasive Games: The Expressive Power of Videogames (2007), extends his emphasis on the material, discrete processes of games, arguing that they can and do make arguments; that is, games are rhetorical, and they are rhetorical by virtue of what they and their operator can do, their procedures: games make arguments through “procedural rhetoric.”[4] The publication of Persuasive Games in particular—which he promoted with an appearance on The Colbert Report (2005–14)—saw Bogost emerge as a powerful voice in the broad cohort of second wave writers and scholars.

    But I feel that the publication of Bogost’s most recent book, How to Talk about Videogames (2015), might very well end up signaling the beginning of a third phase of videogame criticism. If the first task of game criticism was to formally define its object, and the second wave of game studies involved asking what games can and do say about the world, the third phase might see critics reflecting on their own processes and procedures, thinking, not necessarily about what videogames are and do, but about what videogame criticism is and does. How to Talk about Videogames is a book that frequently poses the (now quite old) question: what is the function of criticism at the present time? In an industry dominated by multinational media megaconglomerates, what should the role of (academic) game criticism be? What can a handful of researchers and scholars possibly do or say in the face of such a massive, implacable, profit-driven industry, where every announcement about future games further stokes its rabid fan base of slobbering, ravening hordes to spend hundreds of dollars and thousands of hours consuming a form known for its spectacular violence, ubiquitous misogyny, and myopic tribalism? What is the point of writing about games when the videogame industry appears to happily carry on as if nothing is being said at all, impervious to any conversation that people may be having about its products beyond what “fans” demand?

    To read the introduction and conclusion of Bogost’s most recent book, one might think that, suggestions about their viability aside, both the videogame industry and the critical writing surrounding it are in serious crisis, and the matter of the cultural status of the videogame has hardly been put to rest. As a scholar, critic, and designer who has been fairly consistent in positively exploring what digital games can do, what they can uniquely accomplish as a process-based medium, it is striking, at least to this reviewer, that Bogost begins by anxiously admitting,

    whenever I write criticism of videogames, someone strongly invested in games as a hobby always asks the question “is this parody?” as if only a miscreant or a comedian or a psychopath would bother to invest the time and deliberateness in even thinking, let alone writing about videogames with the seriousness that random, anonymous Internet users have already used to write about toasters, let alone deliberate intellectuals about film or literature! (Bogost 2015, xi–xii)

    Bogost calls this kind of attention to the status of his critical endeavor in a number of places in How to Talk about Videogames. The book shows him involved in that untimely activity of silently but implicitly assessing his body of work, reflectively approaching his critical task with cautious trepidation. In a variety of moments from the opening and closing of the book, games and criticism are put into serious question. Videogames are puerile, an “empty diversion” (182), and without value; “games are grotesque. . . . [they] are gross, revolting, heaps of arbitrary anguish” (1); “games are stupid” (9); “that there could be a game criticism [seems] unlikely and even preposterous” (181). In How to Talk about Videogames, Bogost, at least in some ways, is giving up his previous fight over whether or not videogames are serious aesthetic objects worthy of the same kind of hermeneutic attention given to more established art forms.[5] If games are predominantly treated as “perversion, excess” (183), a symptom of “permanent adolescence” (180), as unserious, wasteful, unproductive, violently sadistic entertainments—perhaps there is a reason. How to Talk about Videogames shows Bogost turning an intellectual corner toward a decidedly ironic sense of his role as a critic and the worthiness of his critical object.

    Compare Bogost’s current pessimism with the optimism of his previous volume, How to Do Things with Videogames (2011), to which How to Talk about Videogames functions as a kind of sequel or companion. In this earlier book, he is rather more affirmative about the future of the videogame industry (and, by proxy, videogame criticism):

    What if we allowed that videogames have many possible goals and purposes, each of which couples with many possible aesthetics and designs to create many possible player experiences, none of which bears any necessary relationship to the commercial videogame industry as we currently know it. The more games can do, the more the general public will become accepting of, and interested in, the medium in general. (Bogost 2011, 153)

    2011’s How to Do Things with Videogames aims to bring to the table things that previous popular and scholarly approaches to videogames had ignored in order to show all the other ways that videogames operate, what they are capable of beyond mere mimetic simulation or entertaining distraction, and how game criticism might allow their audiences to expand beyond the province of the “gamer” to mirror the diversified audiences of other media. Individual chapters are devoted to how videogames produce empathy and inspire reverence; they can be vehicles for electioneering and promotion; games can relax, titillate, and habituate; they can be work. Practicing what he calls “media microecology,” a critical method that “seeks to reveal the impact of a medium’s properties on society . . . through a more specialized, focused attention . . . digging deep into one dark, unexplored corner of a media ecosystem” (2011, 7), Bogost argues that game criticism should be attentive to more than simply narrative or play. The debates that dominated the early days of critical game studies, in this regard, only account for a rather limited view of what games can do. Appearing at a time when many were arguing that the medium was beginning to reach aesthetic maturity, Bogost’s 2011 book sounds a note of hope and promise for the future of game studies and the many unexplored possibilities for game design.

    How to Talk about Videogames

    I cannot really overstate, however, the ways in which How to Talk about Videogames, published four years later, shows Bogost reversing tack, questioning his entire enterprise.[6] Even with the appearance of such a serious, well-received game as Gone Home (2013)—to which he devotes a particularly scathing chapter about what the celebration of an ostensibly adolescent game tells us about contemporaneity—this is a book that repeatedly emphasizes the cultural ghetto in which videogames reside. Criticism devoted exclusively to this form risks being “subsistence criticism. . . . God save us from a future of game critics, gnawing on scraps like the zombies that fester in our objects of study” (188). Despite previous claims about videogames “[helping] us expose and interrogate the ways we engage the world in general, not just the ways that computational systems structure or limit that experience” (Bogost 2006, 40), How to Talk about Videogames is, at first glance, a book that raises the question of not only how videogames should be talked about, but whether they have anything to say in the first place.

    But it is difficult to gauge the seriousness of Bogost’s skepticism and reluctance given a book filled with twenty short essays of highly readable, informative, and often compelling criticism. (The disappointingly short essay, “The Blue Shell Is Everything That’s Wrong with America”—in which he writes: “This is the Blue Shell of collapse, the Blue Shell of financial hubris, the Blue Shell of the New Gilded Age” [26]—particularly stands out in the way that it reads an important if overlooked aspect of a popular game in terms of larger social issues.) For it is, really, somewhat unthinkable that someone who has written seven books on the subject would arrive at the conclusion that “videogames are a lot like toasters. . . . Like a toaster, a game is both appliance and hearth, both instrument and aesthetic, both gadget and fetish. It’s preposterous to do game criticism, like it’s preposterous to do toaster criticism” (ix and xii).[7] Bogost’s point here is rhetorical, erring on the side of hyperbole in order to emphasize how videogames are primarily process-based—that they work and function like toasters perhaps more than they affect and move like films or novels (a claim with which I imagine many would disagree), and that there is something preposterous in writing criticism about a process-based technology. A decade after emphasizing videogames’ procedurality in Unit Operations, this is a way for him to restate and reemphasize these important claims for the more popular audience intended for How to Talk about Videogames. Games involve actions, which make them different from other media that can be more passively absorbed. This is why videogames are often written about in reviews “full of technical details and thorough testing and final, definitive scores delivered on improbably precise numerical scales” (ix). Bogost is clear. He is not a reviewer. 
He is not assessing games’ ability to “satisfy our need for leisure [as] their only function.” He is a critic, and the critic’s activity, even if his object resembles a toaster, is different.

    But though it is apparent why games might require a different kind of criticism than other media, what remains unclear is what Bogost believes the role of the critic ought to be. He says, contradicting the conclusion of How to Do Things with Videogames, that “criticism is not conducted to improve the work or the medium, to win over those who otherwise would turn up their noses at it. . . . Rather, it is conducted to get to the bottom of something, to grasp its form, context, function, meaning, and capacities” (xii). This seems like somewhat of a mistake, and a mistake that ignores both the history of criticism and Bogost’s own practice as a critic. Yes, of course criticism should investigate its object, but even Matthew Arnold, who emphasized “disinterestedness . . . keeping aloof from . . . ‘the practical view of things,’” also understood that such an approach could establish “a current of fresh and true ideas” (Arnold 1993 [1864], 37 and 49). No matter how disinterested, criticism can change the ways that art and the world are conceived and thought about. Indeed, only a sentence later it is difficult to discern what precisely Bogost believes the function of videogame criticism to be if not for improving the work, the medium, the world, if not for establishing a current from which new ideas might emerge. He writes that criticism can “venture so far from ordinariness of a subject that the terrain underfoot gives way from manicured path to wilderness, so far that the words that we would spin tousle the hair of madness. And then, to preserve that wilderness and its madness, such that both the works and our reflections on them become imbricated with one another and carried forward into the future where others might find them anew” (xii; more on this in a moment). 
It is clear that Bogost understands the mode of the critic to be disinterested and objective, to answer the question “What is even going on here?” (x), but it remains unclear why such an activity would even be necessary or worthwhile, and indeed, there is enough in the book that points to criticism being a futile, unnecessary, parodic, parasitic, preposterous endeavor with no real purpose or outcome. In other words, he may say how to talk about videogames, but not why anyone would ever really want to do so.

    I have at least partially convinced myself that Bogost’s claims about videogames being more like toasters than other art forms, along with the statements above regarding the disreputable nature of videogames, are meant as rhetorical provocations, ironic salvos to inspire from others more interesting, rigorous, thoughtful, and complex critical writing, both of the popular and academic stripe. I also understand that, as he did in Unit Operations, Bogost balks at the idea of a critical practice wholly devoted to videogames alone: “the era of fields and disciplines ha[s] ended. The era of critical communities ha[s] ended. And the very idea of game criticism risks Balkanizing games writing from other writing, severing it from the rivers and fields that would sustain it” (187). But even given such an understanding, it is unclear who precisely is suggesting that videogame criticism should be a hermetically sealed niche cut off from the rest of the critical tradition. It is also unclear why videogame criticism is so preposterous, why writing it—even if a critic’s task is limited to getting “to the bottom of something”—is so divorced from the current of other works of cultural criticism. And finally, given what are, at the end of the day, some very good short essays on games that deserve a thoughtful readership, it is unclear why Bogost has framed his activity in such a negatively self-aware fashion.

    So, rather than pursue a discussion about the relative merits and faults of Bogost’s critical self-reflexivity, I think it worth asking what changed between his 2011 and 2015 books, what took him from being a cheerleader—albeit a reticent, tempered, and disinterested one—to questioning the very value of videogame criticism itself. Why does he change from thinking about the various possibilities for doing things with videogames to thinking that “entering a games retail outlet is a lot like entering a sex shop or a liquor store . . . game shops are still vaguely unseemly” (182)?[8] I suspect that such events as 2014’s Gamergate—when independent game designer Zoe Quinn, critic Anita Sarkeesian, and others were threatened and harassed for their feminist views—the generally execrable level of discourse found on internet comments pages, and the questionable cultural identity of the “gamer,” probably account for some of Bogost’s malaise.[9] Indeed, most of the essays found in How to Talk about Videogames initially appeared online, largely in The Atlantic (where he is an editor) and Gamasutra, and, I have to imagine, suffered for it in their comments sections. With this change in audience and platform, it seems to follow that the opening and closing of How to Talk about Videogames reflect a general exhaustion with the level of discourse from fans, companies, and internet trolls. How can criticism possibly thrive or have an impact in a community that so frequently demonstrates its intolerance and rage toward other modes of thinking and being that might upset its worldview and sense of cultural identity? How does one talk to those who will not listen?

    And if these questions perhaps sound particularly apt today—that the “gamer” might bear an awfully striking resemblance to other headline-grabbing individuals and groups dominating the public discussion in the months after the publication of Bogost’s book, namely Donald J. Trump and his supporters—they should. I agree with Bogost that it can be difficult to see the value of criticism at a time when many United States citizens appear, at least on the surface, to be actively choosing to be uncritical. (As Philip Mirowski argues, the promotion of “ignorance [is] the lynchpin in the neoliberal project” [2013, 96].) Given such a discursive landscape, what is the purpose of writing, even in Bogost’s admirably clear (yet at times maddeningly spare) prose, if no amount of stylistic precision or rhetorical complexity—let alone a mastery of basic facts—can influence one’s audience? How to Talk about Videogames is framed as a response to the anti-intellectual atmosphere of the middle of the second decade of the twenty-first century, and it is an understandably despairing one. As such, it is not surprising that Bogost concludes that criticism has no role to play in improving the medium (or perhaps the world) beyond mere phenomenological encounter and description given the social fabric of life in the 2010s. In a time of vocally racist demagoguery, an era witnessing a rising tide of reactionary nationalism in the US and around the world, a period during which it often seems like no words of any kind can have any rhetorical effect at all—procedurally or otherwise—perhaps the best response is to be quiet. But I also think that this is to misunderstand the function of critical thought, regardless of what its object might be.

    To be sure, videogame creators have probably not yet produced a Citizen Kane (1941), and videogame criticism has not yet produced a work like Erich Auerbach’s Mimesis (1946). I am unconvinced, however, that such future accomplishments remain out of reach, that videogames are barred from profound aesthetic expression, and that writing about games precludes the heights attained by previous criticism simply because of some ill-defined aspect of the medium which prevents it from ever aspiring to anything beyond mere craft. Is a study of the Metal Gear series (1987–2015) similar to Roland Barthes’s S/Z (1970) really all that preposterous? Is Mario forever denied his own Samuel Johnson simply because he is composed of code rather than words? For if anything is unclear about Bogost’s book, it is what precisely prohibits videogames from having the effects and impacts of other art forms, why they are restricted to the realm of toasters, incapable of anything beyond adolescent poiesis. Indeed, Bogost’s informative and incisive discussion about Ms. Pac-Man (1981), his thought-provoking interpretation of Mountain (2014), or the many moments of accomplished criticism in his previous books—for example, his masterful discussion of the “figure of fascination” in Unit Operations—betray such claims.[10]

    Matthew Arnold once famously suggested that creativity and criticism were intimately linked, and I believe it might be worthwhile to remember this for the future of videogame criticism:

    It is the business of the critical power . . . “in all branches of knowledge, theology, philosophy, history, art, science, to see the object as in itself it really is.” Thus it tends, at last, to make an intellectual situation of which the creative power can profitably avail itself. It tends to establish an order of ideas, if not absolutely true, yet true by comparison with that which it displaces; to make the best ideas prevail. Presently these new ideas reach society, the touch of truth is the touch of life, and there is a stir and growth everywhere; out of this stir and growth come the creative epochs of literature. (Arnold 1993 [1864], 29)

    In other words, criticism has a vital role to play in the development of an art form, especially if an art form is experiencing contraction or stagnation. Whatever disagreements I might have with Arnold, I too believe that criticism and creativity are indissolubly linked, and further, that criticism has the power to shape and transform the world. Bogost says that “being a critic is not an enjoyable job . . . criticism is not pleasurable” (x). But I suspect that there may still be many who share Arnold’s view of criticism as a creative activity, and maybe the problem is not that videogame criticism is akin to preposterous toaster criticism, but that the function of videogame criticism at the present time is to expand its own sense of what it is doing, of what it is capable, of how and why it is written. When Bogost says he wants “words that . . . would . . . tousle the hair of madness,” why not write in such a fashion (Bogost’s controlled style rarely approaches madness), expanding criticism beyond mere phenomenological summary at best or zombified parasitism at worst? Consider, for instance, Jonathan Arac: “Criticism is literary writing that begins from previous literary writing. . . . There need not be a literary avant-garde for criticism to flourish; in some cases criticism itself plays a leading cultural role” (1989, 7). If we are to take seriously Bogost’s point about how the overwhelmingly positive reaction to Gone Home reveals the aesthetic and political impoverishment of the medium, then it is disappointing to see someone so well-positioned to take a leading cultural role in shaping the conversation about how videogames might change or transform surrendering the field.

    Forget analogies. What if videogame criticism were to begin not from comparing games to toasters but from previous writing, from the history of criticism, from literature and theory, from theories of art and architecture and music, from rhetoric and communication, from poetry? For, given the complex mediations present in even the simplest games—i.e., games not only involve play and narrative, but raise concerns about mimesis, music, sound, spatiality, sociality, procedurality, interface effects, et cetera—it increasingly makes less and less sense to divorce or sequester games from other forms of cultural study or to think that videogames are so unique that game studies requires its own critical modality. If Bogost implores game critics not to limit themselves to a strictly bound, niche field uninformed by other spheres of social and cultural inquiry, if game studies is to go forward into a metacritical third wave where it can become interested in what makes videogames different from other forms and self-reflexively aware of the variety of established and interconnecting modes of cultural criticism from which the field can only benefit, then thinking about the function of criticism historically should guide how and why games are written about at the present time.

    Before concluding, I should also note that something else perhaps changed between 2011 and 2015, namely, Bogost’s alignment with the philosophical movements of speculative realism and object-oriented ontology. In 2012, he published Alien Phenomenology, or What It’s Like to Be a Thing, a book that picks up some of the more theoretical aspects of Unit Operations and draws upon the work of Graham Harman and other anti-correlationists to pursue a flat ontology, arguing that the job of the philosopher “is to amplify the black noise of objects to make the resonant frequencies of the stuffs inside them hum in credibly satisfying ways. Our job is to write the speculative fictions of their processes, their unit operations” (Bogost 2012, 34). Rather than continue pursuing an anthropocentric, correlationist philosophy that can only think about objects in relation to human consciousness, Bogost claims that “the answer to correlationism is not the rejection of any correlate but the acknowledgment of endless ones, all self-absorbed, obsessed by givenness rather than by turpitude” (78). He suggests that philosophy should extend the possibility of phenomenological encounter to all objects, to all units, in his parlance; let phenomenology be alien and weird; let toasters encounter tables, refrigerators, books, climate change, Pittsburgh, Higgs boson particles, the 2016 Electronic Entertainment Expo, bagels, et cetera.[11]

    Though this is not the venue to pursue a broader discussion of Bogost’s philosophical writing, I mention his speculative turn because it seems important for understanding his changing attitudes about criticism. That is, as Graham Harman’s 2012 essay, “The Well-Wrought Broken Hammer,” negatively demonstrates, it is unclear what a flat ontology has to say, if anything, about art, what such a philosophy can bring to critical, hermeneutic activity.[12] Indeed, regardless of where one stands with regard to object-oriented ontology and other speculative realisms, what these philosophies might offer to critics seems to be one of the more vexing and polarizing intellectual questions of our time. Hermeneutics may very well prove inescapably “correlationist,” and, indeed, no matter how disinterested, historical. It is an open question whether or not one can ground a coherent and worthwhile critical practice upon a flat ontology. I am tempted to suspect not. I also suspect that the current trends in continental philosophy, at the end of the day, may not be really interested in criticism as such, and perhaps that is not really such a big deal. Criticism, theory, and philosophy are not synonymous activities nor must they be. (The question about criticism vis-à-vis alien phenomenology also appears to have motivated the Object Lessons series that Bogost edits.) This is all to say, rather than ground videogame criticism in what may very well turn out to be an intellectual fad whose possibilities for writing worthwhile criticism remain somewhat dubious, perhaps there may be more ripe currents and streams—namely, the history of criticism—that can inform how we write about videogames. Criticism may be steered by keeping in view many polestars; let us not be overly swayed by what, for now, burns brightest. For an area of humanistic inquiry that is still very much emerging, it seems a mistake to assume it can and should be nothing more than toaster criticism.

    In this review I have purposefully made few claims about the state of videogames. This is partly because I do not feel that any more work needs to be done to justify writing about the medium. It is also partly because I feel that any broad statement about the form would be an overgeneralization at this point. There are too many games being made in too many places by too many different people for any all-encompassing statement about the state of videogame art to be all that coherent. (In this, I think Bogost’s sense of the need for a media microecology of videogames is still apropos.) But I will say that the state of videogame criticism—and, strangely enough, particularly the academic kind—is one of the few places where humanistic inquiry seems, at least to me, to be growing and expanding rather than contracting or ossifying. Such a generally positive and optimistic statement about a field of the humanities may not adhere to present conceptions about academic activity (indeed, it might even be unfashionable!), which seem to more generally despair about the humanities, and rightfully so. Admitting that some modes of criticism might be, at least in some ways, exhausted, would be an important caveat, especially given how the past few years have seen a considerable amount of reflection about contemporary modes of academic criticism—e.g., Rita Felski’s The Limits of Critique (2015) or Eric Hayot’s “Academic Writing, I Love You. Really, I Do” (2014). But I think that, given how the anti-intellectual miasma that has long been present in US life has intensified in recent years, creeping into seemingly every discourse, one of the really useful functions of videogame criticism may very well be its potential ability to allow reflection on the function of criticism itself in the twenty-first century. 
If one of the most prominent videogame critics is calling his activity “preposterous” and his object “adolescent,” this should be a cause for alarm, for such claims cannot help but perpetuate present views about the worthlessness of the humanities. So, I would like to modestly suggest that, rather than look to toasters and widgets to inform how we talk about videogames, let us look to critics and what they have written. Edward W. Said once wrote: “for in its essence the intellectual life—and I speak here mainly about the social sciences and the humanities—is about the freedom to be critical: criticism is intellectual life and, while the academic precinct contains a great deal in it, its spirit is intellectual and critical, and neither reverential nor patriotic” (1994, 11). If one can approach videogames—of all things!—in such a spirit, perhaps other spheres of human activity can rediscover their critical spirit as well.

    _____

    Bradley J. Fest will begin teaching writing this fall at Carnegie Mellon University. His work has appeared or is forthcoming in boundary 2 (interviews here and here), Critical Quarterly, Critique, David Foster Wallace and “The Long Thing” (Bloomsbury, 2014), First Person Scholar, The Silence of Fallout (Cambridge Scholars, 2013), Studies in the Novel, and Wide Screen. He is also the author of a volume of poetry, The Rocking Chair (Blue Sketch, 2015), and a chapbook, “The Shape of Things,” which was selected as a finalist for the 2015 Tomaž Šalamun Prize and is forthcoming in Verse. Recent poems have appeared in Empty Mirror, PELT, PLINTH, TXTOBJX, and Small Po(r)tions. He previously reviewed Alexander R. Galloway’s The Interface Effect for The b2 Review “Digital Studies.”

    Back to the essay
    _____

    NOTES

    [1] On some of the first wave controversies, see Aarseth (2001).

    [2] For a representative sample of essays and books in the narratology versus ludology debate from the early days of academic videogame criticism, see Murray (1997 and 2004), Aarseth (1997, 2003, and 2004), Juul (2001), and Frasca (2003).

    [3] For representative texts, see Crogan (2011), Dyer-Witheford and de Peuter (2009), Galloway (2006a and 2006b), Jagoda (2013 and 2016), Nakamura (2009), Shaw (2014), and Wark (2007). My claims about the vitality of the field of game studies are largely a result of having read these and other critics. There have also been a handful of interesting “videogame memoirs” published recently. See Bissell (2010) and Clune (2015).

    [4] Bogost defines procedurality as follows: “Procedural representation takes a different form than written or spoken representation. Procedural representation explains processes with other processes. . . . [It] is a form of symbolic expression that uses process rather than language” (2007, 9). For my own discussion of proceduralism, particularly with regard to The Stanley Parable (2013) and postmodern metafiction, see Fest (forthcoming 2016).

    [5] For instance, in the concluding chapter of Unit Operations, Bogost writes powerfully and convincingly about the need for a comparative videogame criticism in conversation with other forms of cultural criticism, arguing that “a structural change in our thinking must take place for videogames to thrive, both commercially and culturally” (2006, 179). It appears that the lack of any structural change in the nonetheless wildly thriving—at least financially—videogame industry has given Bogost serious pause.

    [6] Indeed, at one point he even questions the justification for the book in the first place: “The truth is, a book like this one is doomed to relatively modest sales and an even more modest readership, despite the generous support of the university press that publishes it and despite the fact that I am fortunate enough to have a greater reach than the average game critic” (Bogost 2015, 185). It is unclear why the limited reach of his writing might be so worrisome to Bogost given that, historically, the audience for, say, poetry criticism has never been all that large.

    [7] In addition to those previously mentioned, Bogost has also published Racing the Beam: The Atari Video Computer System (2009) and, with Simon Ferrari and Bobby Schweizer, Newsgames: Journalism at Play (2010). Also forthcoming is Play Anything: The Pleasure of Limits, the Uses of Boredom, and the Secret of Games (2016).

    [8] This is, to be sure, a somewhat confusing point. Are not record stores, book stores, and video stores (if such things still exist), along with tea shops, shoe stores, and clothing stores “retail establishment[s] devoted to a singular practice” (Bogost 2015, 182–83)? Are all such establishments unseemly because of the same logic? What makes a game store any different?

    [9] For a brief overview of Gamergate, see Wingfield (2014). For a more detailed discussion of both the cultural and technological underpinnings of Gamergate, with a particular emphasis on the relationship between the algorithmic governance of sites such as Reddit or 4chan and online misogyny and harassment, see Massanari’s (2015) important essay. For links to a number of other articles and essays on gaming and feminism, see Ligman (2014) and The New Inquiry (2014). For essays about contemporary “gamer” culture, see Williams (2014) and Frase (2014). On gamers, Bogost writes in a chapter titled “The End of Gamers” from his previous book: “as videogames broaden in appeal, being a ‘gamer’ will actually become less common, if being a gamer means consuming games as one’s primary media diet or identifying with videogames as a primary part of one’s identity” (2011, 154).

    [10] See Bogost (2006, 73–89). Also, to be fair, Bogost devotes a paragraph of the introduction of How to Talk about Videogames to the considerable affective properties of videogames, but concludes the paragraph by saying that games are “Wagnerian Gesamtkunstwerk-flavored chewing gum” (Bogost 2015, ix), which, I feel, considerably undercuts whatever aesthetic value he had just ascribed to them.

    [11] In Alien Phenomenology Bogost calls such lists “Latour litanies” (2012, 38) and discusses this stylistic aspect of object-oriented ontology at some length in the chapter, “Ontography” (35–59).

    [12] See Harman (2012). Bogost addresses such concerns in the conclusion of Alien Phenomenology, responding to criticism about his study of the Atari 2600: “The platform studies project is an example of alien phenomenology. Yet our efforts to draw attention to hardware and software objects have been met with myriad accusations of human erasure: technological determinism most frequently, but many other fears and outrages about ‘ignoring’ or ‘conflating’ or ‘reducing,’ or otherwise doing violence to ‘the cultural aspects’ of things. This is a myth” (2012, 132).

    Back to the essay

    WORKS CITED

    • Aarseth, Espen. 1997. Cybertext: Perspectives on Ergodic Literature. Baltimore: Johns Hopkins University Press.
    • ———. 2001. “Computer Game Studies, Year One.” Game Studies 1, no. 1. http://gamestudies.org/0101/editorial.html.
    • ———. 2003. “Playing Research: Methodological Approaches to Game Analysis.” Game Approaches: Papers from spilforskning.dk Conference, August 28–29. http://hypertext.rmit.edu.au/dac/papers/Aarseth.pdf.
    • ———. 2004. “Genre Trouble: Narrativism and the Art of Simulation.” In First Person: New Media as Story, Performance, and Game, edited by Noah Wardrip-Fruin and Pat Harrigan, 45–55. Cambridge, MA: MIT Press.
    • Arac, Jonathan. 1989. Critical Genealogies: Historical Situations for Postmodern Literary Studies. New York: Columbia University Press.
    • Arnold, Matthew. 1993 (1864). “The Function of Criticism at the Present Time.” In Culture and Anarchy and Other Writings, edited by Stefan Collini, 26–51. New York: Cambridge University Press.
    • Bissell, Tom. 2010. Extra Lives: Why Video Games Matter. New York: Pantheon.
    • Bogost, Ian. 2006. Unit Operations: An Approach to Videogame Criticism. Cambridge, MA: MIT Press.
    • ———. 2007. Persuasive Games: The Expressive Power of Videogames. Cambridge, MA: MIT Press.
    • ———. 2009. Racing the Beam: The Atari Video Computer System. Cambridge, MA: MIT Press.
    • ———. 2011. How to Do Things with Videogames. Minneapolis: University of Minnesota Press.
    • ———. 2012. Alien Phenomenology, or What It’s Like to Be a Thing. Minneapolis: University of Minnesota Press.
    • ———. 2015. How to Talk about Videogames. Minneapolis: University of Minnesota Press.
    • ———. Forthcoming 2016. Play Anything: The Pleasure of Limits, the Uses of Boredom, and the Secret of Games. New York: Basic Books.
    • Bogost, Ian, Simon Ferrari, and Bobby Schweizer. 2010. Newsgames: Journalism at Play. Cambridge, MA: MIT Press.
    • Clune, Michael W. 2015. Gamelife: A Memoir. New York: Farrar, Straus and Giroux.
    • Crogan, Patrick. 2011. Gameplay Mode: War, Simulation, and Technoculture. Minneapolis: University of Minnesota Press.
    • Dyer-Witheford, Nick, and Greig de Peuter. 2009. Games of Empire: Global Capitalism and Video Games. Minneapolis: University of Minnesota Press.
    • Felski, Rita. 2015. The Limits of Critique. Chicago: University of Chicago Press.
    • Fest, Bradley J. Forthcoming 2016. “Metaproceduralism: The Stanley Parable and the Legacies of Postmodern Metafiction.” “Videogame Adaptation,” edited by Kevin M. Flanagan, special issue, Wide Screen.
    • Frasca, Gonzalo. 2003. “Simulation versus Narrative: Introduction to Ludology.” In The Video Game Theory Reader, edited by Mark J. P. Wolf and Bernard Perron, 221–36. New York: Routledge.
    • Frase, Peter. 2014. “Gamer’s Revanche.” Peter Frase (blog), September 3. http://www.peterfrase.com/2014/09/gamers-revanche/.
    • Galloway, Alexander R. 2006a. “Warcraft and Utopia.” Ctheory.net, February 16. http://www.ctheory.net/articles.aspx?id=507.
    • ———. 2006b. Gaming: Essays on Algorithmic Culture. Minneapolis: University of Minnesota Press.
    • Harman, Graham. 2012. “The Well-Wrought Broken Hammer: Object-Oriented Literary Criticism.” New Literary History 43, no. 2: 183–203.
    • Hayot, Eric. 2014. “Academic Writing, I Love You. Really, I Do.” Critical Inquiry 41, no. 1: 53–77.
    • Jagoda, Patrick. 2013. “Gamification and Other Forms of Play.” boundary 2 40, no. 2: 113–44.
    • ———. 2016. Network Aesthetics. Chicago: University of Chicago Press.
    • Juul, Jesper. 2001. “Games Telling Stories? A Brief Note on Games and Narratives.” Game Studies 1, no. 1. http://www.gamestudies.org/0101/juul-gts/.
    • Ligman, Kris. 2014. “August 31st.” Critical Distance, August 31. http://www.critical-distance.com/2014/08/31/august-31st/.
    • Massanari, Adrienne. 2015. “#Gamergate and The Fappening: How Reddit’s Algorithm, Governance, and Culture Support Toxic Technocultures.” New Media & Society, OnlineFirst, October 9.
    • Mirowski, Philip. 2013. Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown. New York: Verso.
    • Murray, Janet. 1997. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. Cambridge, MA: MIT Press.
    • ———. 2004. “From Game-Story to Cyberdrama.” In First Person: New Media as Story, Performance, and Game, edited by Noah Wardrip-Fruin and Pat Harrigan, 1–11. Cambridge, MA: MIT Press.
    • Nakamura, Lisa. 2009. “Don’t Hate the Player, Hate the Game: The Racialization of Labor in World of Warcraft.” Critical Studies in Media Communication 26, no. 2: 128–44.
    • The New Inquiry. 2014. “TNI Syllabus: Gaming and Feminism.” New Inquiry, September 2. http://thenewinquiry.com/features/tni-syllabus-gaming-and-feminism/.
    • Said, Edward W. 1994. “Identity, Authority, and Freedom: The Potentate and the Traveler.” boundary 2 21, no. 3: 1–18.
    • Shaw, Adrienne. 2014. Gaming at the Edge: Sexuality and Gender at the Margins of Gamer Culture. Minneapolis: University of Minnesota Press.
    • Wark, McKenzie. 2007. Gamer Theory. Cambridge, MA: Harvard University Press.
    • Williams, Ian. 2014. “Death to the Gamer.” Jacobin, September 9. https://www.jacobinmag.com/2014/09/death-to-the-gamer/.
    • Wingfield, Nick. 2014. “Feminist Critics of Video Games Facing Threats in ‘GamerGate’ Campaign.” New York Times, October 15. http://www.nytimes.com/2014/10/16/technology/gamergate-women-video-game-threats-anita-sarkeesian.html.

    Back to the essay

  • Audrey Watters – Public Education Is Not Responsible for Tech’s Diversity Problem


    By Audrey Watters

    ~

    On July 14, Facebook released its latest “diversity report,” claiming that it has “shown progress” in hiring a more diverse staff. Roughly 90% of its US employees are white or Asian; 83% of those in technical positions at the company are men. (That’s about a 1% improvement from last year’s stats.) Black people still make up just 2% of the workforce at Facebook, and 1% of the technical staff. Those are the same percentages as 2015, when Facebook boasted that it had hired 7 Black people. “Progress.”

    In this year’s report, Facebook blamed the public education system for its inability to hire more people of color. I mean, whose fault could it be?! Surely not Facebook’s! To address its diversity problems, Facebook said it would give $15 million to Code.org in order to expand CS education, news that was dutifully reported by the ed-tech press without any skepticism about Facebook’s claims about its hiring practices or about the availability of diverse tech talent.

    The “pipeline” problem, writes Dare Obasanjo, is a “big lie.” “The reality is that tech companies shape the ethnic make up of their employees based on what schools & cities they choose to hire from and where they locate engineering offices.” There is diverse technical talent, ready to be hired; the tech sector, blinded by white, male privilege, does not recognize it, does not see it. See the hashtag #FBNoExcuses, which features more smart POC in tech than work at Facebook and Twitter combined, I bet.

    Facebook’s decision to “blame schools” is pretty familiar schtick by now, I suppose, but it’s still fairly noteworthy coming from a company whose founder and CEO is increasingly active in ed-tech investing. More broadly, Silicon Valley continues to try to shape the future of education – mostly by defining that future as an “engineering” or “platform” problem and then selling schools and parents and students a product in return. As the tech industry utterly fails to address diversity within its own ranks, what can we expect from its vision for ed-tech?!

    My fear: ed-tech will ignore inequalities. Ed-tech will expand inequalities. Ed-tech will, as Edsurge demonstrated this week, simply co-opt the words of people of color in order to continue to sell its products to schools. (José Vilson has more to say about this particular appropriation in this week’s #educolor newsletter.)

    And/or: ed-tech will, as I argued this week in the keynote I delivered at the Digital Pedagogy Institute in PEI, confuse consumption with “innovation.” “Gotta catch ’em all” may be the perfect slogan for consumer capitalism; but it’s hardly a mantra I’m comfortable chanting to push for education transformation. You cannot buy your way to progress.

    All of the “Pokémon GO will revolutionize education” claims have made me incredibly angry, even though it’s a claim that’s made about every single new product that ed-tech’s early adopters find exciting (and clickbait-worthy). I realize there are many folks who seem to find a great deal of enjoyment in the mobile game. Hoorah. But there are some significant issues with the game’s security, privacy, its Terms of Service, its business model, and its crowd-sourced data model – a data model that reflects the demographics of those who played an early version of the game and one that means that there are far fewer “pokestops” in Black neighborhoods. All this matters for Pokémon GO; all this matters for ed-tech.


    Pokémon GO is just the latest example of digital redlining, re-inscribing racist material policies and practices into new, digital spaces. So when ed-tech leaders suggest that we shouldn’t criticize Pokémon GO, I despair. I really do. Who is served by being silent!? Who is served by enforced enthusiasm? How does ed-tech, which has its own problems with diversity, serve to re-inscribe racist policies and practices because its loudest proponents have little interest in examining their own privileges, unless, as José points out, it gets them clicks?

    Sigh.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

    Back to the essay

  • Gavin Mueller – Civil Disobedience in the Age of Cyberwar


    a review of Molly Sauter, The Coming Swarm: DDoS Actions, Hacktivism, and Civil Disobedience on the Internet (Bloomsbury Academic, 2014)

    by Gavin Mueller

    ~

    Molly Sauter’s The Coming Swarm begins in an odd way. Ethan Zuckerman, director of MIT’s Center for Civic Media, confesses in the book’s foreword that he disagrees with the book’s central argument: that distributed denial of service (DDoS) actions, where specific websites and/or internet servers are overwhelmed by traffic and knocked offline via the coordinated activity of many computers acting together, should be viewed as a legitimate means of protest.[1] “My research demonstrated that these attacks, once mounted by online extortionists as a form of digital protection racket, were increasingly being mounted by governments as a way of silencing critics,” Zuckerman writes (xii). Sauter’s argument, which takes the form of this slim and knotty book, ultimately does not convince Zuckerman, though he admits he is “a better scholar and a better person” for having engaged with the arguments contained within. “We value civic arguments, whether they unfold in the halls of government, a protest encampment, or the comments thread of an internet post because we believe in the power of deliberation” (xv). This promise of the liberal public sphere is what Sauter grapples with throughout the work, to varying levels of success.

    The Coming Swarm is not a book about DDoS activities in general. As Sauter notes, “DDoS is a popular tactic of extortion, harassment, and silencing” (6): its most common uses come from criminal organizations and government cyberwar operations. Sauter is not interested in these kinds of actions, which encompass the vast majority of DDoS uses. (DDoS itself is a subset of all denial of service or DoS attacks.) Instead they focus on self-consciously political DDoS attacks, first carried out by artist-hacker groups in the 1990s (the electrohippies and the Electronic Disturbance Theater) and more recent actions by the group Anonymous.[2] All told, these are a handful of actions, barely numbering in the double digits, and spread out over two decades. The focus on this small minority of cases can make the book’s argument seem question-begging, since Sauter does not make clear how and why it is legitimate to analyze exclusively those few instances of a widespread phenomenon that happen to conform to an author’s desired outlook. At one level, this is a general problem throughout the book, since Sauter’s analysis is confined to what they call “activist DDoS,” yet the actual meaning of this term is rarely interrogated: viewed from the perspective of the actors, many of the DDoS actions Sauter dismisses by stipulation could also be–and likely are–viewed as “activism.”

    From its earliest inception, political DDoS actions were likened to “virtual sit-ins”: activists use their computers’ ability to ping a server to clog up its functioning, potentially slowing or bringing its activity to a stand-still. This situated the technique within a history of nonviolent civil disobedience, particularly that of the Civil Rights Movement. This metaphor has tended to overdetermine the debate over the use of DDoS in activist contexts, and Sauter is keen to move on from the connection: “such comparisons on the part of the media and the public serve to only stifle innovation within social movements and political action, while at the same time cultivating a deep and unproductive nostalgia for a kind of ‘ideal activism’ that never existed” (22-3). Sauter argues that not only does this leave out contributions to the Civil Rights Movement that the mainstream finds less than respectable; it helps rule out the use of disruptive and destructive forms of activism in future movements.

    This argument has merit, and many activists who want to move beyond nonviolent civil disobedience into direct action forms of political action appear to agree with it. Yet Sauter still wants to claim the label of civil disobedience for DDoS actions that they at other moments discard: “activist DDoS actions are not meaningfully different from other actions within the history of civil disobedience… novelty cannot properly exempt activist DDoS from being classified as a tactic of civil disobedience” (27). However, the main criticisms of DDoS as civil disobedience have nothing to do with its novelty. As Evgeny Morozov points out in his defense of DDoS as a political tactic, “I’d argue, however, that the DDoS attacks launched by Anonymous were not acts of civil disobedience because they failed one crucial test implicit in Rawls’s account: Most attackers were not willing to accept the legal consequences of their actions.” Novelist and digital celebrity Cory Doctorow, who opposes DDoS-based activism, echoes this concern: “A sit-in derives its efficacy not from merely blocking the door to some objectionable place, but from the public willingness to stand before your neighbours and risk arrest and bodily harm in service of a moral cause, which is itself a force for moral suasion.” The complaint is not that DDoS fails to live up to the standards of the Civil Rights Movement, or that it is too novel. It is that it often fails the basic test of civil disobedience: potentially subjecting oneself to punishment as a form of protest that lays bare the workings of the state.

    Zuckerman’s principal critique of Sauter’s arguments is that DDoS, by shutting down sites, censors speech opposed by activists rather than promoting their dissenting messages. Sauter has a two-pronged response to this. First, they say that DDoS attacks make the important point that the internet is not really a public space. Instead, it is controlled by private interests, with large corporations managing the vast majority of online space. This means that no arguments may rest, implicitly or explicitly, on the assumption that the internet is a Habermasian public sphere. Second, Sauter argues, by their own admission counterintuitively, that DDoS, properly contextualized as part of “communicative capitalism,” is itself a form of speech.

    Communicative capitalism is a term developed by Jodi Dean as part of her critique of the Habermasian vision of the internet as a public sphere. With the commodification of online speech, “the exchange value of messages overtakes their use value” (58). The communication of messages is overwhelmed by the priority to circulate content of any kind: “communicative exchanges, rather than being fundamental to democratic politics, are the basic elements of capitalist production” (56). For Dean, this logic undermines political effects from internet communication: “The proliferation, distribution, acceleration and intensification of communicative access and opportunity, far from enhancing democratic governance or resistance, results in precisely the opposite – the post-political formation of communicative capitalism” (53). If, Sauter argues, circulation itself becomes the object of communication, the power of DDoS is to disrupt that circulation of content. “In that context the interruption of that signal becomes an equally powerful contribution…. Under communicative capitalism, it is possible that it is the intentional creation of disruptions and silence that is the most powerful contribution” (29).

    However, this move is contrary to the point of Dean’s concept; Dean specifically rejects the idea that any kind of communicative activity puts forth real political antagonism. Dean’s argument is, admittedly, an overreach. While capital cares little for the specificity of messages, human beings do: as Marx notes, exchange value cannot exist without a use value. Sauter’s own “counterintuitive” use of Dean points to a larger difficulty with Sauter’s argument: it remains wedded to a liberal understanding of political action grounded in the idea of a public sphere. Even when Sauter moves on to discussing DDoS as disruptive direct action, rather than civil disobedience, they return to the discursive tropes of the public sphere: DDoS is “an attempt to assert a fundamental view of the internet as a ‘public forum’ in the face of its attempted designation as ‘private property’” (45). Direct action is evaluated by its contribution to “public debate,” and Sauter even argues that DDoS actions during the 1999 Seattle WTO protests did not infringe on the “rights” of delegates to attend the event because they were totally ineffective. This overlooks the undemocratic and illiberal character of the WTO itself, whose meetings were held behind closed doors (one of the major rhetorical points of the protest), and it implies that the varieties of direct action that successfully blockaded meetings could be morally compromised. These kinds of actions, bereft of an easy classification as forms of speech or communication, are the forms of antagonistic political action Dean argues cannot be found in online space.

    In this light, it is worth returning to some of the earlier theorizations of DDoS actions. The earliest DDoS activists, the electrohippies and Electronic Disturbance Theater, documented the philosophies behind their work, and Rita Raley’s remarkable book Tactical Media presented a bracing theoretical synthesis of DDoS as an emergent critical art-activist practice. EDT’s most famous action deployed its FloodNet DDoS tool in pro-Zapatista protests. Its novel design incorporated something akin to speech acts: for example, it pinged servers belonging to the Mexican government with requests for “human rights,” leading to a return message “human rights not found on this server,” a kind of technopolitical pun. Yet Raley rejects a theorization of online political interventions strictly in terms of their communicative value. Rather, they are a curious hybrid of artistic experiment and militant interrogation, a Deleuzian event where one endeavors “to act without knowing the situation into which one will be propelled, to change things as they exist” (26).

    The goal of EDT’s actions was not simply to have a message be heard, or even to garner media attention: as EDT’s umbrella organization the Critical Art Ensemble puts it in Electronic Civil Disobedience, “The indirect approach of media manipulation using a spectacle of disobedience designed to muster public sympathy and support is a losing proposition” (15). Instead, EDT took on the prerogatives of conceptual art — to use creative practice to pose questions and provoke response — in order to probe the contours of the emerging digital terrain and determine who would control it and how. That their experiments quickly raised the specter of terrorism, even in a pre-9/11 context, seemed to answer this. As Raley describes, drawing from RAND cyberwar researchers, DDoS and related tactics “shift the Internet ‘from the public sphere model and casts it more as conflicted territory bordering on a war zone.’” (44).

    While Sauter repeatedly criticizes treating DDoS actions as criminal, rather than political, acts, the EDT saw its work as both, and even analogous to terrorism. “Not that the activists are initiating terrorist practice, since no one dies in hyperreality, but the effect of this practice can have the same consequence as terrorism, in that state and corporate power vectors will haphazardly return fire with weapons that have destructive material (and even mortal) consequences” (25). Indeed, civil disobedience is premised on exploiting the ambiguities of activities that can be considered both crime and politics. Rather than attempt to fix distinctions after the fact, EDT recognized the power of such actions precisely in collapsing these distinctions. EDT did criticize the overcriminalization of online activity, as does Sauter, whose analysis of the use of the Computer Fraud and Abuse Act to prosecute DDoS activities is some of the book’s strongest and most useful material.

    Sauter prefers the activities of Anonymous to the earlier actions by the electrohippies and EDT (although EDT co-founder Ricardo Dominguez has been up to his old tricks: he was investigated by the FBI and threatened with revocation of tenure for a “virtual sit-in” against the University of California system during the student occupations of 2010). This is because Anonymous’ actions, with their unpretentious lulzy ardor and open-source tools, “lower the barriers to entry” to activism (104): in other words, they leverage the internet’s capacity to increase participation. For Sauter, the value in Anonymous’ use of its DDoS tool, the Low Orbit Ion Cannon, against targets such as the MPAA and PayPal “lay in the media attention and new participants it attracted, who sympathized with Anonymous’ views and could participate in future actions” (115). The benefit of collaborative open-source development is similar, as is the tool’s feature that allows a user to contribute their computer to a “voluntary botnet” called the “FUCKING HIVE MIND” which “allows for the temporary sharing of an activist identity, which subsequently becomes more easily adopted by those participants who opt to remain involved” (130). This tip of the hat to theorists of participatory media once again reveals the notion of a democratic public sphere as a regulative ideal for the text.

    The price of all this participation is that a “lower level of commitment was required” (129) from activists, which is oddly put forth as a benefit. In fact, Sauter criticizes FloodNet’s instructions — “send your own message to the error log of the institution/symbol of Mexican Neo-Liberalism of your choice” — as relying upon “specialized language that creates a gulf between those who already understand it and those who do not” (112). Not only is it unclear to me what the specialized language in this case is (“neoliberalism” is a widely used, albeit not universally understood, term), but it seems paramount that individuals opting to engage in risky political action should understand the causes for which they are putting themselves on the line. Expanding political participation is a laudable goal, but not at the expense of losing the content of politics. Furthermore, good activism requires training: several novice Anons were caught and prosecuted for participating in DDoS actions due to insufficient operational security measures.

    What would it mean to take seriously the idea that the internet is not, in fact, a public sphere, and that, furthermore, the liberal notion of discursive and communicative activities impacting the decisions of rational individuals does not, in fact, adequately describe contemporary politics? Sauter ends up in a compelling place, one akin to the earlier theorists of DDoS: war. After all, states are one of the major participants in DDoS, and Sauter documents how Britain’s Government Communications Headquarters (GCHQ) used Denial of Service attacks, even though deemed illegal, against Anonymous itself. The involvement of state actors “could portend the establishment of a semipermanent state of cyberwar” with activists rebranded as criminals and even terrorists. This is consonant with Raley’s analysis of EDT’s own forays into online space. It also recalls the radical political work of ultraleft formations such as Tiqqun (I had anticipated that The Coming Swarm was a reference to The Coming Insurrection though this does not seem to be the case), for whom war, specifically civil war, becomes the governing metaphor for antagonistic political practice under Empire.

    This would mean that the future of DDoS actions and other disruptive online activism would lie not in their mobilization of speech, but in their building of capacities and organization of larger politicized formations. This could potentially be an opportunity to consider the varieties of DDoS so often bracketed away, which often rely on botnets and operate in undeniably criminal ways. Current hacker formations use these practices in political ways (Ghost Squad has recently targeted the U.S. military, cable news stations, the KKK, and Black Lives Matter, among others, with DDoS, accompanying each action with political manifestos). While Sauter claims, no doubt correctly, that these activities are “damaging to [DDoS’s] perceived legitimacy as an activist tactic” (160), they also note that measures to circumvent DDoS “continue to outstrip the capabilities of nearly all activist campaigns” (159). If DDoS has a future as a political tactic, it may be in the zones beyond what liberal political theory can touch.

    Notes

    [1] Instances of DDoS are typically referred to both in the popular press and by hacktivists as “attacks.” Sauter prefers the term “actions,” a usage I follow here.

    [2] I follow Sauter’s preferred usage of the pronouns “they” and “them.”

    Works Cited

    • Critical Art Ensemble. 1996. Electronic Civil Disobedience. Brooklyn: Autonomedia.
    • Dean, Jodi. 2005. “Communicative Capitalism: Circulation and the Foreclosure of Politics.” Cultural Politics 1.1: 51-74.
    • Raley, Rita. 2009. Tactical Media. Minneapolis: University of Minnesota Press.
    • Sauter, Molly. 2014. The Coming Swarm: DDoS Actions, Hacktivism, and Civil Disobedience on the Internet. New York: Bloomsbury Academic.

    _____

    Gavin Mueller (@gavinsaywhat) holds a Ph.D. in Cultural Studies from George Mason University. He is currently a Visiting Assistant Professor of Emerging Media and Communication at the University of Texas-Dallas. He previously reviewed Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous for The b2 Review.

    Back to the essay

  • Zachary Loeb – Mars is Still Very Far Away

    a review of McKenzie Wark, Molecular Red (Verso, 2015)

    by Zachary Loeb

    ~

    There are some games where a single player wins, games where a group of players wins, and then there are games where all of the players can share equally in defeat. Yet regardless of the way winners and losers are apportioned, there is something disconcerting about a game where the rules change significantly when one is within sight of victory. Suddenly the strategy that had previously assured success now promises defeat, and the confused players are forced to reconsider all of the seemingly right decisions that have now brought them to an impending loss. It may be a trifle silly to talk of winners and losers in the Anthropocene, with climate change as its bleak herald, but the epoch in which humans have become a geological force is one in which the strategies that propelled certain societies toward victory no longer seem like such wise tactics. With victory seeming less and less certain, it is easy to assume that defeat is inevitable.

    “Let’s not despair” is the retort McKenzie Wark offers on the first page of Molecular Red: Theory for the Anthropocene. The book approaches the Anthropocene as both a challenge and an opportunity, not for seeing who can pen the grimmest apocalyptic dirge but for developing new forms of critical theory. Prevailing responses to the Anthropocene – ranging from faith in new technology, to confidence in the market, to hopes for accountability, to despairing of technology – all strike Wark as insufficient; what he deems necessary are theories (which will hopefully lead to solutions) that recognize the ways in which these responses are entangled with each other. For Wark the coming crumbling of the American system was foreshadowed by the collapse of the Soviet system – and thus Molecular Red looks back at Soviet history to consider what other routes could have been taken there, before he switches his focus back to the United States to search for today’s alternate routes. Molecular Red reads aspects of Soviet history through the lens of “what if?” in order to consider contemporary questions from the perspective of “what now?” As he writes: “[t]here is no other world, but it can’t be this one” (xxi).

    Molecular Red is an engaging and interesting read that introduces its readers to a raft of under-read thinkers – and its counsel against despair is worth heeding. And yet, by the book’s end, it is easy to come away with the sense that while “there is no other world,” it will, alas, almost certainly be exactly this one.

    Before Wark introduces individual writers and theorists he first unveils the main character of his book: “the Carbon Liberation Front” (xiv). In Wark’s estimation the Carbon Liberation Front (CLF from this point forward) represents the truly victorious liberation movement of the past centuries. And what this liberation movement has accomplished is the freeing of – as the name suggests – carbon, an element which has been burnt up by humans in pursuit of energy, with the result being an atmosphere filled with heat-trapping carbon dioxide. “The Anthropocene runs on carbon” (xv), and seeing as the scientists who coined the term “Anthropocene” used it to mark the period wherein glacial ice cores began to show a concentration of greenhouse gases such as CO2 and CH4, the CLF appears as a force one cannot ignore.

    Turning to Soviet history, Wark works to rescue Lenin’s rival Alexander Bogdanov from being relegated to a place as a mere footnote. Yet, Wark’s purpose is not to simply emphasize that Lenin and Bogdanov had different ideas regarding what the Bolsheviks should have done; what is of significance in Bogdanov is not questions of tactics but matters of theory. In particular Wark highlights Bogdanov’s ideas of “proletkult” and “tektology” while also drawing upon Bogdanov’s view of nature – he conceived of this “elusive category” as “simply that which labor encounters” (4, italics in original text). Bogdanov’s tektology was to be “a new way of organizing knowledge” while proletkult was to be “a new practice of culture” – as Wark explains, “Bogdanov is not really trying to write philosophy so much as to hack it, to repurpose it for something other than the making of more philosophy” (13). Tektology was an attempt to bring together the lived experience of the proletariat along with philosophy and science – to create an active materialism “based on the social production of human existence” (18), and this production sees Nature as the realm within which laboring takes place. Or, as Wark eloquently puts it, tektology “is a way of organizing knowledge for difficult times…and perhaps also for the strange times likely to come in the twenty-first century” (40). Proletkult (which was an actual movement for some time) sought “to change labor, by merging art and work; to change everyday life…and to change affect” (35) – its goal was not to create proletarian culture but to provide a proletarian “point of view.” Deeply knowledgeable about science, himself a sort of science-fiction author (he wrote a quasi-utopian novel set on Mars called Red Star), and hopeful that technological advances would make workers more like engineers and artists, Bogdanov strikes Wark as “not the present writing about the future, but the past writing to the future” (59).
Wark suggests that “perhaps Bogdanov is the point to which to return” (59); hence Wark’s touting of tektology, proletkult, and Bogdanov’s view of nature.

    While Wark makes it clear that Bogdanov’s ideas did have some impact in Soviet Russia, their effect was far less than what it could have been – and thus Bogdanov’s ideas remain an interesting case of “what if?” Yet, in the figure of Andrey Platonov, Wark finds an example of an individual whose writings reached towards proletkult. Wark sees Platonov as “the great writer of our planet of slums” (68). The fiction written by Platonov, his “(anti)novellas” as Wark calls them, are largely the tales of committed and well-meaning communists whose efforts come to naught. For Platonov’s characters failure is a constant companion; they struggle against nature in the name of utopianism and find that they simply must keep struggling. In Platonov’s work one finds a continual questioning, from below, of communism’s authoritarian turn; his “Marxism is an ascetic one, based on the experience of sub-proletarian everyday life” (104). And while Platonov’s tales are short on happy endings, Wark detects hope amidst the powerlessness, as long as life goes on, for “if one can keep living then everything is still possible” (80). Such is the type of anti-cynicism that makes Platonov’s Marxism worth considering – it finds the glimmer of utopia on the horizon even if it never seems to draw closer.

    From the cold of the Soviet winter, Wark moves to the birthplace of the Californian Ideology – an ideology which Wark suggests has won the day: “it has no outside, and it is accelerating” (118). Yet, as with the case of Soviet communism, Wark is interested in looking for the fissures within the ideology, and instead of opining on Barbrook and Cameron’s term he moves through Ernst Mach and Paul Feyerabend en route to a consideration of Donna Haraway. Wark emphasizes how Haraway’s Marxism “insists on including nonhuman actors” (136) – her techno-science functions as a way of further breaking down the barrier that had been constructed between humans and nature. Shattering this divider is necessary to consider the ways that life itself has become caught up with capital in the age of patented life forms like OncoMouse. Amidst these entanglements Haraway’s “Cyborg Manifesto” appears to have lost none of its power – Wark sees that “cyborgs are monsters, or rather demonstrations, in the double sense of to show and to warn, of possible worlds” (146). Such a show of possibilities is to present alternatives even when “There’s no mother nature, no father science, no way back (or forward) to integrity” (150). Returning to Bogdanov, Wark writes that “Tektology is all about constructing temporary shelter in the world” (150) – and the cyborg identity is simultaneously what constructs such shelter and seeks haven within it. Beyond Haraway, Wark considers the work of Karen Barad and Paul Edwards, in order to further illustrate that “we are at one and the same time a product of techno-science and yet inclined to think ourselves separate from it” (165). Haraway, and the web of thinkers with which Wark connects her, appear as a way to reconnect with “something like the classical Marxist and Bogdanovite open-mindedness toward the sciences” (179).

    After science, Wark transitions to discussing the science fiction of Kim Stanley Robinson – in particular his Mars trilogy. Robinson’s tale of the scientist/technicians colonizing Mars and their attempts to create a better world on the one they are settling is a demonstration of how “the struggle for utopia is both technical and political, and so much else besides” (191). The value of the Mars trilogy, with its tale of revolutions, both successful and unsuccessful, and its portrayal of a transformed Earth, is in the slow unfolding of revolutionary change. In Red Mars (the first book of the trilogy, published in 1992) there is not a glorious revolution that instantly changes everything, but rather “the accumulation of minor, even molecular, elements of a new way of life and their negotiations with each other” (194). At work in the ruminations of the main characters of Red Mars, Wark detects something reminiscent of tektology even as the books themselves seem like a sort of proletkult for the Anthropocene.

    Molecular Red’s tour of oft-overlooked or unduly neglected thinkers is an argument for a reengagement with Marxism, but a reengagement that willfully and carefully looks for the paths not taken. The argument is not that Lenin needs to be re-read, but that Bogdanov needs to be read. Wark does not downplay the dangers of the Anthropocene, but he refuses to wallow in dismay or pine for a pastoral past that was a fantasy in the first place. For Wark, we are closely entwined with our technology, and the idea that it should all be turned off is a nonstarter. Molecular Red is not a trudge through the swamps of negativity; rather, it is a call: “Let’s use the time and information and everyday life still available to us to begin the task, quietly but in good cheer, of thinking otherwise, of working and experimenting” (221).

    Wark does not conclude Molecular Red by reminding his readers that they have nothing to lose but their chains. Rather he reminds them that they still have a world to win.  

    Molecular Red begins with an admonishment not to despair, and ends with a similar plea not to lose hope. Granted, in order to find this hope one needs to be willing to consider that the causes for hopelessness may themselves be rooted in looking for hope in the wrong places. Wark argues that by embracing techno-science, reveling in our cyborg selves, and creating new cultural forms to help us re-imagine our present and future, the left can make itself relevant once more. As a call for the left to embrace technology and look forward, Molecular Red occupies the same cultural shelf-space as recent books like Inventing the Future and Austerity Ecology and the Collapse-Porn Addicts. Which is to say that those who think that what is needed is “a frank acknowledgment of the entangling of our cyborg bodies within the technical” (xxi), those who think that the left needs to embrace technology with greater gusto, will find Molecular Red’s argument quite appealing. As for those who disagree – they will likely not find their minds changed by Molecular Red.

    Throughout Molecular Red, Wark displays a talent for discussing dense theoretical material in a readable and enjoyable way. Regardless of what one ultimately thinks of Wark’s argument, one of the major strengths of Molecular Red is the way it introduces readers to overlooked theorists. After reading Wark’s chapters on Bogdanov and Platonov the reader certainly understands why Wark finds their work so engrossing and inspiring. Similarly, Wark makes a compelling case for the continued importance of Haraway’s cyborg concept, and his treatment of Kim Stanley Robinson’s Mars trilogy is an apt demonstration of incorporating science fiction into works of theory. Amidst all of the grim books out there about the Anthropocene, Molecular Red is refreshing in its optimism. This is “Theory for the Anthropocene,” as the book’s subtitle puts it, but it is positive theory.

    Granted, some of Wark’s linguistic flourishes become less entertaining over time – “the carbon liberation front” is an amusing concept at first, but by the end of Molecular Red the term is as likely to elicit an eye-roll as introspection. A great deal of carbon has certainly been liberated, but has this been the result of a concerted effort (a “liberation front”) or of humans not fully thinking through the consequences of technology? Certainly there are companies that have made fortunes through “liberating” carbon, but who is ultimately responsible for “the carbon liberation front”? One might be willing to treat terms like “liberation front” with less scrutiny were they not being used in a book so invested in revitalizing leftist theory. Does not a “liberation front” imply a movement with an ideology? It seems that the liberation of carbon is more an accident of capitalist ideology than the driver of that ideology itself. It may seem silly to focus upon the uneasy feeling that accompanies the term “carbon liberation front,” but this is an example of a common problem with Molecular Red – the more one thinks about some of the premises, the less satisfying Wark’s arguments become.

    Given Wark’s commitment to reconfiguring Marxism for the Anthropocene it is unsurprising that he should choose to devote much of his attention to labor. This is especially fitting given the emphasis that Bogdanov and Platonov place on labor. Wark clearly finds much to approve of in Bogdanov’s idea that “all workers would become more like engineers, and also more like artists” (28). These are largely the type of workers one encounters in Robinson’s work and who are, generally, the heroes of Platonov’s tales; they make up a sort of “proto-hacker class” (90). It is an interesting move from the Soviet laborer to the technician/artist/hacker of Robinson – and it is not surprising that the author of A Hacker Manifesto (2004) should view hackers in such a romantic light. Yet Molecular Red is not a love letter to hackers, which makes it all the more interesting that labor in the Anthropocene is not given broader consideration. Bogdanov might have hoped that automation would make workers more like engineers and artists – but is there not still plenty of laboring going on in the Anthropocene? There is a heck of a lot of labor that goes into making the high-tech devices enjoyed by technicians, hackers, and artists – though it may be a type of labor that is more convenient to ignore, as it troubles the idea that workers are all metamorphosing into technician/artist/hackers. Given Platonov’s interest in the workers who seemed abandoned by the utopian promises they had been told, it is a shame that Molecular Red does not pay greater attention to the forgotten workers of the Anthropocene. After all, contemporary miners of minerals for high-tech doodads, device assemblers, e-waste recyclers, and the impoverished citizens of areas already suffering the burdens of climate change have more in common with the forgotten proletarians of Platonov than with the utopian scientists of Robinson’s Red Mars.

    One way to read Molecular Red is as a plea to the left not to give up on techno-science. Though it seems worth wondering to what extent the left has actually done anything like this. Some on the left may be less willing to conclude that the Internet is the solution to every problem (“some” does not imply “the majority”), but agitating for green technologies and alternative energies seems a pretty clear demonstration that, far from giving up on technology, many on the left still approach it with great hope. Wark is arguing for “something like the classical Marxist and Bogdanovite open-mindedness toward the sciences…rather than the Heidegger-inflected critique of Marcuse and others” (179). Yet in looking at contemporary discussions around techno-science and the left, it does not seem that the “Heidegger-inflected critique of Marcuse and others” is particularly dominant. There may be a few theorists here and there still working to advance a rigorous critique of technology – but as the recent issues on technology from The Nation and Jacobin both show, the left is not currently in thrall to a Marcusean bogey-man. This is a shame, in a way, for Molecular Red could have benefited from engaging with some of the critics of Marxism’s techno-utopian streak. Indeed, is the problem the lack of “open-mindedness toward the sciences,” or that being open-minded has failed thus far to do much to stall the Anthropocene? Or is it that, perhaps, the left simply needs to prepare itself for being open-minded about geo-engineering? Wark describes the Anthropocene as being a sort of metabolic rift and cautions that “to reject techno-science altogether is to reject the means of knowing about metabolic rift” (180). Yet this seems to be something of a straw-man argument – how many critics are genuinely arguing that people should “reject techno-science”? Perhaps John Zerzan has a much wider readership than I knew.

    Molecular Red cautions its readers against despair, but the text has a significant darkness about it. Wark writes “we are cyborgs, making a cyborg planet with cyborg weather, a crazed, unstable disingression, whose information and energy systems are out of joint” (180) – but the knowledge that “we are cyborgs” does little to help the worker who has lost her job without suddenly becoming an engineer/artist, “a cyborg planet” does nothing to heal the sicknesses of those living near e-waste dumps, and calling it “cyborg weather” does little to help those who are already struggling to cope with the impacts of climate change. We may be cyborgs, but that doesn’t mean the Anthropocene will go easy on us. After all, the scientists in the Mars trilogy may work on transforming that planet into a utopia, but while they are at it things do not exactly go well back on Earth. When Wark writes that “here among the ruins, something living yet remains” (xxii) he is echoing the ideology behind every anarcho-punk record cover that shows a better life being built on the ruins of the present world. But another feature of those album covers, and of the allusion to “among the ruins,” is that whatever “living yet remains” stands as a testament to all of the dying that has also transpired.

    McKenzie Wark has written an interesting and challenging book in Molecular Red and it is certainly a book with which it is worth engaging. Regardless of whether or not one is ultimately convinced by Wark’s argument, his final point will certainly resonate with those concerned about the present but hopeful for the future.

    After all, we still have a world to win.
    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck and is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay

  • Ending the World as We Know It: Alexander R. Galloway in Conversation with Andrew Culp

    by Alexander R. Galloway and Andrew Culp
    ~

    Alexander R. Galloway: You have a new book called Dark Deleuze (University of Minnesota Press, 2016). I particularly like the expression “canon of joy” that guides your investigation. Can you explain what canon of joy means and why it makes sense to use it when talking about Deleuze?

    Andrew Culp, Dark Deleuze (University of Minnesota Press, 2016)

    Andrew Culp: My opening is cribbed from a letter Gilles Deleuze wrote to philosopher and literary critic Arnaud Villani in the early 1980s. Deleuze suggests that any worthwhile book must have three things: a polemic against an error, a recovery of something forgotten, and an innovation. Proceeding along those three lines, I first argue against those who worship Deleuze as the patron saint of affirmation, second I rehabilitate the negative that already saturates his work, and third I propose something he himself was not capable of proposing, a “hatred for this world.” So in an odd twist of Marx on history, I begin with those who hold up Deleuze as an eternal optimist, yet not to stand on their shoulders but to topple the church of affirmation.

    The canon portion of “canon of joy” is not unimportant. Perhaps more than any other recent thinker, Deleuze queered philosophy’s line of succession. A large portion of his books were commentaries on outcast thinkers that he brought back from exile. Deleuze was unwilling to discard Nietzsche as a fascist, Bergson as a spiritualist, or Spinoza as a rationalist. Apparently this led to lots of teasing by fellow agrégation students at the Sorbonne in the late ’40s. Further showing his strange journey through the history of philosophy, his only published monograph for nearly a decade was an anti-transcendental reading of Hume at a time in France when phenomenology reigned. Such an itinerant path made it easy to take Deleuze at his word as a self-professed practitioner of “minor philosophy.” Yet look at Deleuze’s outcasts now! His initiation into the pantheon even bought admission for relatively forgotten figures such as sociologist Gabriel Tarde. Deleuze’s popularity thus raises a thorny question for us today: how do we continue the minor Deleuzian line when Deleuze has become a “major thinker”? For me, the first step is to separate Deleuze (and Guattari) from his commentators.

    I see two popular joyous interpretations of Deleuze in the canon: unreconstructed Deleuzians committed to liberating flows, and realists committed to belief in this world. The first position repeats the language of molecular revolution, becoming, schizos, transversality, and the like. Some even use the terms without transforming them! The resulting monotony seals Deleuze and Guattari’s fate as a wooden tongue used by people still living in the ’80s. Such calcification of their concepts is an especially grave injustice because Deleuze quite consciously shifted terminology from book to book to avoid this very outcome. Don’t get me wrong, I am deeply indebted to the early work on Deleuze! I take my insistence on the Marxo-Freudian core of Deleuze and Guattari from one of their earliest Anglophone commentators, Eugene Holland, who I sought out to direct my dissertation. But for me, the Tiqqun line “the revolution was molecular, and so was the counter-revolution” perfectly depicts the problem of advocating molecular politics. Why? Today’s techniques of control are now molecular. The result is that control societies have emptied the molecular thinker’s only bag of tricks (Bifo is a good test case here), which leaves us with a revolution that only goes one direction: backward.

    I am equally dissatisfied by realist Deleuzians who delve deep into the early strata of A Thousand Plateaus and away from the “infinite speed of thought” that motivates What is Philosophy? I’m thinking of the early incorporations of dynamical systems theory, the ’90s astonishment over everything serendipitously looking like a rhizome, the mid-00s emergence of Speculative Realism, and the ongoing “ontological” turn. Anyone who has read Manuel DeLanda will know this exact dilemma of materiality versus thought. He uses examples that slow down Deleuze and Guattari’s concepts to something easily graspable. In his first book, he narrates history as a “robot historian,” and in A Thousand Years of Nonlinear History, he literally traces the last thousand years of economics, biology, and language back to clearly identifiable technological inventions. Such accounts are dangerously compelling due to their lucidity, but they come at a steep cost: android realism dispenses with Deleuze and Guattari’s desiring subject, which is necessary for a theory of revolution by way of the psychoanalytic insistence on the human ability to overcome biological instincts (e.g. Freud’s Instincts and their Vicissitudes and Beyond the Pleasure Principle). Realist interpretations of Deleuze conceive of the subject as fully of this world. And with it, thought all but evaporates under the weight of this world. Deleuze’s Hume book is an early version of this criticism, but the realists have not taken heed. Whether emergent, entangled, or actant, strong realists ignore Deleuze and Guattari’s point in What is Philosophy? that thought always comes from the outside at a moment when we are confronted by something so intolerable that the only thing remaining is to think.

    Galloway: The left has always been ambivalent about media and technology, sometimes decrying its corrosive influence (Frankfurt School), sometimes embracing its revolutionary potential (hippy cyberculture). Still, you ditch technical “acceleration” in favor of “escape.” Can you expand your position on media and technology, by way of Deleuze’s notion of the machinic?

    Culp: Foucault says that an episteme can be grasped as we are leaving it. Maybe we can finally catalogue all of the contemporary positions on technology? The romantic (computer will never capture my soul), the paranoiac (there is an unknown force pulling the strings), the fascist-pessimist (computers will control everything)…

    Deleuze and Guattari are certainly not allergic to technology. My favorite quote actually comes from the Foucault book, in which Deleuze says that “technology is social before it is technical” (6). The lesson we can draw from this is that every social formation draws out different capacities from any given technology. An easy example is from the nomads Deleuze loved so much. Anarcho-primitivists speculate that humans learn oppression with the domestication of animals and settled agriculture during the Neolithic Revolution. Diverging from the narrative, Deleuze celebrates the horse people of the Eurasian steppe described by Arnold Toynbee. Threatened by forces that would require them to change their habitat, Toynbee says, they instead chose to change their habits. The subsequent domestication of the horse did not sow the seeds of the state, which was actually done by those who migrated from the steppes after the last Ice Age to begin wet rice cultivation in alluvial valleys (for more, see James C. Scott’s The Art of Not Being Governed). On the contrary, the new relationship between men and horses allowed nomadism to achieve a higher speed, which was necessary to evade the raiding-and-trading used by padi-states to secure the massive foreign labor needed for rice farming. This is why the nomad is “he who does not move” and not a migrant (A Thousand Plateaus, 381).

    Accelerationism attempts to overcome the capitalist opposition of human and machine through the demand for full automation. As such, it peddles a technological Proudhonism that believes one can select what is good about technology and just delete what is bad. The Marxist retort is that development proceeds by its bad side. So instead of flashy things like self-driving cars, the real dot-communist question is: how will Amazon automate the tedious, low-paying jobs that computers are no good at? What happens to the data entry clerks, abusive-content managers, or help desk technicians? Until it figures out who will empty the recycle bin, accelerationism is only a socialism of the creative class.

    The machinic is more than just machines–it approaches technology as a question of organization. The term was first used by Guattari in a 1968 paper titled “Machine and Structure” that he presented to Lacan’s Freudian School of Paris, a paper that would jumpstart his collaboration with Deleuze. He argues for favoring machine over structure. Structures transform parts of a whole by exchanging or substituting particularities so that every part shares in a general form (in other words, the production of isomorphism). An easy political example is the Leninist Party, which mediates particularized private interests to form them into the general will of a class. Machines instead treat the relationship between things as a problem of communication. The result is the “control and communication” of Norbert Wiener’s cybernetics, which connects distinct things in a circuit instead of implanting a general logic. The word “machine” never really caught on, but the concept has made inroads in the social sciences, where actor-network theory, game theory, behaviorism, systems theory, and other cybernetic approaches have gained acceptance.

    Structure or machine, each engenders a different type of subjectivity, and each realizes a different model of communication. The two are found in A Thousand Plateaus, where Deleuze and Guattari note two different types of state subject formation: social subjection and machinic enslavement (456-460). While it only takes up a few short pages, the distinction is essential to Bernard Stiegler’s work and has been expertly elaborated by Maurizio Lazzarato in the book Signs and Machines. We are all familiar with molar social subjection synonymous with “agency”–it is the power that results from individuals bridging the gap between themselves and broader structures of representation, social roles, and institutional demands. This subjectivity is well outlined by Lacanians and other theorists of the linguistic turn (Virno, Rancière, Butler, Agamben). Missing from their accounts is machinic enslavement, which treats people as simply cogs in the machine. Such subjectivity is largely overlooked because it bypasses existential questions of recognition or self-identity. This is because machinic enslavement operates at the level of the infra-social or pre-individual through the molecular operators of unindividuated affects, sensations, desires not assigned to a subject. Offering a concrete example, Deleuze and Guattari reference Mumford’s megamachines of surplus societies that create huge landworks by treating humans as mere constituent parts. Capitalism revived the megamachine in the sixteenth century, and more recently, we have entered the “third age” of enslavement marked by the development of cybernetic and informational machines. In place of the pyramids are technical machines that use humans at places in technical circuits where computers are incapable or too costly, e.g. Amazon’s Mechanical Turk.

    I should also clarify that not all machines are bad. Rather, Dark Deleuze only trusts one kind of machine, the war machine. And war machines follow a single trajectory–a line of flight out of this world. A major task of the war machine conveniently aligns with my politics of techno-anarchism: to blow apart the networks of communication created by the state.

    Galloway: I can’t resist a silly pun, cannon of joy. Part of your project is about resisting a certain masculinist tendency. Is that a fair assessment? How do feminism and queer theory influence your project?

    Culp: Feminism is hardwired into the tagline for Dark Deleuze through a critique of emotional labor and the exhibition of bodies–“A revolutionary Deleuze for today’s digital world of compulsory happiness, decentralized control, and overexposure.” The major thread I pull through the book is a materialist feminist one: something intolerable about this world is that it demands we participate in its accumulation and reproduction. So how about a different play on words: Sara Ahmed’s feminist killjoy, who refuses the sexual contract that requires women to appear outwardly grateful and agreeable? Or better yet, Joy Division? The name would associate the project with post-punk, its conceptual attack on the mainstream, and the band’s nod to the sexual labor depicted in the novella House of Dolls.

    My critique of accumulation is also a media argument about connection. The most popular critics of ‘net culture are worried that we are losing ourselves. So on the one hand, we have Sherry Turkle, who is worried that humans are becoming isolated in a state of being “alone-together”; and on the other, there is Bernard Stiegler, who thinks that the network supplants important parts of what it means to be human. I find this kind of critique socially conservative. It also victim-blames those who use social media the most. Recall the countless articles attacking women who take selfies as part of a self-care regimen or teens who creatively evade parental authority. I’m more interested in the critique of early ’90s ‘net culture and its enthusiasm for the network. In general, I argue that network-centric approaches are now the dominant form of power. As such, I am much more interested in how the rhizome prefigures the digitally-coordinated networks of exploitation that have made Apple, Amazon, and Google into the world’s most powerful corporations. While not a feminist issue on its face, it’s easy to see feminism’s relevance when we consider the gendered division of labor that usually makes women the employees of choice for low-paying jobs in electronics manufacturing, call centers, and other digital industries.

    Lastly, feminism and queer theory explicitly meet in my critique of reproduction. A key argument of Deleuze and Guattari in Anti-Oedipus is the auto-production of the real, which is to say, we already live in a “world without us.” My argument is that we need to learn how to hate some of the things it produces. Of course, this is a reworked critique of capitalist alienation and exploitation, which is a system that gives to us (goods and the wage) only because it already stole them behind our back (restriction from the means of subsistence and surplus value). Such ambivalence is the everyday reality of the maquiladora worker who needs her job but may secretly hope that all the factories burn to the ground. Such degrading feelings are the result of the compromises we make to reproduce ourselves. In the book, I give voice to them by fusing together David Halperin and Valerie Traub’s notion of gay shame acting as a solvent to whatever binds us to identity and Deleuze’s shame at not being able to prevent the intolerable. But feeling shame is not enough. To complete the argument, we need to draw out the queer feminist critique of reproduction latent in Marx and Freud. Détourning an old phrase: direct action begins at the point of reproduction. My first impulse is to rely on the punk rock attitude of Lee Edelman and Paul Preciado’s indictment of reproduction. But you are right that they have their masculinist moments, so what we need is something more post-punk–a little less aggressive and a lot more experimental. Hopefully Dark Deleuze is that.

    Galloway: Edelman’s “fuck Annie” is one of the best lines in recent theory. “Fuck the social order and the Child in whose name we’re collectively terrorized; fuck Annie; fuck the waif from Les Mis; fuck the poor, innocent kid on the Net; fuck Laws both with capital ls and small; fuck the whole network of Symbolic relations and the future that serves as its prop” (No Future, 29). Your book claims, in essence, that the Fuck Annies are more interesting than the Aleatory Materialists. But how can we escape the long arm of Lucretius?

    Culp: My feeling is that the politics of aleatory materialism remains ambiguous. Beyond the literal meaning of “joy,” there are important feminist takes on the materialist Spinoza of the encounter that deserve our attention. Isabelle Stengers’s work is among the most comprehensive, though the two most famous are probably Donna Haraway’s cyborg feminism and Karen Barad’s agential realism. Curiously, while New Materialism has been quite a boon for the art and design world, its socio-political stakes have never been more uncertain. One would hope that appeals to matter would lend philosophical credence to topical events such as #blacklivesmatter. Yet for many, New Materialism has simply led to a new formalism focused on material forms or realist accounts of physical systems meant to eclipse the “epistemological excesses” of post-structuralism. This divergence was not lost on commentators in the most recent issue of October, which functioned as a sort of referendum on New Materialism. On the one hand, the issue included a generous accounting of the many avenues artists have taken in exploring various “new materialist” directions. Of those, I most appreciated Mel Chen’s reminder that materialism cannot serve as a “get out of jail free card” on the history of racism, sexism, ableism, and speciesism. On the other, it included the first sustained attack on New Materialism by fellow travelers. Certainly the New Materialist stance of seeing the world from the perspective of “real objects” can be valuable, but only if it does not exclude old materialism’s politics of labor. I draw from Deleuzian New Materialist feminists in my critique of accumulation and reproduction, but only after short-circuiting their world-building. This is a move I learned from Sue Ruddick, whose Theory, Culture & Society article on the affect of the philosopher’s scream is an absolute tour de force.
And then there is Graham Burnett’s remark that recent materialisms are like “Etsy kissed by philosophy.” The phrase perfectly crystallizes the controversy, but it might be too hot to touch for at least a decade…

    Galloway: Let’s focus more on the theme of affirmation and negation, since the tide seems to be changing. In recent years, a number of theorists have turned away from affirmation toward a different set of vectors such as negation, eclipse, extinction, or pessimism. Have we reached peak affirmation?

    Culp: We should first nail down what affirmation means in this context. There is the metaphysical version of affirmation, such as Foucault’s proud title as a “happy positivist.” In this declaration in Archaeology of Knowledge and “The Order of Discourse,” he is not claiming to be a logical positivist. Rather, Foucault is distinguishing his approach from Sartrean totality, transcendentalism, and genetic origins (his secondary target being the reading-between-the-lines method of Althusserian symptomatic reading). He goes on to formalize this disagreement in his famous statement on the genealogical method, “Nietzsche, Genealogy, History.” Despite being an admirer of Sartre, Deleuze shares this affirmative metaphysics with Foucault, which commentators usually describe as an alternative to the Hegelian system of identity, contradiction, determinate negation, and sublation. Nothing about this “happily positivist” system forces us to be optimists. In fact, it only raises the stakes for locating how all the non-metaphysical senses of the negative persist.

    Affirmation could be taken to imply a simple “more is better” logic as seen in Assemblage Theory and Latourian Compositionalism. Behind this logic is a principle of accumulation that lacks a theory of exploitation and fails to consider the power of disconnection. The Spinozist definition of joy does little to dispel this myth, but it is not like either project has revolutionary political aspirations. I think we would be better served to follow the currents of radical political developments over the last twenty years, which have been following an increasingly negative path. One part of the story is a history of failure. The February 15, 2003 global demonstration against the Iraq War was the largest protest in history but had no effect on the course of the war. More recently, the election of democratic socialist governments in Europe has done little to stave off austerity, even as economists publicly describe it as a bankrupt model destined to deepen the crisis. I actually find hope in the current circuit of struggle and think that its lack of alter-globalization world-building aspirations might be a plus. My cues come from the anarchist black bloc and those of the post-Occupy generation who would rather not pose any demands. This is why I return to the late Deleuze of the “control societies” essay and his advice to scramble the codes, to seek out spaces where nothing needs to be said, and to establish vacuoles of non-communication. Those actions feed the subterranean source of Dark Deleuze‘s darkness and the well from which comes hatred, cruelty, interruption, un-becoming, escape, cataclysm, and the destruction of worlds.

    Galloway: Does hatred for the world do a similar work for you that judgment or moralism does in other writers? How do we avoid the more violent and corrosive forms of hate?

    Culp: Writer Antonin Artaud’s attempt “to have done with the judgment of God” plays a crucial role in Dark Deleuze. Not just any specific authority but whatever gods are left. The easiest way to summarize this is “the three deaths.” Deleuze already makes note of these deaths in the preface to Difference and Repetition, but it only became clear to me after I read Gregory Flaxman’s Gilles Deleuze and the Fabulation of Philosophy. We all know of Nietzsche’s Death of God. With it, Nietzsche notes that God no longer serves as the central organizing principle for us moderns. Important to Dark Deleuze is Pierre Klossowski’s Nietzsche, who is part of a conspiracy against all of humanity. Why? Because even as God is dead, humanity has replaced him with itself. Next comes the Death of Man, which we can lay at the feet of Foucault. More than any other text, The Order of Things demonstrates how the birth of modern man was an invention doomed to fail. So if that death is already written in sand about to be washed away, then what comes next? Here I turn to the world, worlding, and world-building. It seems obvious when looking at the problems that plague our world: global climate change, integrated world capitalism, and other planet-scale catastrophes. We could try to deal with each problem one by one. But why not pose an even more radical proposition? What if we gave up on trying to save this world? We are already awash in sci-fi that tries to do this, though most of it is incredibly socially conservative. Perhaps now is the time for thinkers like us to catch up. Fragments of Deleuze already lay out the terms of the project. He ends the preface to Difference and Repetition by assigning philosophy the task of writing apocalyptic science fiction. Deleuze’s book opens with lightning across the black sky and ends with the world swelling into a single ocean of excess. Dark Deleuze collects those moments and names them the Death of This World.

    Galloway: Speaking of climate change, I’m reminded how ecological thinkers can be very religious, if not in word then in deed. Ecologists like to critique “nature” and tout their anti-essentialist credentials, while at the same time promulgating tellurian “change” as necessary, even beneficial. Have they simply replaced one irresistible force with another? But your “hatred of the world” follows a different logic…

    Culp: Irresistible indeed! Yet it is very dangerous to let the earth have the final say. Not only does psychoanalysis teach us that it is necessary to buck the judgment of nature; the is/ought distinction at the philosophical core of most ethical thought also refuses to let natural fact define the good. I introduce hatred to develop a critical distance from what is, and, as such, hatred is also a reclamation of the future in that it is a refusal to allow what-is to prevail over what-could-be. Such an orientation to the future is already in Deleuze and Guattari. What else is de-territorialization? I just give it a name. They have another name for what I call hatred: utopia.

    Speaking of utopia, Deleuze and Guattari’s definition of utopia in What is Philosophy? as simultaneously now-here and no-where is often used by commentators to justify odd compromise positions with the present state of affairs. The immediate reference is Samuel Butler’s 1872 book Erewhon, a backward spelling of nowhere, which Deleuze also references across his other work. I would imagine most people would assume it is a utopian novel in the vein of Edward Bellamy’s Looking Backward. And Erewhon does borrow from the conventions of utopian literature, but only to skewer them with satire. A closer examination reveals that the book is really a jab at religion, Victorian values, and the British colonization of New Zealand! So if there is anything that the now-here of Erewhon has to contribute to utopia, it is that the present deserves our ruthless criticism. So instead of being a simultaneous now-here and no-where, hatred follows from Deleuze and Guattari’s suggestion in A Thousand Plateaus to “overthrow ontology” (25). Therefore, utopia is only found in Erewhon by taking leave of the now-here to get to no-where.

    Galloway: In Dark Deleuze you talk about avoiding “the liberal trap of tolerance, compassion, and respect.” And you conclude by saying that the “greatest crime of joyousness is tolerance.” Can you explain what you mean, particularly for those who might value tolerance as a virtue?

    Culp: Among the many followers of Deleuze today, there are a number of liberal Deleuzians. Perhaps the biggest stronghold is in political science, where there is a committed group of self-professed radical liberals. Another strain bridges Deleuze with the liberalism of John Rawls. I was a bit shocked to discover both of these approaches, but I suppose it was inevitable given liberalism’s ability to assimilate nearly any form of thought.

    Herbert Marcuse recognized “repressive tolerance” as the incredible power of liberalism to justify the violence of positions clothed as neutral. The examples Marcuse cites are governments that say they respect democratic liberties because they allow political protest, even as they ignore protesters by labeling them a special interest group. For those of us who have seen university administrations calmly collect student demands, set up dead-end committees, and slap pictures of protestors on promotional materials as a badge of diversity, it should be no surprise that Marcuse dedicated the essay to his students. An important elaboration on repressive tolerance is Wendy Brown’s Regulating Aversion. She argues that imperialist US foreign policy drapes itself in tolerance discourse. This helps diagnose why liberal feminist groups lined up behind the US invasion of Afghanistan (the Taliban is patriarchal) and explains how a mere utterance of ISIS inspires even the most progressive liberals to support outrageous war budgets.

    Because of their commitment to democracy, Brown and Marcuse can only qualify liberalism’s universal procedures for an ethical subject. Each criticizes certain uses of tolerance but does not want to dispense with it completely. Deleuze’s hatred of democracy makes it much easier for me. Instead, I embrace the perspective of a communist partisan because communists fight from a different structural position than that of the capitalist.

    Galloway: Speaking of structure and position, you have a section in the book on asymmetry. Most authors avoid asymmetry, instead favoring concepts like exchange or reciprocity. I’m thinking of texts on “the encounter” or “the gift,” not to mention dialectics itself as a system of exchange. Still you want to embrace irreversibility, incommensurability, and formal inoperability–why?

    Culp: There are a lot of reasons to prefer asymmetry, but for me, it comes down to a question of political strategy.

    First, a little background. Deleuze and Guattari’s critique of exchange is important to Anti-Oedipus, where it is staged as a challenge to Claude Lévi-Strauss. This is why they shift from the traditional Marxist analysis of the mode of production to an anthropological study of anti-production, for which they use the work of Pierre Clastres and Georges Bataille to outline non-economic forms of power that prevented the emergence of capitalism. Contemporary anthropologists have renewed this line of inquiry, for instance, Eduardo Viveiros de Castro, who argues in Cannibal Metaphysics that cosmologies differ radically enough between peoples that they essentially live in different worlds. The cannibal, he shows, is not the subject of a mode of production but a mode of predation.

    Those are not the stakes that interest me the most. Consider instead the consequence of ethical systems built on the gift and political systems of incommensurability. The ethical approach is exemplified by Derrida, whose responsibility to the other draws from the liberal theological tradition of accepting the stranger. While there is distance between self and other, it is a difference that is bridged through the democratic project of radical inclusion, even if such incorporation can only be aporetically described as a necessary-impossibility. In contrast, the politics of asymmetry uses incommensurability to widen the chasm opened by difference. It offers a strategy for generating antagonism without the formal equivalence of dialectics and provides an image of revolution based on fundamental transformation. The former can be seen in the inherent difference between the perspective of labor and the perspective of capital, whereas the latter is a way out of what Guy Debord calls “a perpetual present.”

    Galloway: You are exploring a “dark” Deleuze, and I’m reminded how the concepts of darkness and blackness have expanded and interwoven in recent years in everything from afro-pessimism to black metal theory (which we know is frighteningly white). How do you differentiate between darkness and blackness? Or perhaps that’s not the point?

    Culp: The writing on Deleuze and race is uneven. A lot of it can be blamed on the imprecise definition of becoming. The most vulgar version of becoming is embodied by neoliberal subjects who undergo an always-incomplete process of coming more into being (finding themselves, identifying their capacities, commanding their abilities). The molecular version is a bit better in that it theorizes subjectivity as developing outside of or in tension with identity. Yet the prominent uses of becoming and race rarely escaped the postmodern orbit of hybridity, difference, and inclusive disjunction–the White Man’s face as master signifier, miscegenation as anti-racist practice, “I am all the names of history.” You are right to mention afro-pessimism, as it cuts a new way through the problem. As I’ve written elsewhere, Frantz Fanon describes being caught between “infinity and nothingness” in his famous chapter on the fact of blackness in Black Skin White Masks. The position of infinity is best championed by Fred Moten, whose black fugitive is the effect of an excessive vitality that has survived five hundred years of captivity. He catches fleeting moments of it in performances of jazz, art, and poetry. This position fits well with the familiar figures of Deleuzo-Guattarian politics: the itinerant nomad, the foreigner speaking in a minor tongue, the virtuoso trapped in-between lands. In short: the bastard combination of two or more distinct worlds. In contrast, afro-pessimism is not the opposite of the black radical tradition but its outside. According to afro-pessimism, the definition of blackness is nothing but the social death of captivity. Remember the scene of subjection mentioned by Fanon? During that nauseating moment he is assailed by a whole series of cultural associations attached to him by strangers on the street. 
“I was battered down by tom-toms, cannibalism, intellectual deficiency, fetishism, racial defects, slave-ships, and above all else, above all: ‘Sho’ good eatin’” (112). The lesson that afro-pessimism draws from this scene is that cultural representations of blackness only reflect back the interior of white civil society. The conclusion is that combining social death with a culture of resistance, such as the one embodied by Fanon’s mentor Aimé Césaire, is a trap that leads only back to whiteness. Afro-pessimism thus follows the alternate route of darkness. It casts a line to the outside through an un-becoming that dissolves the identity we are given as a token for the shame of being a survivor.

    Galloway: In a recent interview the filmmaker Haile Gerima spoke about whiteness as “realization.” By this he meant both realization as such–self-realization, the realization of the self, the ability to realize the self–but also the more nefarious version as “realization through the other.” What’s astounding is that one can replace “through” with almost any other preposition–for, against, with, without, etc.–and the dynamic still holds. Whiteness is the thing that turns everything else, including black bodies, into fodder for its own realization. Is this why you turn away from realization toward something like profanation? And is darkness just another kind of whiteness?

    Culp: Perhaps blackness is to the profane as darkness is to the outside. What is black metal if not a project of political-aesthetic profanation? But as other commentators have pointed out, the politics of black metal is ultimately telluric (e.g. Benjamin Noys’s “‘Remain True to the Earth!’: Remarks on the Politics of Black Metal”). The left wing of black metal is anarchist anti-civ and the right is fascist-nativist. Both trace authority back to the earth that they treat as an ultimate judge usurped by false idols.

    The process follows what Badiou calls “the passion for the real,” his diagnosis of the Twentieth Century’s obsession with true identity, false copies, and inauthentic fakes. His critique equally applies to Deleuzian realists. This is why I think it is essential to return to Deleuze’s work on cinema and the powers of the false. One key example is Orson Welles’s F for Fake. Yet my favorite is the noir novel, which he praises in “The Philosophy of Crime Novels.” The noir protagonist never follows in the footsteps of Sherlock Holmes or other classical detectives, whose search for the real happens by sniffing out the truth through a scientific attunement of the senses. Rather, the dirty streets lead the detective down enough dead ends that he proceeds by way of a series of errors. What noir reveals is that crime and the police have “nothing to do with a metaphysical or scientific search for truth” (82). The truth is rarely decisive in noir because breakthroughs only come by way of “the great trinity of falsehood”: informant-corruption-torture. The ultimate gift of noir is a new vision of the world whereby honest people are just dupes of the police because society is fueled by falsehood all the way down.

    To specify the descent into darkness, I use darkness to signify the outside. The outside has many names: the contingent, the void, the unexpected, the accidental, the crack-up, the catastrophe. The dominant affects associated with it are anticipation, foreboding, and terror. To give a few examples, H. P. Lovecraft’s scariest monsters are those so alien that characters cannot describe them with any clarity, Maurice Blanchot’s disaster is the Holocaust as well as any other event so terrible that it interrupts thinking, and Don DeLillo’s “airborne toxic event” is an incident so foreign that it can only be described in the most banal terms. Of Deleuze and Guattari’s many different bodies without organs, one of the conservative varieties comes from a Freudian model of the psyche as a shell meant to protect the ego from outside perturbations. We all have these protective barriers made up of habits that help us navigate an uncertain world–that is the purpose of Guattari’s ritornello, that little ditty we whistle to remind us of the familiar even when we travel to strange lands. There are two parts that work together, the refrain and the strange land. The refrains have only grown, yet the journeys seem to have ended.

    I’ll end with an example close to my own heart. Deleuze and Guattari are being used to support new anarchist “pre-figurative politics,” which is defined as seeking to build a new society within the constraints of the now. The consequence is that the political horizon of the future gets collapsed into the present. This is frustrating for someone like me, who holds out hope for a revolutionary future that puts an end to the million tiny humiliations that make up everyday life. I like J. K. Gibson-Graham’s feminist critique of political economy, but community currencies, labor time banks, and worker’s coops are not my image of communism. This is why I have drawn on the gothic for inspiration. A revolution that emerges from the darkness holds the apocalyptic potential of ending the world as we know it.

    Works Cited

    • Ahmed, Sara. The Promise of Happiness. Durham, NC: Duke University Press, 2010.
    • Artaud, Antonin. To Have Done With The Judgment of God. 1947. Live play, Boston: Exploding Envelope, c1985. https://www.youtube.com/watch?v=VHtrY1UtwNs.
    • Badiou, Alain. The Century. 2005. Cambridge, UK: Polity Press, 2007.
    • Barad, Karen. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham, NC: Duke University Press, 2007.
    • Bataille, Georges. “The Notion of Expenditure.” 1933. In Visions of Excess: Selected Writings, 1927-1939, translated by Allan Stoekl, Carl R. Lovitt, and Donald M. Leslie Jr., 167-81. Minneapolis: University of Minnesota Press, 1985.
    • Bellamy, Edward. Looking Backward: 2000-1887. Boston: Ticknor & Co., 1888.
    • Blanchot, Maurice. The Writing of the Disaster. 1980. Translated by Ann Smock. Lincoln, NE: University of Nebraska Press, 1995.
    • Brown, Wendy. Regulating Aversion: Tolerance in the Age of Identity and Empire. Princeton, N.J.: Princeton University Press, 2006.
    • Burnett, Graham. “A Questionnaire on Materialisms.” October 155 (2016): 19-20.
    • Butler, Samuel. Erewhon: or, Over the Range. 1872. London: A.C. Fifield, 1910. http://www.gutenberg.org/files/1906/1906-h/1906-h.htm.
    • Chen, Mel Y. “A Questionnaire on Materialisms.” October 155 (2016): 21-22.
    • Clastres, Pierre. Society against the State. 1974. Translated by Robert Hurley and Abe Stein. New York: Zone Books, 1987.
    • Culp, Andrew. Dark Deleuze. Minneapolis: University of Minnesota Press, 2016.
    • ———. “Blackness.” New York: Hostis, 2015.
    • Debord, Guy. The Society of the Spectacle. 1967. Translated by Fredy Perlman et al. Detroit: Red and Black, 1977.
    • DeLanda, Manuel. A Thousand Years of Nonlinear History. New York: Zone Books, 2000.
    • ———. War in the Age of Intelligent Machines. New York: Zone Books, 1991.
    • DeLillo, Don. White Noise. New York: Viking Press, 1985.
    • Deleuze, Gilles. Cinema 2: The Time-Image. 1985. Translated by Hugh Tomlinson and Robert Galeta. Minneapolis: University of Minnesota Press, 1989.
    • ———. “The Philosophy of Crime Novels.” 1966. Translated by Michael Taormina. In Desert Islands and Other Texts, 1953-1974, 80-85. New York: Semiotext(e), 2004.
    • ———. Difference and Repetition. 1968. Translated by Paul Patton. New York: Columbia University Press, 1994.
    • ———. Empiricism and Subjectivity: An Essay on Hume’s Theory of Human Nature. 1953. Translated by Constantin V. Boundas. New York: Columbia University Press, 1995.
    • ———. Foucault. 1986. Translated by Seán Hand. Minneapolis: University of Minnesota Press, 1988.
    • Deleuze, Gilles, and Félix Guattari. Anti-Oedipus. 1972. Translated by Robert Hurley, Mark Seem, and Helen R. Lane. Minneapolis: University of Minnesota Press, 1977.
    • ———. A Thousand Plateaus. 1980. Translated by Brian Massumi. Minneapolis: University of Minnesota Press, 1987.
    • ———. What Is Philosophy? 1991. Translated by Hugh Tomlinson and Graham Burchell. New York: Columbia University Press, 1994.
    • Derrida, Jacques. The Gift of Death and Literature in Secret. Translated by David Wills. Chicago: University of Chicago Press, 2007; second edition.
    • Edelman, Lee. No Future: Queer Theory and the Death Drive. Durham, N.C.: Duke University Press, 2004.
    • Fanon, Frantz. Black Skin, White Masks. 1952. Translated by Charles Lam Markmann. New York: Grove Press, 1968.
    • Flaxman, Gregory. Gilles Deleuze and the Fabulation of Philosophy. Minneapolis: University of Minnesota Press, 2011.
    • Foucault, Michel. The Archaeology of Knowledge and the Discourse on Language. 1971. Translated by A.M. Sheridan Smith. New York: Pantheon Books, 1972.
    • ———. “Nietzsche, Genealogy, History.” 1971. In Language, Counter-Memory, Practice: Selected Essays and Interviews, translated by Donald F. Bouchard and Sherry Simon, 113-38. Ithaca, N.Y.: Cornell University Press, 1977.
    • ———. The Order of Things. 1966. New York: Pantheon Books, 1970.
    • Freud, Sigmund. Beyond the Pleasure Principle. 1920. Translated by James Strachey. London: Hogarth Press, 1955.
    • ———. “Instincts and their Vicissitudes.” 1915. Translated by James Strachey. In Standard Edition of the Complete Psychological Works of Sigmund Freud 14, 111-140. London: Hogarth Press, 1957.
    • Gerima, Haile. “Love Visual: A Conversation with Haile Gerima.” Interview by Sarah Lewis and Dagmawi Woubshet. Aperture, Feb 23, 2016. http://aperture.org/blog/love-visual-haile-gerima/.
    • Gibson-Graham, J.K. The End of Capitalism (As We Knew It): A Feminist Critique of Political Economy. Hoboken: Blackwell, 1996.
    • ———. A Postcapitalist Politics. Minneapolis: University of Minnesota Press, 2006.
    • Guattari, Félix. “Machine and Structure.” 1968. Translated by Rosemary Sheed. In Molecular Revolution: Psychiatry and Politics, 111-119. Harmondsworth, Middlesex: Penguin, 1984.
    • Halperin, David, and Valerie Traub. “Beyond Gay Pride.” In Gay Shame, 3-40. Chicago: University of Chicago Press, 2009.
    • Haraway, Donna. Simians, Cyborgs, and Women: The Reinvention of Nature. New York: Routledge, 1991.
    • Klossowski, Pierre. “Circulus Vitiosus.” Translated by Joseph Kuzma. The Agonist: A Nietzsche Circle Journal 2, no. 1 (2009): 31-47.
    • ———. Nietzsche and the Vicious Circle. 1969. Translated by Daniel W. Smith. Chicago: University of Chicago Press, 1997.
    • Lazzarato, Maurizio. Signs and Machines. 2010. Translated by Joshua David Jordan. Los Angeles: Semiotext(e), 2014.
    • Marcuse, Herbert. “Repressive Tolerance.” In A Critique of Pure Tolerance, 81-117. Boston: Beacon Press, 1965.
    • Mauss, Marcel. The Gift: The Form and Reason for Exchange in Archaic Societies. 1950. Translated by W. D. Halls. New York: Routledge, 1990.
    • Moten, Fred. In the Break: The Aesthetics of the Black Radical Tradition. Minneapolis: University of Minnesota Press, 2003.
    • Mumford, Lewis. Technics and Human Development. San Diego: Harcourt Brace Jovanovich, 1967.
    • Noys, Benjamin. “‘Remain True to the Earth!’: Remarks on the Politics of Black Metal.” In: Hideous Gnosis: Black Metal Theory Symposium 1 (2010): 105-128.
    • Preciado, Paul. Testo Junkie: Sex, Drugs, and Biopolitics in the Pharmacopornographic Era. 2008. Translated by Bruce Benderson. New York: The Feminist Press, 2013.
    • Ruddick, Susan. “The Politics of Affect: Spinoza in the Work of Negri and Deleuze.” Theory, Culture & Society 27, no. 4 (2010): 21-45.
    • Scott, James C. The Art of Not Being Governed: An Anarchist History of Upland Southeast Asia. New Haven: Yale University Press, 2009.
    • Sexton, Jared. “Afro-Pessimism: The Unclear Word.” In Rhizomes 29 (2016). http://www.rhizomes.net/issue29/sexton.html.
    • ———. “Ante-Anti-Blackness: Afterthoughts.” In Lateral 1 (2012). http://lateral.culturalstudiesassociation.org/issue1/content/sexton.html.
    • ———. “The Social Life of Social Death: On Afro-Pessimism and Black Optimism.” In Intensions 5 (2011). http://www.yorku.ca/intent/issue5/articles/jaredsexton.php.
    • Stiegler, Bernard. For a New Critique of Political Economy. Cambridge: Polity Press, 2010.
    • ———. Technics and Time 1: The Fault of Epimetheus. 1994. Translated by George Collins and Richard Beardsworth. Redwood City, CA: Stanford University Press, 1998.
    • Tiqqun. “How Is It to Be Done?” 2001. In Introduction to Civil War. 2001. Translated by Alexander R. Galloway and Jason E. Smith. Los Angeles, Calif.: Semiotext(e), 2010.
    • Toynbee, Arnold. A Study of History. Abridgement of Volumes I-VI by D.C. Somervell. London: Oxford University Press, 1946.
    • Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books, 2012.
    • Viveiros de Castro, Eduardo. Cannibal Metaphysics: For a Post-structural Anthropology. 2009. Translated by Peter Skafish. Minneapolis, Minn.: Univocal, 2014.
    • Villani, Arnaud. La guêpe et l’orchidée. Essai sur Gilles Deleuze. Paris: Éditions de Belin, 1999.
    • Welles, Orson, dir. F for Fake. 1974. New York: Criterion Collection, 2005.
    • Wiener, Norbert. Cybernetics: Or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press, 1948; second revised edition.
    • Williams, Alex, and Nick Srnicek. “#ACCELERATE MANIFESTO for an Accelerationist Politics.” Critical Legal Thinking. 2013. http://criticallegalthinking.com/2013/05/14/accelerate-manifesto-for-an-accelerationist-politics/.

    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2006), Gaming: Essays in Algorithmic Culture (University of Minnesota, 2006); The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here in 2014. He is a frequent contributor to The b2 Review “Digital Studies.”

    Andrew Culp is a Visiting Assistant Professor of Rhetoric Studies at Whitman College. He specializes in cultural-communicative theories of power, the politics of emerging media, and gendered responses to urbanization. His work has appeared in Radical Philosophy, Angelaki, Affinities, and other venues. He previously pre-reviewed Galloway’s Laruelle: Against the Digital for The b2 Review “Digital Studies.”

    Back to the essay

  • Richard Hill — The Root Causes of Internet Fragmentation

    Richard Hill — The Root Causes of Internet Fragmentation


    a review of Scott Malcomson, Splinternet: How Geopolitics and Commerce Are Fragmenting the World Wide Web
      (OR Books, 2016)
    by Richard Hill
    ~

    The implicit premise of this valuable book is that “we study the past to understand the present; we understand the present to guide the future.” In that light, the book makes a significant contribution by offering a sound and detailed historical survey of aspects of the Internet which are neither well known nor easily accessible outside the realms of dedicated internet research. However, as explained below, the author has not covered some important aspects of the past, and thus the work is incomplete as a guide to the future. This should not be taken as criticism, but as a call for the author, or other scholars, to complete the work.

    The book starts by describing how modern computers and computer networks evolved from the industrialization of war and in particular due to the advantages that could be gained by automating the complex mathematical calculations required for ballistics on the one hand (computers) and by speeding up communications between elements of armed forces on the other hand (networks). Given the effectiveness of ICTs for war, belligerents before, during, and after World War II heavily funded research and development of those technologies in the military context, even if much of the research was outsourced to the private sector.

    Malcomson documents how the early founders of what we now call computer science were based in the USA and were closely associated with US military efforts: “the development of digital computing was principally an unintended byproduct of efforts to improve the accuracy of gunfire against moving targets” (49).

    Chapter 1 ends with an account of how Cold War military concerns (especially so-called mutual assured destruction by nuclear weapons) led to the development of packet switched networks in order to interconnect powerful computers: ARPANET, which evolved to become the Internet.

    Chapter 2 explores a different, but equally important, facet of Internet history: the influence of the anti-authoritarian hacker culture, which started with early computer enthusiasts, and fully developed in the 1970s and 1980s, in particular in the West Coast (most famously documented in Steven Levy’s 1984 book Hackers: Heroes of the Computer Revolution). The book explains the origins of the venture capitalism that largely drove the development of ICTs (including the Internet) as private risk capital replaced state funding for research and development in ICTs.

    The book documents the development of the geek culture’s view that computers and networks should be “an instrument of personal liberation and create a frictionless, alternative world free from the oppressing state” (101). Malcomson explains how this led to the belief that the Internet should not be subject to normal laws, culminating in Barlow’s well-known utopian “Declaration of the Independence of Cyberspace,” and explains how such ideas could not, and did not, survive. The chapter concludes: “The subculture had lost the battle. Governments and large corporations would now shape the Internet” (137). But, as the book notes later (171), it was in fact primarily one government, the US government, that shaped the Internet. And, as Shawn Powers and Michael Jablonski explain in The Real Cyberwar, the US used its influence to further its own geopolitical and global economic goals.

    Chapter 3 explores the effects of globalization, the weakening of American power, the rise of competing powers, and the resulting tensions regarding US dominance of ICTs in general and the Internet in particular. It also covers the rise of policing of the Internet induced by fear of “terrorists, pedophiles, drug dealers, and money launderers” (153).

    We have come full circle: a technology initially designed for war is now once again used by the military to achieve its aims, the so-called “war on terror.” So there is a tension between three different forces, all of which were fundamental to the development of ICTs (including the Internet): the government, military, and security apparatus; more-or-less anarchic technologists; and dominant for-profit companies (which may have started small, but can quickly become very large and dominant – at least for a few years until they are displaced by newcomers).

    As the subtitle indicates, the book is mostly about the World Wide Web, so some of the other aspects of the history of the Internet are not covered. For example, there is no mention of the very significant commercial and political battles that took place between proponents of the Internet and proponents of the Open Systems Interconnection (OSI) suite of standards; this is a pity, because the residual effects of those battles are still being felt today. Nor does the book explore the reasons for and effects of the transition of the management of the Internet from the US Department of Defense to the US Department of Commerce (even if it correctly notes that the chief interest of the Clinton administration “was in a thriving Internet that would lead to new industries and economic growth” [133]).

    Malcomson explains well how there were four groups competing for influence in the late 1990s: technologists, the private sector, the US government, and other governments, and notes how the US government was in an impossible situation, since it could not credibly argue simultaneously that other governments (or intergovernmental organizations such as the ITU) should not influence the Internet while it itself formally supervised the management and administration of the domain name system (DNS). However, he does not explain the origins of the DNS or its subsequent development, nor how its management and administration were unilaterally hijacked by the US, leading to much of the international tension that has bedeviled discussions of Internet governance since 1998.

    Regarding the World Wide Web, the book does not discuss how the end-to-end principle and its premise of secure end devices resulted in unforeseen consequences (such as spam, cybercrime, and cyberattacks) when unsecure personal computers became the dominant device connected via the Internet. Nor does it discuss how the lack of billing mechanisms in the Internet protocol suite has led to the rise of advertising as the sole revenue generation mechanism and the consequences of that development.

    The book analyzes the splintering (elsewhere called fragmentation) brought about by the widespread adoption of proprietary operating systems and their associated “apps,” and by mass surveillance. As Malcomson puts the matter, mass surveillance “was fatal to the universality of the web, because major web companies were and are global but cannot be both global and subject to the intricate agendas of US intelligence and defense institutions, whose purpose is to defend national interests, not universal interests” (160).

    However, the book does not discuss in any depth other sources of splintering, such as calls by some governments for national control over some portions of the Internet, or violations of network neutrality, or zero rating. Yet the book notes that the topic of network neutrality had been raised by Vice President Gore as early as 1993: “Without provisions for open access, the companies that own the networks could use their control of the networks to ensure that their customers only have access to their programming. We have already seen cases where cable company owners have used their monopoly control of their networks to exclude programming that competes with their own. Our legislation will contain strong safeguards against such behavior” (124). As we know, the legislation called for in the last sentence was never enacted, and it was only in 2015 that the Federal Communications Commission imposed network neutrality. Malcomson could have used his deep knowledge of the history of the Internet to explain why Gore’s vision was not realized, no doubt because of the tensions mentioned above between the groups competing for influence.

    The book concludes that the Internet will increasingly cease to be “an entirely cross border enterprise”(190), but that the benefits of interoperability will result in a global infrastructure being preserved, so that “a fragmented Internet will retain aspects of universality” (197).

    As mentioned above, the book provides an excellent account of much of the historical origins of the World Wide Web and the disparate forces involved in its creation. The book would be even more valuable if it built on that account to analyze more deeply and put into context trends (which it does mention) other than splintering, such as the growing conflict between Apple, Google et al. who want no restrictions on data collection and encryption (so that they can continue to collect and monetize data), governments who want no encryption so they can censor and/or surveil, and governments who recognize that privacy is a human right, that privacy rules should be strengthened, and that end-users should have full ownership and control of their data.

    Readers keen to understand the negative economic impacts of the Internet should read Dan Schiller’s Digital Depression, and readers keen to understand the negative impacts of the Internet on democracy should read Robert McChesney’s Digital Disconnect. This might lead some to believe that we have wound up exactly where we didn’t want to be: “government-driven, corporate-interest driven, profit-driven, monopoly-driven.” The citation (from Lyman Chapin, one of the founders of the Internet Society), found on p. 132 of Malcomson’s book, dates back to 1991, and it reflects what the technologists of the time wanted to avoid.

    To conclude, it is worth noting the quotation on page 57 from Norbert Wiener: “Just as the skilled carpenter, the skilled mechanic, the skilled dressmaker have in some degree survived the first industrial revolution, so the skilled scientist and the skilled administrator might survive the second [the cybernetic revolution]. However, taking the second revolution as accomplished, the average human of mediocre attainments has nothing to sell that is worth anyone’s money to buy. The answer, of course, is to have a society based on human values other than buying and selling.”

    Wiener thus foresaw the current fundamental trends and dilemmas that have been well documented and analyzed by Robert McChesney and John Nichols in their new book People Get Ready: The Fight Against a Jobless Economy and a Citizenless Democracy (Nation Books, 2016).

    There can be no doubt that the current trends are largely conditioned by the early history of ICTs (and in particular of the Internet) and its roots in military applications. Thus Splinternet is a valuable source of material that should be carefully considered by all who are involved in Internet policy matters.
    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2 Review Digital Studies magazine.

    Back to the essay

  • Michelle Moravec — The Never-ending Night of Wikipedia’s Notable Woman Problem

    Michelle Moravec — The Never-ending Night of Wikipedia’s Notable Woman Problem

    By Michelle Moravec
    ~

    Author’s note: this is the written portion of a talk given at St. Joseph University’s Art + Feminism Wikipedia editathon, February 27, 2016. Thanks to Rachael Sullivan for the invite and  Rosalba Ugliuzza for Wikipedia data culling!

    Millions of the sex whose names were never known beyond the circles of their own home influences have been as worthy of commendation as those here commemorated. Stars are never seen either through the dense cloud or bright sunshine; but when daylight is withdrawn from a clear sky they tremble forth
    — Sarah Josepha Hale, Woman’s Record (1853)

    As this poetic quote by Sarah Josepha Hale, nineteenth-century author and influential editor, reminds us, context is everything. The challenge, if we wish to write women back into history via Wikipedia, is to figure out how to shift the frame of reference so that our stars can shine, since the problem of who precisely is “worthy of commemoration” (or, in Wikipedia language, who is deemed notable) so often seems to exclude women.

    As Shannon Mattern asked at last year’s Art + Feminism Wikipedia edit-a-thon, “Could Wikipedia embody some alternative to the ‘Great Man Theory’ of how the world works?” Literary scholar Alison Booth, in How to Make It as a Woman, notes that the first book in praise of women by a woman appeared in 1404 (Christine de Pizan’s Book of the City of Ladies), launching a lengthy tradition of “exemplary biographical collections of women.” Booth identified more than 900 volumes of prosopography published during what might be termed the heyday of the genre, 1830-1940, when the rise of the middle class and increased literacy combined with relatively cheap book production to make such volumes both practicable and popular. Booth also points out that, lest we consign the genre to the realm of mere curiosity, the compilers, editrixes, or authors of these volumes, which predate the invention of “women’s history,” considered them a contribution to “national history”; indeed, Booth concludes that the volumes were “indispensable aids in the formation of nationhood.”

    Booth compiled a list of the most frequently mentioned women in a subset of these books and tracked their frequency over time.  In an exemplary project, she made this data available on the web, allowing for the creation of the visualization below of American figures on that chart.

    [Figure: frequency of American women in Booth’s data, charted by date]

    This chart makes clear what historians already know: notability is historically specific and contingent, something Wikipedia does not take into account in formulating guidelines that treat notability as a stable concept.

    Only Pocahontas deviates from the great white woman school of history and she too becomes less salient over time.  Furthermore, by the standards of this era, at least as represented by these books, black women were largely considered un-notable. This perhaps explains why, in 1894, Gertrude Mossell published The Work of the Afro-American Woman, a compilation of achievements that she described as “historical in character.” Mossell’s volume itself is a rich source of information of women worthy of commemoration and commendation.

    Looking further into the twentieth century, the successor to this sort of volume is aptly titled Notable American Women, a three-volume set that, while published in 1971, had its roots in the 1950s, when Arthur Schlesinger, as head of Radcliffe College’s council, suggested that a biographical dictionary of women might be a useful thing. Perhaps predictably, a publisher could not be secured, so Radcliffe funded the project itself. The question then becomes: does inclusion in a volume declaring women “notable” mean that these women would meet Wikipedia’s “notability” standards?

    Studies have found varying degrees of bias in coverage of female figures compared to male figures. The latest numbers I found, as of January 2015, concluded that women constituted only 15.5 percent of the biographical entries on the English Wikipedia, and that prior to the 20th century, the problem was wildly exacerbated by “sourcing and notability issues.” Using the “missing” biographies concept borrowed from a 2010 study of Wikipedia’s “completeness,” I compared selected “classified” areas for biographies of Notable American Women (analysis was conducted by hand with tremendous assistance from Rosalba Ugliuzza).

    Working with the digitized copy of Notable American Women in Women and Social Movements, I began compiling a “missing” biographies quotient: the percentage of entries missing for individuals in the “classified list of biographies” that appeared at the end of the third volume of Notable American Women. Mirroring the well-known category issues of Wikipedia, the editors finessed the difficulty of limiting individuals to one area by including them in multiple categories, among them a section called “Negro Women” and another called “Indian Women”:

    [Figure: percentage of “missing” Wikipedia biographies by classified list]

    Initially I had suspected that larger classifications might have a greater percentage of missing entries, but that is not true. Social workers, the classification with the highest percentage of missing entries, is a relatively small classification, with only nine individuals. The six classifications with no missing entries ranged in size from five to eleven. I then created my own meta-categories to see which larger groupings might exacerbate this “missing” biographies problem.
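    The quotient computation itself is simple to express in code. A hypothetical sketch follows (the actual analysis was done by hand; the classification rosters and the set of Wikipedia titles below are illustrative placeholders, not the real data):

```python
# A sketch of the "missing biographies" quotient: for each classified list
# from Notable American Women, the percentage of women without a Wikipedia
# entry. All names and rosters here are placeholders for illustration only.

def missing_quotient(classified, wikipedia_titles):
    """Map each classification to its percentage of 'missing' biographies."""
    return {
        category: round(
            100 * sum(name not in wikipedia_titles for name in names) / len(names), 1
        )
        for category, names in classified.items()
    }

# Illustrative placeholder data.
classified = {
    "Social workers": ["Woman A", "Woman B", "Woman C"],
    "Artists": ["Woman D", "Woman E"],
}
wikipedia_titles = {"Woman A", "Woman D", "Woman E"}

print(missing_quotient(classified, wikipedia_titles))
# -> {'Social workers': 66.7, 'Artists': 0.0}
```

In practice the Wikipedia check was the laborious step, since names in nineteenth-century sources rarely match article titles exactly.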

    [Figure legend: meta-categories for the “missing” biographies chart]

    Inclusion in Notable American Women does not translate into inclusion in Wikipedia.   Influential individuals associated with female-dominated professions, social work and nursing, are less likely to be considered notable, as are those “leaders” in settlement houses or welfare work or “reformers” like peace advocates.   Perhaps due to edit-a-thons or Wikipedians-in-residence, female artists and female scientists have fared quite well.  Both Indian Women and Negro Women have the same percentage of missing women.

    Looking at the network of “Negro Women” by their Notable American Women classified entries, I noted their centrality. Frances Harper and Ida B. Wells are the most networked women in the volumes, which is representative of their position as bridge leaders (I also noted the centrality of Frances Gage, who does not have a Wikipedia entry yet, a fate she shares with the white abolitionists Sallie Holley and Caroline Putnam).

    [Figure: network of “Negro Women” by their Notable American Women classifications]

    Visualizing further, I located two women who don’t have Wikipedia entries and are not included in Notable American Women:

    [Figure: women with no Wikipedia entry who are also absent from Notable American Women]

    Eva del Vakia Bowles was a long time YWCA worker who spent her life trying to improve interracial relations. She was the first black woman hired by the YWCA to head a branch. During WWI, Bowles had charge of Y’s established near war work factories to provide R & R for workers. Throughout her tenure at the Y, Bowles pressed the organization to promote black women to positions within the organization. In 1932 she resigned from her beloved Y in protest over policies she believed excluded black women from the decision making processes of the National Board.

    Addie D. Waites Hunton, also a Y worker and a founding member of the NAACP, was an amazing woman who, along with her friend Kathryn Magnolia Johnson, authored Two Colored Women with the American Expeditionary Forces (1920), which details their time as Y workers in WWI, where they were among the very first black women sent. Later, she became a field worker for the NAACP and a member of the WILPF, and was an observer in Haiti in 1926 as part of that group.

    Finally, using a methodology I developed when working on the racially-biased History of Woman Suffrage, I scraped names from Mossell’s The Work of the Afro-American Woman to find women that should have appeared in Notable American Women and in Wikipedia. Although this is rough result of named extractions, it gave me a place to start.

    [Figure: overlaps among names in Mossell’s volume, Notable American Women, and Wikipedia]

    Alice Dugged Cary does not appear in Notable American Women or Wikipedia.  She was born free in 1859 became president of the State Federation of Colored Women of Georgia, librarian of first branch for African Americans in Atlanta, established first free kindergartens for African American children in Georgia, nominated as honorary member in Zeta Phi Beta and was involved in its spread.

    Similarly, Lucy Ella Moten, born free in 1851, became principal of Miner Normal School, earned an M.D., and taught in the South during summer “vacations,” appears in neither Notable American Women nor Wikipedia (or at least she didn’t until Mike Lyons started her page yesterday at the editathon!).
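    The cross-referencing behind these findings reduces to set arithmetic over name lists. A minimal sketch, with the full scraped rosters stood in for by a few names from this essay:

```python
# Minimal sketch of the cross-referencing described above: names scraped from
# Mossell's The Work of the Afro-American Woman compared against the Notable
# American Women roster and Wikipedia's biography titles. These tiny rosters
# are placeholders for the full scraped lists.

mossell = {"Alice Dugged Cary", "Frances Harper", "Ida B. Wells", "Lucy Ella Moten"}
notable_american_women = {"Frances Harper", "Ida B. Wells"}
wikipedia = {"Frances Harper", "Ida B. Wells"}

# Women named by Mossell who appear in neither later roster:
missing_everywhere = mossell - notable_american_women - wikipedia
print(sorted(missing_everywhere))
# -> ['Alice Dugged Cary', 'Lucy Ella Moten']
```

Because scraped name extractions are rough, the resulting set is a starting point for hand-checking rather than a finished count.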

    _____

    Michelle Moravec (@ProfessMoravec) is Associate Professor of History at Rosemont College. She is a prominent digital historian and the digital history editor for Women and Social Movements. Her current project, The Politics of Women’s Culture, uses a combination of digital and traditional approaches to produce an intellectual history of the concept of women’s culture. She writes a monthly column for the Mid-Atlantic Regional Center for the Humanities, and maintains her own blog History in the City, at which an earlier version of this post first appeared.

    Back to the essay

  • Jürgen Geuter — Liberty, an iPhone, and the Refusal to Think Politically

    Jürgen Geuter — Liberty, an iPhone, and the Refusal to Think Politically

    By Jürgen Geuter
    ~

    The relationship of government and governed has always been complicated. Questions of power, legitimacy, structural and institutional violence, of rights and rules and restrictions keep evading any ultimate solution, chaining societies to constant struggles about shifting balances between different positions and extremes or defining completely new aspects or perspectives on them to shake off the often perceived stalemate. Politics.

    Politics is a simple word but one with a lot of history. Coming from the ancient Greek term for “city” (as in city-state), the word pretty much shows what it is about: establishing the structures that a community can thrive on. Policy is infrastructure. Not made of wire or asphalt but of ideas and ways of connecting them while giving the structure ways of enforcing the integrity of itself.

    But while the processes of negotiation and discourse that define politics will never stop so long as intelligent beings exist, recent years have seen the emergence of technology as a replacement for politics. From Lawrence Lessig’s “Code is Law” to Marc Andreessen’s “Software Is Eating the World”: a small elite of people building the tools and technologies that we use to run our lives have in a way begun emancipating themselves from politics as an idea. Because where politics – especially in democratic societies – involves potentially more people than just a small elite, technologism and its high priests pull off a fascinating trick: defining policy and politics while claiming not to be political.

    This is useful for a bunch of reasons. It allows them to effectively sidestep certain existing institutions and structures, avoiding friction and loss of forward momentum. “Move fast and break things” was Facebook’s internal motto until only very recently. It also makes it easy to shed certain responsibilities that we expect political entities of power to fulfill. Claiming “not to be political” allows you to have mobs of people hunting others on your service without really having to do anything about it until it becomes a PR problem. Finally, evading the label of politics grants a lot more freedom when it comes to wielding the powers that political structures have given you: it’s no coincidence that many Internet platforms declare “free speech” a fundamental and absolute right, a necessary truth of the universe, unless it’s about showing a woman breastfeeding or talking about the abuse free-speech extremists have thrown at feminists.

    Yesterday news about a very interesting case directly at the contact point of politics and technologism hit mainstream media: Apple refused – in a big and well-written open letter to its customers – to fulfill an order by the District Court of California to help the FBI unlock an iPhone 5c that belonged to one of the shooters in last year’s San Bernardino shooting, in which 14 people were killed and 22 more were injured.

    Apple’s argument is simple and ticks all the boxes of established technical truths about cryptography: Apple’s CEO Tim Cook points out that adding a back door to its iPhones would endanger all of Apple’s customers because nobody can make sure that such a back door would only be used by law enforcement. Some hacker could find that hole and use it to steal information such as pictures, credit card details or personal data from people’s iPhones or make these little pocket computers do illegal things. The dangers Apple correctly outlines are immense. The beautifully crafted letter ends with the following statements:

    Opposing this order is not something we take lightly. We feel we must speak up in the face of what we see as an overreach by the U.S. government.

    We are challenging the FBI’s demands with the deepest respect for American democracy and a love of our country. We believe it would be in the best interest of everyone to step back and consider the implications.

    While we believe the FBI’s intentions are good, it would be wrong for the government to force us to build a backdoor into our products. And ultimately, we fear that this demand would undermine the very freedoms and liberty our government is meant to protect.

    Nothing in that defense is new: the debate about government backdoors has been going on for decades, with companies, software makers and government officials basically exchanging the same bullet points every few years. Government: “We need access. For security.” Software people: “Yeah, but then nobody’s system is secure anymore.” Rinse and repeat. That whole debate hasn’t even changed through Edward Snowden’s leaks: while the positions were presented in an increasingly shrill tone, the positions themselves stayed monolithic and unmoved. Two immovable objects yelling at each other to get out of the way.

    Apple’s open letter was received with high praise all through the tech-savvy elites, from the cypherpunks to journalists and technologists. One tweet really stood out for me because it illustrates a lot of what we have so far talked about:

    Read that again. Tim Cook/Apple are clearly separated from politics and politicians when it comes to – and here’s the kicker – the political concept of individual liberty. A deeply political debate – the one about where the limits of individual liberty might lie – is ripped out of the realm of politicians (and us, but we’ll come to that later). Sing the praises of the new Guardian of the Digital Universe.

    But is the court order really the fundamental danger to everybody’s individual liberty that Apple presents it as? The actual text paints a different picture. The court orders Apple to help the FBI access one specific, identified iPhone; the court order lists the actual serial number of the device. What “help” means in this context is also specified in great detail:

    1. Apple is supposed to disable the iPhone features that automatically delete all user data stored on the device, features usually in place to prevent device thieves from accessing the data the owners stored on it.
    2. Apple will also give the FBI some way to send passcodes (guesses of the PIN that was used to lock the phone) to the device. This sounds strange but will make sense later.
    3. Apple will disable all software features that introduce delays between passcode attempts. You know the drill: you type the wrong passcode and the device waits a few seconds before you can try a new one.

    Apple is compelled to write a little piece of software that runs only on the specified iPhone (the text is very clear on that) and that disables the two security features explained in points 1 and 3. Because the court actually recognizes the dangers of having that kind of software in the wild, it explicitly allows Apple to do all of this within its own facilities: the phone would be sent to an Apple facility and the software loaded into the RAM of the device. This is where point 2 comes in: once the device has been modified by loading the Apple-signed software into its RAM, the FBI needs a way to send PIN code guesses to the device. The court order even explicitly states that Apple’s new software package is only supposed to go to RAM and not change the device in other ways. Potentially dangerous software would never leave Apple’s premises, Apple doesn’t have to introduce or weaken the security of all its devices, and if Apple can fulfill the tasks described in some other way, the court is totally fine with it. The government – any government – doesn’t get a generic backdoor to all iPhones or all Apple products. In a more technical article than this one, Dan Guido outlines that what the court order asks for would work on the iPhone in question but not on most newer ones.
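    For readers who want the technical intuition behind those three points, here is a minimal, purely illustrative sketch – not Apple’s or the FBI’s actual software, and the PIN and checking function are invented – of why removing the auto-wipe and the retry delays matters. Once guesses can be submitted electronically and without penalty, a four-digit passcode space is tiny:

    ```python
    # Illustrative only: exhausting a 4-digit PIN space once the
    # auto-wipe (point 1) and retry delays (point 3) are disabled
    # and guesses can be sent electronically (point 2).
    def brute_force_pin(check_pin, digits=4):
        """Try every possible PIN until check_pin accepts one."""
        for candidate in range(10 ** digits):
            pin = str(candidate).zfill(digits)  # e.g. 42 -> "0042"
            if check_pin(pin):
                return pin
        return None

    # A stand-in for the device's passcode check; the real check
    # happens inside the phone's hardware and software.
    secret = "7294"
    found = brute_force_pin(lambda pin: pin == secret)
    print(found)
    ```

    With no artificial delays, all 10,000 guesses take a fraction of a second; with the iPhone’s escalating delays and a wipe after ten failures, the same search is practically impossible. That is the entire point of the features the court asks Apple to switch off on this one device.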

    So while Apple’s PR evokes the threat of big government’s boots marching on to step on everybody’s individual freedoms, the text of the court order and the technical facts make the case ultra-specific: Apple isn’t supposed to build a back door for iPhones but to help law enforcement open up one specific phone in its possession, connected not to a theoretical future crime but to the actual murder of 14 people.

    We could just attribute it all to Apple seizing a PR opportunity to strengthen the image it has been developing after realizing that they just couldn’t really do data and services: the image of the protector of privacy and liberty – an image that they kicked into overdrive post-Snowden. But that would be too simple, because the questions here are a lot more fundamental.

    How do we – as globally networked individuals living in digitally connected and mutually overlaying societies – define the relationship of transnational corporations and the rules and laws we created?

    Because here’s the fact: Apple was ordered by a democratically legitimate court to help in the investigation of a horrible capital crime – the murder of 14 people – by giving it a way to potentially access one specific phone out of the more than 700 million phones Apple has made. And Apple refuses.

    Which – don’t get me wrong – is their right as an entity in the political system of the US: they can fight the court order using the law. They can also just refuse and see what the government, what law enforcement, will do to make them comply. Sometimes the costs of breaking that kind of resistance overshadow the potential value, so the request gets dropped. But where do we stand – we individuals whose liberty is supposedly at stake? Where is our voice?

    One of the main functions of political systems is generating legitimacy for power. While some less-than-desirable systems might generate legitimacy by being the strongest, in modern times less physical legitimizations of power were established: a king, for example, is often supposed to rule because one or more god(s) say so – which generates legitimacy, especially if you share the same belief. In democracies, legitimacy is generated by elections or votes: by giving people the right to speak their mind, elect representatives and be elected, the power (and structural violence) that a government exerts is supposedly legitimized.

    Some people dispute the legitimacy of even democratically distributed power, and it’s not as if they have no point, but let’s not dive into the teachings of anarchism here. The more mainstream position is that there is a rule of law and that the institutions of the United States as a democracy are legitimized as the representation of US citizens. They represent every US citizen, and each of them is supposed to keep intact the political structure, the laws and rules and rights that come with being a US citizen (or living there). And when that system speaks to a company it’s supposed to govern, and the company just gives it the finger (but in a really nice letter), how does the public react? They celebrate.

    But what’s to celebrate? This is not some clandestine spy network gathering everybody’s every waking move to calculate who might commit a crime in 10 years and assassinate them. This is a concrete case, a request confirmed by a court in complete accordance with existing practices in many other domains. If somebody runs around and kills people, the police can look into their mail and enter their home. That doesn’t abolish the protections of the integrity of your mail or home; it’s an attempt to balance the rights and liberties of the individual against the rights and needs of all others and the social system they form.

    Rights are hardly ever absolute; some might even argue that no right whatsoever is absolute. You have the right to move around freely, but I can still lock you out of my home, and given certain crimes you might be locked up in prison. You have the right to express yourself, but when you start threatening others, limits kick in. This balancing act, with which I also started this essay, has been going on publicly for ages, and it will go on for a lot longer – because the world changes. New needs might emerge; technology might create whole new domains of life that force us to rethink how we interact and which restrictions we apply. But that is nothing that one company just decides.

    In unconditionally celebrating Cook’s letter a dangerous “apolitical” understanding of politics shows its ugly face: An ideology so obsessed with individual liberty that it happily embraces its new unelected overlords. Code is Law? More like “Cook is Law”.

    This isn’t saying that Apple (or any other company in that situation) just has to automatically do everything a government tells them to. It’s quite obvious that many of the big tech companies are not happy about the idea of establishing precedent in helping government authorities. Today it’s the FBI but what if some agency from some dictatorship wants the data from some dissident’s phone? Is a company just supposed to pick and choose?

    The world might not grow closer together, but it gets connected a lot more, and that leads to inconsistent laws, regulations, political ideologies, etc. colliding. And so far we as humankind have no idea how to deal with it. Facebook gets criticized in Europe for applying very puritanical standards when it comes to nudity, but as a US company it does follow established US traditions. Should it also apply German traditions, which are a lot more open when it comes to depictions of nudity? What about the rules of other countries? Does Facebook need to follow all of them? Some? If so, which ones?

    While this creates tough problems for international lawmakers, governments and us more mortal people, it concerns companies very little, as they can – when push comes to shove – just move their base of operations somewhere else. Which they already do to “optimize” – that is, avoid – taxes, a topic on which Cook also recently dismissed US government requirements as “total political crap.” Is that also a cause for all of us across the political spectrum to celebrate Apple’s protection of individual liberty? I wonder how the open letter would have looked if Ireland, a tax haven many technology companies love to use, had asked for the same thing California did.

    This is not specifically about Apple. Or Facebook. Or Google. Or Volkswagen. Or Nestlé. This is about all of them and all of us. If we uncritically accept that transnational corporations decide when and how to follow the rules we as societies established, just because right now their (PR) interests and ours might superficially align, how can we later criticize the same companies when they don’t pay taxes or decide not to follow data protection laws? Especially as a kind of global digital society (albeit one of a very small elite), we have – between cat GIFs and shaking our fists at all the evil that governments do (and there’s lots of it) – dropped the ball on forming reasonable and consistent models for how to integrate all our different, inconsistent rules and laws; for how we gain any sort of politically legitimized control over corporations, governments and other entities of power.

    Tim Cook’s letter starts with the following words:

    This moment calls for public discussion, and we want our customers and people around the country to understand what is at stake.

    On that he and I completely agree.


    _____

    Jürgen Geuter (@tante) is a political computer scientist living in Germany. For about 10 years he has been speaking and writing about technology, digitalization, digital culture and the way these influence mainstream society. His writing has been featured in Der Spiegel, Wired Germany and other publications as well as his own blog Nodes in a Social Network, on which an earlier version of this post first appeared.

    Back to the essay

  • Data and Desire in Academic Life

    Data and Desire in Academic Life

    a review of Erez Aiden and Jean-Baptiste Michel, Uncharted: Big Data as a Lens on Human Culture (Riverhead Books, reprint edition, 2014)
    by Benjamin Haber
    ~

    On a recent visit to San Francisco, I found myself trying to purchase groceries when my credit card was declined. As the cashier was telling me this news, and before I really had time to feel any particular way about it, my leg vibrated. I had received a text: “Chase Fraud-Did you use card ending in 1234 for $100.40 at a grocery store on 07/01/2015? If YES reply 1, NO reply 2.” After replying “yes” (which was recognized even though I failed to follow instructions), I swiped my card again and was out the door with my food. Many have probably had a similar experience: most if not all credit card companies automatically track purchases for a variety of reasons, including fraud prevention, the tracking of illegal activity, and the offering of tailored financial products and services. As I walked out of the store, for a moment, I felt the power of “big data,” how real-time consumer information can be read as a predictor of a stolen card in less time than I had to consider why my card had been declined. It was a too-rare moment of reflection on those networks of activity that modulate our life chances and capacities, mostly below and above our conscious awareness.

    And then I remembered: didn’t I buy my plane ticket with the points from that very credit card? And in fact, hadn’t I used that card on multiple occasions in San Francisco for purchases not much smaller than my grocery bill? While the near-instantaneous text provided reassurance before I could consciously recognize my anxiety, the automatic card decline was likely not a sophisticated real-time, data-enabled prescience but a rather blunt instrument, flagging the transaction on the basis of two data points: distance from home and amount of purchase. In fact, there is plenty of evidence to suggest that the gap between data collection and processing, between metadata and content, and between the current reality of data and its speculative future is still quite large. While Target’s pregnancy-predicting algorithm was a journalistic sensation, the more mundane computational confusion that has Gmail constantly serving me advertisements for trade and business schools shows the striking gap between the possibilities of what is collected and the current landscape of computationally prodded behavior. The text from Chase, your Klout score, the vibration of your FitBit, or the probabilistic genetic information from 23andMe are all primarily affective investments in mobilizing a desire for data’s future promise. These companies and others are opening new ground for discourse via affect, creating networked infrastructures for modulating the body and social life.
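    The “blunt instrument” reading can be made concrete. A rule that flags any large purchase far from home needs no real-time prescience and no machine learning at all; it is a two-line conditional. The sketch below is hypothetical – the thresholds, and the idea that Chase uses anything this simple, are my own invention for illustration:

    ```python
    # Hypothetical rule-based fraud flag using only two signals:
    # distance from the cardholder's home and purchase amount.
    # All thresholds are invented for illustration.
    def flag_transaction(distance_miles, amount,
                         max_distance=500, max_amount=100):
        """Flag when the purchase is both far from home and large."""
        return distance_miles > max_distance and amount > max_amount

    # A $100.40 grocery run ~2,500 miles from home trips the rule...
    print(flag_transaction(2500, 100.40))   # True
    # ...while the same purchase near home does not.
    print(flag_transaction(3, 100.40))      # False
    ```

    The point of the sketch is how little “intelligence” such a flag requires: the decline that felt like data-enabled clairvoyance is consistent with a static threshold check.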

    I was thinking about this while reading Uncharted: Big Data as a Lens on Human Culture, a love letter to the power and utility of algorithmic processing of the words in books. Though ostensibly about the Google Ngram Viewer, a neat if one-dimensional tool to visualize the word frequency of a portion of the books scanned by Google, Uncharted is also unquestionably involved in the mobilization of desire for quantification. Though about the academy rather than financialization, medicine, sports or any other field being “revolutionized” by big data, its breathless boosterism and obligatory cautions are emblematic of the emergent datafied spirit of capitalism, a celebratory “coming out” of the quantifying systems that constitute the emergent infrastructures of sociality.
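    The computation at the heart of the Ngram Viewer is, in outline, simple: for each year, count how often a word appears in the books published that year and divide by the total number of words. A toy sketch over an invented three-entry corpus (Google’s real pipeline, over millions of scanned books, is of course vastly larger):

    ```python
    # Toy sketch of the word-frequency computation behind an
    # Ngram-style viewer: relative frequency of a word per year
    # over a tiny, invented corpus of (year, text) pairs.
    from collections import Counter, defaultdict

    corpus = [
        (1900, "the war the peace"),
        (1900, "the war"),
        (1950, "peace and peace"),
    ]

    def ngram_frequency(word, corpus):
        """Map each year to word's share of all tokens that year."""
        counts = defaultdict(Counter)
        for year, text in corpus:
            counts[year].update(text.split())
        return {year: c[word] / sum(c.values())
                for year, c in counts.items()}

    print(ngram_frequency("war", corpus))
    # "war" is 2 of the 6 tokens published in 1900, absent in 1950
    ```

    That the entire apparatus reduces to counting and dividing is worth keeping in mind when weighing the book’s grander claims for the method.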

    While published fairly recently, in 2013, Uncharted already feels dated in its strangely muted engagement with the variety of serious objections to sprawling corporate and state run data systems in the post-Snowden, post-Target, post-Ashley Madison era (a list that will always be in need of update). There is still the dazzlement about the sheer magnificent size of this potential new suitor—“If you wrote out all five zettabytes that humans produce every year by hand, you would reach the core of the Milky Way” (11)—all the more impressive when explicitly compared to the dusty old technologies of ink and paper. Authors Erez Aiden and Jean-Baptiste Michel are floating in a world of “simple and beautiful” formulas (45), “strange, fascinating and addictive” methods (22), producing “intriguing, perplexing and even fun” conclusions (119) in their drive to colonize the “uncharted continent” (76) that is the English language. The almost erotic desire for this bounty is made more explicit in their tongue-in-cheek characterization of their meetings with Google employees as an “irresistible… mating dance” (22):

    Scholars and scientists approach engineers, product managers, and even high-level executives about getting access to their companies’ data. Sometimes the initial conversation goes well. They go out for coffee. One thing leads to another, and a year later, a brand-new person enters the picture. Unfortunately this person is usually a lawyer. (22)

    There is a lot to unpack in these metaphors: the recasting of academic dependence on data systems designed and controlled by corporate entities as a sexy new opportunity for scholars and scientists. There are important conversations to be had about these circulations of quantified desire: about who gets access to this kind of data, the ethics of working with companies that have an existential interest in profit and shareholder return, and the cultural significance of wrapping business transactions in the language of heterosexual coupling. Here, however, I am mostly interested in the real allure that this passage and others speak to, and the attendant fear that mostly whispers, at least in a book written by Harvard PhDs with TED talks to give.

    For most academics in the social sciences and the humanities, “big data” is a term more likely to get caught in the throat than to inspire butterflies in the stomach. While Aiden and Michel certainly acknowledge that old-fashioned textual analysis (50) and theory (20) will have a place in this brave new world of charts and numbers, they provide a number of contrasts to suggest the relative poverty of even the most brilliant scholar in the face of big data. One hypothetical in particular, which is not directly answered but is strongly implied, spoke to my discipline specifically:

    Consider the following question: Which would help you more if your quest was to learn about contemporary human society—unfettered access to a leading university’s department of sociology, packed with experts on how societies function, or unfettered access to Facebook, a company whose goal is to help mediate human social relationships online? (12)

    The existential threat at the heart of this question was catalyzed for many people in Roger Burrows and Mike Savage’s 2007 “The Coming Crisis of Empirical Sociology,” an early canary singing the worry of what Nigel Thrift has called “knowing capitalism” (2005). Knowing capitalism speaks to the ways that capitalism has begun to take seriously the task of “thinking the everyday” (1) by embedding information technologies within “circuits of practice” (5). For Burrows and Savage these practices can and should be seen as a largely unrecognized world of sophisticated and profit-minded sociology that makes the quantitative tools of academics look like “a very poor instrument” in comparison (2007: 891).

    Indeed, as Burrows and Savage note, the now ubiquitous social survey is a technology invented by social scientists, folks who were once seen as strikingly innovative methodologists (888). Despite ever more sophisticated statistical treatments, however, the now more than forty-year-old social survey remains the heart of social-scientific quantitative methodology in a radically changed context. And while declining response rates, a constraining nation-based framing, and competition from privately funded surveys have all decreased the efficacy of academic survey research (890), nothing has threatened the discipline like the embedded and “passive” collecting technologies that fuel big data. And with these methodological changes come profound epistemological ones: questions of how, when, why and what we know of the world. These methods are inspiring changing ideas of generalizability and new expectations around the temporality of research. Does it matter, for example, that studies have questioned the accuracy of the FitBit? The growing popularity of these devices suggests at the very least that sociologists should not count on empirical rigor to save them from irrelevance.

    As academia reorganizes around the speculative potential of digital technologies, there is an increasing pile of capital available to those academics able to translate between the discourses of data capitalism and a variety of disciplinary traditions. And the lure of this capital is perhaps strongest in the humanities, whose scholars have been disproportionately affected by state economic retrenchment on education spending that has increasingly prioritized quantitative, instrumental, and skill-based majors. The increasing urgency in the humanities to use bigger and faster tools is reflected in the surprisingly minimal hand wringing over the politics of working with companies like Facebook, Twitter and Google. If there is trepidation in the N-grams project recounted in Uncharted, it is mostly coming from Google, whose lawyers and engineers have little incentive to bother themselves with the politically fraught, theory-driven, Institutional Review Board slow lane of academic production. The power imbalance of this courtship leaves those academics who decide to partner with these companies at the mercy of their epistemological priorities and, as Uncharted demonstrates, the cultural aesthetics of corporate tech.

    This is a vision of the public humanities refracted through the language of public relations and the “measurable outcomes” culture of the American technology industry. Uncharted has taken to heart the power of (re)branding to change the valence of your work: Aiden and Michel would like you to call their big-data-inflected historical research “culturomics” (22). In addition to being a hopeful attempt to coin a buzzy new word for the digital, culturomics linguistically brings the humanities closer to the supposed precision, determination and quantifiability of economics. And lest you think this multivalent bringing of culture to capital – or rather the renegotiation of “the relationship between commerce and the ivory tower” (8) – is unseemly, Aiden and Michel provide an origin story to show how futile this separation has been.

    But the desire for written records has always accompanied economic activity, since transactions are meaningless unless you can clearly keep track of who owns what. As such, early human writing is dominated by wheeling and dealing: a menagerie of bets, chits, and contracts. Long before we had the writings of prophets, we had the writing of profits. (9)

    And no doubt this is true: culture is always already bound up with economy. But the full-throated embrace of culturomics is not a vision of interrogating and reimagining the relationship between economic systems, culture and everyday life; [1] rather it signals the acceptance of the idea of culture as transactional business model. While Google has long imagined itself as a company with a social mission, they are a publicly held company who will be punished by investors if they neglect their bottom line of increasing the engagement of eyeballs on advertisements. The N-gram Viewer does not make Google money, but it perhaps increases public support for their larger book-scanning initiative, which Google clearly sees as a valuable enough project to invest many years of labor and millions of dollars to defend in court.

    This vision of the humanities is transactionary in another way as well. While much of Uncharted is an attempt to demonstrate the profound, game-changing implications of the N-gram Viewer, there is a distinctly small-questions, cocktail-party-conversation feel to this type of inquiry that seems, ironically, more useful in preparing ABD humanities and social science PhDs for jobs in the service industry than in training them for the future of academia. It might be more precise to say that the N-gram Viewer is architecturally designed for small answers rather than small questions: all is resolved through linear projection, a winner and a loser, or stasis. This is a vision of research where the precise nature of the mediation (what books have been excluded? what is the effect of treating all books as equally revealing of human culture? what about those humans whose voices have been systematically excluded from the written record?) is ignored, and where the actual analysis of books, and indeed the books themselves, are black-boxed from the researcher.

    Uncharted speaks to the perils of doing research under the cloud of existential erasure and to the failure of academics to lead with a different vision of the possibilities of quantification. Collaborating with the wealthy corporate titans of data collection requires an acceptance of these companies’ own existential mandate: make tons of money by monetizing a dizzying array of human activities while speculatively reimagining the future in an attempt to maintain that cash flow. For Google, this is a vision where all activities, not just “googling,” are collected and analyzed in a seamlessly updating centralized system. Cars, thermostats, video games, photos, businesses are integrated not for the public benefit but because of the power of scale to sell or rent or advertise products. Data is promised as a deterministic balm for the unknowability of life, and Google’s participation in academic research gives it the credibility to be your corporate (sen.se) mother. What, might we imagine, are the speculative possibilities of networked data not beholden to shareholder value?
    _____

    Benjamin Haber is a PhD candidate in Sociology at CUNY Graduate Center and a Digital Fellow at The Center for the Humanities. His current research is a cultural and material exploration of emergent infrastructures of corporeal data through a queer theoretical framework. He is organizing a conference called “Queer Circuits in Archival Times: Experimentation and Critique of Networked Data” to be held in New York City in May 2016.

    Back to the essay

    _____

    Notes

    [1] A project desperately needed in academia, where terms like “neoliberalism,” “biopolitics” and “late capitalism” are more often than not used briefly at the end of a short section on implications rather than being given the critical attention and nuanced intentionality that they deserve.

    Works Cited

    Savage, Mike, and Roger Burrows. 2007. “The Coming Crisis of Empirical Sociology.” Sociology 41 (5): 885–99.

    Thrift, Nigel. 2005. Knowing Capitalism. London: SAGE.

  • The Human Condition and The Black Box Society

    The Human Condition and The Black Box Society

    a review of Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015)
    by Nicole Dewandre
    ~

    1. Introduction

    This review is informed by its author’s specific standpoint: first, a lifelong experience in a policy-making environment, i.e. the European Commission; and, second, a passion for the work of Hannah Arendt and the conviction that she has a great deal to offer to politics and policy-making in this emerging hyperconnected era. As advisor for societal issues at DG Connect, the department of the European Commission in charge of ICT policy at EU level, I have had the privilege of convening the Onlife Initiative, which explored the consequences of the changes brought about by the deployment of ICTs on the public space and on the expectations toward policy-making. This collective thought exercise, which took place in 2012-2013, was strongly inspired by Hannah Arendt’s 1958 book The Human Condition.

    This is the background against which I read The Black Box Society: The Secret Algorithms That Control Money and Information by Frank Pasquale (references to which are indicated here parenthetically by page number). Two of the meanings of “black box” – a device that keeps track of everything during a flight, on the one hand, and the node of a system that prevents an observer from identifying the link(s) between input and output, on the other – serve as apt metaphors for today’s emerging Big Data environment.

    Pasquale digs deep into three sectors that are at the root of what he calls the black box society: reputation (how we are rated and ranked), search (how we use ratings and rankings to organize the world), and finance (money and its derivatives, whose flows depend crucially on forms of reputation and search). Algorithms and Big Data have permeated these three activities to a point where disconnection with human judgment or control can transmogrify them into blind zombies, opening new risks, affordances and opportunities. We are far from the ideal representation of algorithms as support for decision-making. In these three areas, decision-making has been taken over by algorithms, and there is no “invisible hand” ensuring that profit-driven corporate strategies will deliver fairness or improve the quality of life.

    The EU and US contexts are both distinct and similar. In this review, I shall not comment on Pasquale’s specific policy recommendations in detail, even if, as a European, I appreciate the numerous references to European law and policy that Pasquale commends as good practices (ranging from digital competition law, to welfare state provision, to privacy policies). I shall instead comment from a meta-perspective: that of challenging the worldview that implicitly undergirds policy-making on both sides of the Atlantic.

    2. A Meta-perspective on The Black Box Society

    The meta-perspective as I see it is itself twofold: (i) we are stuck with Modern referential frameworks, which hinder our ability to attend to changing human needs, desires and expectations in this emerging hyperconnected era, and (ii) the personification of corporations in policymaking reveals shortcomings in the current representation of agents as interest-led beings.

    a) Game over for Modernity!

    As stated by the Onlife Initiative in its “Onlife Manifesto,” through its expression “Game over for Modernity?”, it is time for politics and policy-making to leave Modernity behind. That does not mean going back to the Middle Ages, as feared by some, but instead stepping firmly into this new era that is coming to us. I believe with Genevieve Bell and Paul Dourish that it is more effective to consider that we are now entering the ubiquitous computing era instead of looking at it as if it were approaching fast.[1] With the miniaturisation of devices and sensors, with mobile access to broadband internet, and with the generalized connectivity of objects as well as of people, not only do we witness an increase of the online world but, more fundamentally, a collapse of the distinction between the online and the offline worlds, and therefore a radically new socio-technico-natural compound. We live in an environment which is increasingly reactive and talkative as a result of the intricate mix between offline and online universes. Human interactions are also deeply affected by this new socio-technico-natural compound, as they are or will soon be “sticky,” i.e. leave a material trace by default, and this for the first time in history. These new affordances and constraints profoundly destabilize our Modern conceptual frameworks, which rely on distinctions that are blurring, such as the one between the real and the virtual or the ones between humans, artefacts and nature, understood with mental categories dating back to the Enlightenment and before. The very expression “post-Modern” is not accurate anymore, or is too shy, as it continues to position Modernity as its reference point. It is time to give a proper name to this new era we are stepping into, and hyperconnectivity may be such a name.

    Policy-making, however, continues to rely heavily on Modern conceptual frameworks, not only from the policy-makers’ point of view but more widely from all those engaging in the public debate. There are many structuring features of the Modern conceptual frameworks, and addressing them thoroughly certainly goes beyond the scope of this review. However, when it comes to addressing the challenges described by The Black Box Society, it is important to mention the epistemological stance that has been spelled out brilliantly by Susan H. Williams in her Truth, Autonomy, and Speech: Feminist Theory and the First Amendment: “the connection forged in Cartesianism between knowledge and power”[2]. Before encountering Susan Williams’s work, I came to refer to this stance, less elegantly, with the expression “omniscience-omnipotence utopia”[3]. Williams writes that “this epistemological stance has come to be so widely accepted and so much a part of many of our social institutions that it is almost invisible to us” and that “as a result, lawyers and judges operate largely unself-consciously with this epistemology”[4]. To Williams’s “lawyers and judges”, we should add policy-makers and stakeholders. This Cartesian epistemological stance grounds the conviction that the world can be elucidated in causal terms, that knowledge is about prediction and control, and that there is no limit to what men can achieve provided they have the will and the knowledge. In this Modern worldview, men are considered as rational subjects and their freedom is synonymous with control and autonomy. The fact that we have a limited lifetime and attention span is out of the picture, as is the human’s inherent relationality. Issues are framed as if transparency and control are all that men need to make their own way.

    1) One-Way Mirror or Social Hypergravity?

    Frank Pasquale is well aware of, and has contributed to, the emerging critique of transparency, and he states clearly that “transparency is not just an end in itself” (8). However, there are traces of the Modern reliance on transparency as a regulative ideal in The Black Box Society. One of them appears when he mobilizes the one-way mirror metaphor. He writes:

    We do not live in a peaceable kingdom of private walled gardens; the contemporary world more closely resembles a one-way mirror. Important corporate actors have unprecedented knowledge of the minutiae of our daily lives, while we know little to nothing about how they use this knowledge to influence the important decisions that we—and they—make. (9)

    I refrain from considering the Big Data environment as an environment that “makes sense” on its own, provided someone has access to as much data as possible. In other words, the algorithms crawling the data can hardly be compared to a “super-spy” providing the data controller with an absolute knowledge.

    Another shortcoming of the one-way mirror metaphor is that its implicit corrective is a transparent pane of glass, so that the watched can watch the watchers. This reliance on transparency is misleading. I prefer another metaphor, which in my view better characterises the Big Data environment within a hyperconnected conceptual framework. As alluded to earlier, in contradistinction to the previous centuries and even millennia, human interactions will, by default, be “sticky”, i.e. leave a trace. Evanescence of interactions, which used to be the default for millennia, will instead require active measures to be ensured. So, my metaphor for capturing the radicality and the scope of this change is a change of “social atmosphere” or “social gravity”, as it were. For centuries, we have slowly developed social skills, behaviors and regulations, i.e. a whole ecosystem, to strike a balance between accountability and freedom, in a world where “verba volant and scripta manent”[5], i.e. where human interactions took place in an “atmosphere” with a 1g “social gravity”, where they were evanescent by default and where action had to be taken to register them. Now, with all interactions leaving a trace by default, and each of us going around with his, her or its digital shadow, we are drifting fast towards an era where the “social atmosphere” will be of heavier gravity, say “10g”. The challenge is huge and will require a lot of collective learning and adaptation to develop the literacy and regulatory frameworks that will recreate and sustain the balance between accountability and freedom for all agents, humans and corporations.

    The heaviness of this new data density stands in between, or is orthogonal to, the two phantasms of the bright emancipatory promises of Big Data, on the one hand, and frightening fears of Big Brother, on the other. Because of this social hypergravity, we, individually and collectively, have indeed to be cautious about the use of Big Data, as we have to be cautious when handling dangerous or unknown substances. This heavier atmosphere, as it were, opens up increased possibilities of hurting others, notably through harassment, bullying and false rumors. The advent of Big Data does not, by itself, provide a “license to fool”, nor does it free agents from the need to behave and avoid harming others. Exploiting asymmetries and new affordances to fool or to hurt others is no more acceptable than it was before the advent of Big Data. Hence, although from a different metaphorical standpoint, I support Pasquale’s recommendation to pay increased attention to the ways in which current and emergent practices relying on algorithms in reputation, search and finance may be harmful, misleading or deceptive.

    2) The Politics of Transparency or the Exhaustive Labor of Watchdogging?

    Another “leftover” of the Modern conceptual framework that surfaces in The Black Box Society is the reliance on watchdogging to ensure proper behavior by corporate agents. Relying on watchdogging to ensure proper behavior nurtures the idea that it is all right to behave badly, as long as one is not seen doing so. This reinforces the idea that the qualification of an act depends on whether it is unveiled or not, as if, as long as it goes unnoticed, it is all right. This puts the entire burden on the watchers and no burden whatsoever on the doers. It positions a sort of symbolic face-to-face between supposedly mindless firms, which are enabled to pursue their careless strategies as long as they are not brought to light, and people, who are expected to spend all their time, attention and energy raising indignation against wrong behaviors. Far from empowering the watchers, this framing enslaves them to waste time monitoring actors who should be acting in much better ways already. Indeed, if unacceptable behavior is unveiled, it raises outrage, but outrage is far from bringing a solution per se. If, instead, proper behaviors are witnessed, then the watchers are bound to praise the doers. In both cases, watchers are stuck in a passive, reactive and specular posture, while all the glory or the shame is on the side of the doers. I don’t deny the need to have watchers, but I warn against the temptation of relying excessively on the divide between doers and watchers to police behaviors, without engaging collectively in the formulation of what proper and inappropriate behaviors are. And there is no ready-made consensus about this, so it requires informed exchange of views and hard collective work. As Pasquale explains in an interview where he defends interpretive approaches to social sciences against quantitative ones:

    Interpretive social scientists try to explain events as a text to be clarified, debated, argued about. They do not aspire to model our understanding of people on our understanding of atoms or molecules. The human sciences are not natural sciences. Critical moral questions can’t be settled via quantification, however refined “cost benefit analysis” and other political calculi become. Sometimes the best interpretive social science leads not to consensus, but to ever sharper disagreement about the nature of the phenomena it describes and evaluates. That’s a feature, not a bug, of the method: rather than trying to bury normative differences in jargon, it surfaces them.

    The excessive reliance on watchdogging enslaves the citizenry to serve as mere “watchdogs” of corporations and government, and prevents any constructive cooperation with corporations and governments. It drains citizens’ energy for pursuing their own goals and making their own positive contributions to the world, notably by engaging in the collective work required to outline, nurture and maintain a shared sense of what counts as appropriate behavior.

    As a matter of fact, watchdogging would be nothing more than an exhausting laboring activity.

    b) The Personification of Corporations

    One of the red threads unifying The Black Box Society’s treatment of numerous technical subjects is unveiling the oddness of the comparative postures and statuses of corporations, on the one hand, and people, on the other. As nicely put by Pasquale, “corporate secrecy expands as the privacy of human beings contracts” (26), while, in the meantime, the divide between government and business is narrowing (206). Pasquale also points to the fact that, at least since 2001, people have been routinely scrutinized by public agencies to deter the threatening ones from hurting others, while the threats caused by corporate wrongdoings in 2008 gave rise to much less attention and effort to hold corporations to account. He also notes that “at present, corporations and government have united to focus on the citizenry. But why not set government (and its contractors) to work on corporate wrongdoings?” (183) It is my view that these oddnesses go along with what I would call a “sensitive inversion”. Corporations, which are functional beings, are granted sensitivity as if they were human beings in policy-making imaginaries and narratives, while men and women, who are sensitive beings, are approached in policy-making as if they were functional beings, i.e. consumers, job-holders, investors, bearers of fundamental rights, but never personae per se. The granting of sensitivity to corporations goes beyond the legal aspect of their personhood. It entails that corporations are the ones whose so-called needs are taken care of by policy-makers, and the ones who are really addressed, qua personae. Policies are designed with business needs in mind, to foster their competitiveness or their “fitness”. People are only indirect or secondary beneficiaries of these policies.

    The inversion of sensitivity might not be a problem per se, if it led pragmatically to an effective way of designing and implementing policies which indeed have positive effects for men and women in the end. But Pasquale provides ample evidence showing that this is not the case, at least in the three sectors he has looked at most closely, and certainly not in finance.

    Pasquale’s critique of the hypostatization of corporations and the reduction of humans has many theoretical antecedents. Looking at it from the perspective of Hannah Arendt’s The Human Condition illuminates the shortcomings and risks associated with considering corporations as agents in the public space, and helps in understanding the consequences of granting them sensitivity or, as it were, human rights. Action is the activity that flows from the fact that men and women are plural and interact with each other: “the human condition of action is plurality”.[6] Plurality is itself a ternary concept made of equality, uniqueness and relationality. First, equality is what we grant to each other when entering into a political relationship. Second, uniqueness refers to the fact that what makes each human a human qua human is precisely that who s/he is is unique. If we treat other humans as interchangeable entities or as characterised by their attributes or qualities, i.e. as a what, we do not treat them as humans qua humans, but as objects. Last, and by no means least, the third component of plurality is the relational and dynamic nature of identity. For Arendt, the disclosure of the who “can almost never be achieved as a wilful purpose, as though one possessed and could dispose of this ‘who’ in the same manner he has and can dispose of his qualities”[7]. The who appears unmistakably to others, but remains somewhat hidden from the self. It is this relational and revelatory character of identity that confers such a critical role on speech and action, and that articulates action with identity and freedom. Indeed, for entities for which the who is partly out of reach and matters, appearance in front of others, notably through speech and action, is a necessary condition of revealing that identity:

    Action and speech are so closely related because the primordial and specifically human act must at the same time contain the answer to the question asked of every newcomer: who are you? In acting and speaking, men show who they are, they appear. Revelatory quality of speech and action comes to the fore where people are with others and neither for, nor against them, that is in sheer togetherness.[8]

    So, in this sense, the public space is the arena where whos appear to other whos, personae to other personae.

    For Arendt, the essence of politics is freedom, and it is grounded in action, not in labor and work. The public space is where agents coexist and experience their plurality, i.e. the fact that they are equal, unique and relational. So, it is much more than the usual American pluralist (i.e., early Dahl-ian) conception of a space where agents worry exclusively about their own needs by bargaining aggressively. In Arendt’s perspective, the public space is where agents, self-aware of their plural character, interact with each other once their basic needs have been taken care of in the private sphere. As highlighted by Seyla Benhabib in The Reluctant Modernism of Hannah Arendt, “we not only owe to Hannah Arendt’s political philosophy the recovery of the public as a central category for all democratic-liberal politics; we are also indebted to her for the insight that the public and the private are interdependent”.[9] One could not appear in public if s/he or it did not also have a private place, notably to attend to his, her or its basic needs for existence. In Arendtian terms, interactions in the public space take place between agents who are beyond their satiety threshold. Acknowledging satiety is a precondition for engaging with others in a way that is driven not by one’s own interest, but rather by the desire to act together with others—”in sheer togetherness”—and be acknowledged as who one is. If an agent perceives him-, her- or itself and behaves only as a profit-maximiser or as an interest-led being, i.e. if s/he or it has no sense of satiety and no self-awareness of the relational and revelatory character of his, her or its identity, then s/he or it cannot be a “who” or an agent in political terms, and therefore cannot answer for him-, her- or itself. It simply does not deserve, and therefore should not be granted, the status of a persona in the public space.

    It is easy to imagine that there can indeed be no freedom below satiety, and that “sheer togetherness” would just be impossible among agents below their satiety level or deprived of having one. This is, however, the situation we are in, symbolically, when we grant corporations the status of persona while considering it efficient and appropriate that they care only about profit-maximisation. For a business, making profit is a condition of staying alive, just as, for humans, eating is a condition of staying alive. However, in the name of the need to compete on global markets, to foster growth and to provide jobs, policy-makers embrace and legitimize an approach to businesses as profit-maximisers, despite the fact that this is a reductionist caricature of what is allowed by the legal framework of company law[10]. So, the condition for businesses to deserve the status of persona in the public space is, no less than for men and women, to attend to their whoness and honour their identity, by staying away from behaving according to their narrowly defined interests. It also means caring for the world as much as, if not more than, for themselves.

    This resonates meaningfully with the quotation from Heraclitus that serves as the epigraph for The Black Box Society: “There is one world in common for those who are awake, but when men are asleep each turns away into a world of his own”. Reading Arendt with Heraclitus’s categories of sleep and wakefulness, one might consider that totalitarianism arises—or is not far away—when human beings are awake in private, but asleep in public, in the sense that they silence their humanness, or that their humanness is silenced by others, when appearing in public. In this perspective, the merging of markets and politics—as highlighted by Pasquale—could be seen as a generalized sleep in the public space of human beings and corporations, qua personae, while all awakened activities take place in the private sphere, exclusively driven by needs and interests.

    In other words: some might find a book like The Black Box Society, which offers a bold reform agenda for numerous agencies, too idealistic. In my view, however, it is not idealistic enough: there is a missing normative core to the proposals in the book, which can be supplied by democratic, political, and particularly Arendtian theory. If a populace does not accept a certain level of goods and services as satiating its needs, and if it distorts the revelatory character of identity into an endless pursuit of limitless growth, it cannot have the proper lens and approach to formulate what it takes to enable the fairness and fair play described in The Black Box Society.

    3. Stepping into Hyperconnectivity

    1) Agents as Relational Selves

    A central feature of the Modern conceptual framework underlying policy-making is the figure of the rational subject as the political proxy of humanness. I claim that this figure is no longer effective in ensuring a fair and flourishing life for men and women in this emerging hyperconnected era, and that we should instead adopt the figure of a “relational self” as it emerges from the Arendtian concept of plurality.

    The concept of the rational subject was forged to erect Man over nature. Nowadays, the problem is not so much to distinguish men from nature, but rather to distinguish men—and women—from artefacts. Robots come close to humans and even outperform them, if we continue to define humans as rational subjects. The figure of the rational subject is torn apart between “truncated gods”—when Reason is considered as what eventually brings an overall lucidity—on the one hand, and “smart artefacts”—when reason is nothing more than logical steps or algorithms—on the other hand. Men and women are neither “Deep Blue” nor mere automatons. In between these two phantasms, the humanness of men and women is smashed. This is indeed what happens in the Kafkaesque and ridiculous situations where a thoughtless and mindless approach to Big Data is implemented, and this from both stances, as workers and as consumers. As far as the working environment is concerned, “call centers are the ultimate embodiment of the panoptic workspace. There, workers are monitored all the time” (35). Indeed, this type of overtly monitored working environment is nothing else than a materialisation of the panopticon. As consumers, we all see what Pasquale means when he writes that “far more [of us] don’t even try to engage, given the demoralizing experience of interacting with cyborgish amalgams of drop-down menus, phone trees, and call center staff”. In fact, this mindless use of automation is only the latest version of the way we have been thinking for decades, i.e. that progress means rationalisation and de-humanisation across the board. The real culprit is not algorithms themselves, but the careless and automaton-like human implementers and managers who act along a conceptual framework according to which rationalisation and control are all that matter. More than the technologies, it is the belief that management is about control and monitoring that makes these environments properly in-human.
So, staying stuck with the rational subject as a proxy for humanness either ends up smashing our humanness as workers and consumers or, at best, leads to absurd situations where to be free would mean spending all our time checking that we are not being controlled.

    As a result, keeping the rational subject as the central representation of humanness will be increasingly misleading, politically speaking. It fails to provide a compass for treating each other fairly and for making appropriate decisions and judgments that impact human lives positively and meaningfully.

    With her concept of plurality, Arendt offers an alternative to the rational subject for defining humanness: that of the relational self. The relational self, as it emerges from the Arendtian concept of plurality[11], is the man, woman or agent self-aware of his, her or its plurality, i.e. of the facts that (i) he, she or it is equal to his, her or its fellows; (ii) she, he or it is unique, as all other fellows are unique; and (iii) his, her or its identity has a revelatory character, requiring him, her or it to appear among others in order to reveal itself through speech and action. This figure of the relational self accounts for what is essential to protect politically in our humanness in a hyperconnected era, i.e. that we are truly interdependent through the mutual recognition that we grant to each other, and that our humanity is grounded precisely in that mutual recognition, much more than in any “objective” difference or criterion that would allow an expert system to sort human from non-human entities.

    The relational self, as arising from Arendt’s plurality, combines relationality and freedom. It resonates deeply with the vision proposed by Susan H. Williams, i.e. the relational model of truth and the narrative model of autonomy, meant to overcome the shortcomings of the Cartesian and liberal approaches to truth and autonomy without throwing the baby, i.e. the notion of agency and responsibility, out with the bathwater, as the social constructionist and feminist critiques of the conceptions of truth and autonomy may be understood as doing.[12]

    Adopting the relational self, instead of the rational subject, as the canonical figure of humanness highlights the direct relationship between the quality of interactions, on the one hand, and the quality of life, on the other. In contradistinction to transparency and control, which are meant to empower non-relational individuals, relational selves are self-aware that they are in need of respect and fair treatment from others. This figure also makes room for vulnerability, notably the vulnerability of our attentional spheres, and for saturation, i.e. the fact that we have a limited attention span and are far from making a “free choice” when clicking on “I have read and accept the Terms & Conditions”. Instead of transparency and control as policy ends in themselves, the quality of life of relational selves, and the robustness of the world they construct together and that lies between them, depend critically on being treated fairly and not being fooled.

    It is interesting to note that the word “trust” blooms in policy documents, showing that consciousness of the fact that we rely on each other is building up. Referring to trust as if it needed to be built is, however, a signature of the fact that we are in transition from Modernity to hyperconnectivity, and not yet fully arrived. By approaching trust as something that can be materialized, we look at it with Modern eyes. Just as “consent is the universal solvent” (35) of control, transparency-and-control is the universal solvent of trust. Indeed, we know that transparency and control nurture suspicion and distrust. And that is precisely why they have been adopted as Modern regulative ideals. Arendt writes: “After this deception [that we were fooled by our senses], suspicions began to haunt Modern man from all sides”[13]. So, indeed, Modern conceptual frameworks rely heavily on suspicion, as a sort of transposition into the realm of human affairs of the systematic-doubt approach to scientific enquiry. Frank Pasquale quotes the moral philosopher Iris Murdoch as having said: “Man is a creature who makes pictures of himself and then comes to resemble the picture” (89). If she is right—and I am afraid she is—it is of utmost importance to shift away from picturing ourselves as rational subjects and to embrace instead the figure of relational selves, if only to preserve trust as a general baseline in human affairs. Indeed, if it came true that trust could only be the outcome of a generalized suspicion, then we would be lost.

    Besides grounding the notion of the relational self, the Arendtian concept of plurality allows accounting for interactions, among humans and among other plural agents, which go beyond fulfilling basic needs (necessity) or achieving goals (instrumentality), and which lead to the revelation of their identities while giving rise to unpredictable outcomes. As such, plurality enriches the basket of representations for interactions in policy-making. It brings, as it were, a post-Modern—or dare I say a hyperconnected—view to interactions. The Modern conceptual basket of representations of interactions includes, as its central piece, causality. In Modern terms, the notion of equilibrium is approached through a mutual neutralization of forces, either with the invisible-hand metaphor or with Montesquieu’s division of powers. The Modern approach to interactions is anchored either in the representation of one pole being active or dominating (the subject) and the other pole being inert or dominated (nature, object, servant), or else in the notion of conflicting interests or dilemmas. In this framework, the notion of equality is straitjacketed and cannot be embodied. As we have seen, this Modern straitjacket leads to approaching freedom through control and autonomy, constrained by the fact that Man is, unfortunately, not alone. Hence, in the Modern approach to humanness and freedom, plurality is a constraint, not a condition, while for relational selves, freedom is grounded in plurality.

    2) From Watchdogging to Accountability and Intelligibility

    If the quest for transparency and control is as illusory and worthless for relational selves as it was instrumental for rational subjects, this does not mean that anything goes. Interactions among plural agents can only take place satisfactorily if basic and important conditions are met. Relational selves are in high need of fairness towards themselves and accountability from others. Deception and humiliation[14] should certainly be avoided, as basic conditions enabling decency in the public space.

    Once equipped with this concept of the relational self as the canonical figure of political agents, be they men, women, corporations or even States in a hyperconnected era, one can indeed see clearly why the recommendations Pasquale offers in his final two chapters, “Watching (and Improving) the Watchers” and “Towards an Intelligible Society,” are so important. Indeed, if watchdogging the watchers has been criticized earlier in this review as an exhausting laboring activity that does not deliver on accountability, improving the watchers goes beyond watchdogging and strives for greater accountability. With regard to intelligibility, I think that it is indeed much more meaningful and relevant than transparency.

    Pasquale invites us to think carefully about regimes of disclosure, along three dimensions: depth, scope and timing. He calls for fair data practices that could be enhanced by establishing forms of supervision of the kind that have been established for checking on research practices involving human subjects. Pasquale suggests that each person is entitled to an explanation of the rationale for decisions concerning them, and that they should have the ability to challenge those decisions. He recommends immutable audit logs for holding spying activities to account. He also calls for regulatory measures compensating for the market failures arising from the fact that dominant platforms are natural monopolies. Given the importance of reputation and ranking and the dominance of Google, he argues that the First Amendment cannot be mobilized as a wild card absolving internet giants of accountability. He calls for a “CIA for finance” and a “Corporate NSA,” believing governments should devote more effort to chasing wrongdoing by corporate actors. He argues that the approach taken in the area of health fraud enforcement could bear fruit in finance, search and reputation.

    What I appreciate in Pasquale’s call for intelligibility is that it does indeed calibrate to the needs of relational selves: to interact with each other, to make sound decisions and to orient themselves in the world. Intelligibility is different from omniscience-omnipotence. It is about making sense of the world, while keeping in mind that there are different ways to do so. Intelligibility connects relational selves to the world surrounding them and allows them to act with others and to move around. In the last chapter, Pasquale mentions the importance of restoring trust and the need to nurture a public space in the hyperconnected era. He calls for an endgame to the Black Box. I agree with him that conscious deception inherently dissolves plurality and the common world, and needs to be strongly combatted, but I think that a lot of what takes place today goes beyond that and represents really new and uncharted territories and horizons for humankind. With plurality, we can also embrace contingency in a less dramatic way than we used to in the Modern era. Contingency is a positive approach to un-certainty. It accounts for the openness of the future. The very word un-certainty is built in such a manner that certainty is considered the ideal outcome.

    4. WWW, or Welcome to the World of Women or a World Welcoming Women[15]

    To some extent, the fears of men in a hyperconnected era reflect all-too-familiar experiences of women. Being objects of surveillance and control, laboring exhaustingly without rewards, being lost through the holes of the meritocracy net, being constrained in a specular posture towards others’ deeds: all these stances have been the fate of women’s lives for centuries, if not millennia. What men fear from the State or from “Big (br)Other”, women have experienced with men. So, welcome to the world of women….

    But this situation may be looked at more optimistically, as an opportunity for women’s voices and thoughts to go mainstream and be listened to. Now that equality between women and men is enshrined in the political and legal systems of the EU and the US, women have concretely been admitted to the status of “rational subject”, but that does not dissolve that figure’s masculine origin, nor the oddness or uneasiness women may feel in embracing it. Indeed, it was forged by men with men in mind, women being, for those men, indexed on nature. Mainstreaming the figure of the relational self, born in the mind of Arendt, will be much more inspiring and empowering for women than the rational subject was. In fact, it enhances their agency and the performativity of their thoughts and theories. So, are we heading towards a world welcoming women?

    In conclusion, the advent of Big Data can be looked at in two ways. The first is to look at it as the endpoint of the materialisation of all the promises and fears of Modern times. The second is to look at it as a wake-up call for a new beginning; indeed, by making obvious the absurdity, and the price, of following the Modern conceptual frameworks all the way down to their consequences, it calls for thinking on new grounds about how to make sense of the human condition and make it thrive. The former makes humans redundant, is self-fulfilling and does not deserve human attention and energy. Without any hesitation, I opt for the latter, i.e. the wake-up call and the new beginning.

    Let’s engage in this hyperconnected era bearing in mind Virginia Woolf’s “Think we must”[16] and, thereby, shape and honour the human condition in the 21st century.
    _____

    Nicole Dewandre holds academic degrees in engineering, economics, and philosophy. She has been a civil servant at the European Commission since 1983. She was advisor to the President of the Commission, Jacques Delors, between 1986 and 1993. She then worked in EU research policy, promoting gender equality, partnership with civil society, and sustainability. Since 2011, she has worked on the societal issues raised by the deployment of ICTs. She has published widely on organizational and political issues relating to ICTs.

    The views expressed in this article are the sole responsibility of the author and in no way represent the views of the European Commission or its services.

    Back to the essay
    _____

    Acknowledgments: This review has been made possible by the Faculty of Law of the University of Maryland in Baltimore, which hosted me as a visiting fellow for the month of September 2015. I am most grateful to Frank Pasquale, first for having written this book, but also for engaging with me so patiently over the month of September and paying so much attention to my arguments, even suggesting in some instances the best way for making my points when I was diverging from his views. I would also like to thank Jerome Kohn, director of the Hannah Arendt Center at the New School for Social Research, for his encouragement in pursuing the mobilisation of Hannah Arendt’s legacy in my professional environment. I am also indebted, notably for the conclusion, to the inspiring conversations I have had with Shauna Dillavou, executive director of CommunityRED, and Soraya Chemaly, Washington-based feminist writer, critic, and activist. Last, and surely not least, I would like to thank David Golumbia for welcoming this piece in his journal and for the care he has put into editing this text written by a non-native speaker of English.

    [1] This change of perspective has the interesting side effect of pulling the rug out from under the feet of those “addicted to speed”; Pasquale rightly points to this addiction (195) as one of the reasons “why so little is being done” to address the challenges arising from the hyperconnected era.

    [2] Susan H. Williams, Truth, Autonomy, and Speech: Feminist Theory and the First Amendment, New York: New York University Press, 2004 (35).

    [3] See, e.g., Nicole Dewandre, ‘Rethinking the Human Condition in a Hyperconnected Era: Why Freedom Is Not About Sovereignty But About Beginnings’, in The Onlife Manifesto, ed. Luciano Floridi, Springer International Publishing, 2015 (195–215).

    [4]Williams, Truth, Autonomy, and Speech (32).

    [5] Literally: “spoken words fly; written ones remain”

    [6] Apart from action, Arendt distinguishes two other fundamental human activities that together with action account for the vita activa. These two other activities are labour and work. Labour is the activity that men and women engage in to stay alive, as organic beings: “the human condition of labour is life itself”. Labour is totally pervaded by necessity and processes. Work is the type of activity men and women engage in to produce objects and inhabit the world: “the human condition of work is worldliness”. Work is pervaded by a means-to-end logic or an instrumental rationale.

    [7] Arendt, The Human Condition, 1958; reissued, University of Chicago Press, 1998 (159).

    [8] Arendt, The Human Condition (160).

    [9] Seyla Benhabib, The Reluctant Modernism of Hannah Arendt, Revised edition, Lanham, MD: Rowman & Littlefield Publishers, 2003, (211).

    [10] See notably the work of Lynn Stout and the Frank Bold Foundation’s project on the purpose of corporations.

    [11] This expression was introduced in the Onlife Initiative by Charles Ess, but from a different perspective. Ess’s relational self is grounded in pre-Modern and Eastern/oriental societies. He writes: “In “Western” societies, the affordances of what McLuhan and others call “electric media,” including contemporary ICTs, appear to foster a shift from the Modern Western emphases on the self as primarily rational, individual, and thereby an ethically autonomous moral agent towards greater (and classically “Eastern” and pre-Modern) emphases on the self as primarily emotive, and relational—i.e., as constituted exclusively in terms of one’s multiple relationships, beginning with the family and extending through the larger society and (super)natural orders”. Ess, in Floridi, ed., The Onlife Manifesto (98).

    [12] Williams, Truth, Autonomy, and Speech.

    [13] Hannah Arendt and Jerome Kohn, Between Past and Future, Revised edition, New York: Penguin Classics, 2006 (55).

    [14] See Richard Rorty, Contingency, Irony, and Solidarity, New York: Cambridge University Press, 1989.

    [15] I thank Shauna Dillavou for suggesting these alternate meanings for “WWW.”

    [16] Virginia Woolf, Three Guineas, New York: Harvest, 1966.

  • Coding Bootcamps and the New For-Profit Higher Ed

    Coding Bootcamps and the New For-Profit Higher Ed

    By Audrey Watters
    ~
    After decades of explosive growth, the future of for-profit higher education might not be so bright. Or, depending on where you look, it just might be…

    In recent years, there have been a number of investigations – in the media, by the government – into the for-profit college sector and questions about these schools’ ability to effectively and affordably educate their students. Sure, advertising for for-profits is still plastered all over the Web, the airwaves, and public transportation, but as a result of journalistic and legal pressures, the lure of these schools may well be a lot less powerful. If nothing else, enrollment and profits at many for-profit institutions are down.

    Despite the massive amounts of money spent by the industry to prop itself up – not just on ads but on lobbying and legal efforts – the Obama Administration has made cracking down on for-profits a centerpiece of its higher education policy, accusing these schools of luring students with misleading and overblown promises, often leaving them with low-status degrees sneered at by employers and with loans they cannot afford to pay back.

    But the Obama Administration has also just launched an initiative that will make federal financial aid available to newcomers in the for-profit education sector: ed-tech experiments like “coding bootcamps” and MOOCs. Why are these particular for-profit experiments deemed acceptable? What do they do differently from the much-maligned for-profit universities?

    School as “Skills Training”

    In many ways, coding bootcamps share the justification for their existence with for-profit universities. That is, they were founded to help meet the (purported) demands of the job market: training people in certain technical skills, particularly those that meet the short-term needs of employers. Whether they meet students’ long-term goals remains to be seen.

    I write “purported” here even though it’s quite common to hear claims that the economy is facing a “STEM crisis” – that too few people have studied science, technology, engineering, or math and employers cannot find enough skilled workers to fill jobs in those fields. But claims about a shortage of technical workers are debatable, and lots of data would indicate otherwise: wages in STEM fields have remained flat, for example, and many who graduate with STEM degrees cannot find work in their field. In other words, the crisis may be “a myth.”

    But it’s a powerful myth, and one that isn’t terribly new, dating back at least to the launch of the Sputnik satellite in 1957 and subsequent hand-wringing over the Soviets’ technological capabilities and technical education as compared to the US system.

    There are actually a number of narratives – some of them competing narratives – at play here in the recent push for coding bootcamps, MOOCs, and other ed-tech initiatives: that everyone should go to college; that college is too expensive – “a bubble” in the Silicon Valley lexicon; that alternate forms of credentialing will be developed (by the technology sector, naturally); that the tech sector is itself a meritocracy, and college degrees do not really matter; that earning a degree in the humanities will leave you unemployed and burdened by student loan debt; that everyone should learn to code. Much like that supposed STEM crisis and skill shortage, these narratives might be powerful, but they too are hardly provable.

    Nor is the promotion of a more business-focused education especially new.


    Career Colleges: A History

    Foster’s Commercial School of Boston, founded in 1832 by Benjamin Franklin Foster, is often recognized as the first school established in the United States for the specific purpose of teaching “commerce.” Many other commercial schools opened on its heels, most located in the Atlantic region in major trading centers like Philadelphia, Boston, New York, and Charleston. As the country expanded westward, so did these schools. Bryant & Stratton College was founded in Cleveland in 1854, for example, and it established a chain of schools, promising to open a branch in every American city with a population of more than 10,000. By 1864, it had opened more than 50, and the chain is still in operation today with 18 campuses in New York, Ohio, Virginia, and Wisconsin.

    The curriculum of these commercial colleges was largely built around the demands of local employers and an economy that was changing with the Industrial Revolution. Schools offered courses in bookkeeping, accounting, penmanship, surveying, and stenography. This was in marked contrast to those universities built on a European model, which tended to teach topics like theology, philosophy, and classical language and literature. If those universities were “elitist,” the commercial colleges were “popular” – over 70,000 students were enrolled in them in 1897, compared to just 5,800 in colleges and universities – a contrast that highlights a refrain still familiar today: that traditional higher ed institutions do not meet everyone’s needs.


    The commercial colleges became intertwined with many success stories of the nineteenth century: Andrew Carnegie attended night school in Pittsburgh to learn bookkeeping, and John D. Rockefeller studied banking and accounting at Folsom’s Commercial College in Cleveland. The type of education offered at these schools was promoted as a path to becoming a “self-made man.”

    That’s the story that still gets told: these sorts of classes open up opportunities for anyone to gain the skills (and perhaps the certification) that will enable upward mobility.

    It’s a story echoed in the ones told about (and by) John Sperling as well. Born into a working-class family, Sperling worked as a merchant marine, then attended community college during the day and worked as a gas station attendant at night. He later transferred to Reed College, went on to UC Berkeley, and completed his doctorate at Cambridge University. But Sperling felt as though these prestigious colleges catered to privileged students; he wanted a better way for working adults to be able to complete their degrees. In 1976, he founded the University of Phoenix, one of the largest for-profit colleges in the US, which at its peak in 2010 enrolled almost 600,000 students.

    Other well-known names in the business of for-profit higher education: Walden University (founded in 1970), Capella University (founded in 1993), Laureate Education (founded in 1999), DeVry University (founded in 1931), Education Management Corporation (founded in 1962), Strayer University (founded in 1892), Kaplan University (founded in 1937 as The American Institute of Commerce), and Corinthian Colleges (founded in 1995 and defunct in 2015).

    It’s important to recognize the connection of these for-profit universities to older career colleges, and it would be a mistake to see these organizations as distinct from the more recent development of MOOCs and coding bootcamps. Kaplan, for example, acquired the code school Dev Bootcamp in 2014. Laureate Education is an investor in the MOOC provider Coursera. The Apollo Education Group, the University of Phoenix’s parent company, is an investor in the coding bootcamp The Iron Yard.


    Promises, Promises

    Much like the worries about today’s for-profit universities, even the earliest commercial colleges were frequently accused of being “purely business speculations” – “diploma mills” – mishandled by administrators who put the bottom line over the needs of students. There were concerns about the quality of instruction and about the value of the education students were receiving.

    That’s part of the apprehension about for-profit universities’ most recent manifestations too: that these schools charge a lot of money for a certification that, at the end of the day, means little. But at least the nineteenth-century commercial colleges were affordable, UC Berkeley history professor Caitlin Rosenthal argues in a 2012 op-ed in Bloomberg:

    The most common form of tuition at these early schools was the “life scholarship.” Students paid a lump sum in exchange for unlimited instruction at any of the college’s branches – $40 for men and $30 for women in 1864. This was a considerable fee, but much less than tuition at most universities. And it was within reach of most workers – common laborers earned about $1 per day and clerks’ wages averaged $50 per month.

    Many of these “life scholarships” promised that students who enrolled would land a job – and if they didn’t, they could always continue their studies. That’s quite different than the tuition at today’s colleges – for-profit or not-for-profit – which comes with no such guarantee.

    Interestingly, several coding bootcamps do make this promise. A 48-week online program at Bloc will run you $24,000, for example. But if you don’t find a job that pays $60,000 after four months, your tuition will be refunded, the startup has pledged.


    According to a recent survey of coding bootcamp alumni, 66% of graduates do say they’ve found employment (63% of them full-time) in a job that requires the skills they learned in the program. 89% of respondents say they found a job within 120 days of completing the bootcamp. Yet 21% say they’re unemployed – a number that seems quite high, particularly in light of that supposed shortage of programming talent.

    For-Profit Higher Ed: Who’s Being Served?

    The gulf between for-profit higher ed’s promise of improved job prospects and the realities of graduates’ employment, along with its tuition rates, is one of the reasons the Obama Administration has advocated for “gainful employment” rules. These would measure and monitor the debt-to-earnings ratio of graduates from career colleges and in turn penalize those schools whose graduates have annual loan payments exceeding 8% of their wages or 20% of their discretionary earnings. (The gainful employment rules apply only to schools eligible for Title IV federal financial aid.)

    The data is still murky about how much debt attendees at coding bootcamps accrue and how “worth it” these programs really might be. According to the aforementioned survey, the average tuition at these programs is $11,852. This figure might be a bit deceiving as the price tag and the length of bootcamps vary greatly. Moreover, many programs, such as App Academy, offer their program for free (well, plus a $5000 deposit) but then require that graduates repay up to 20% of their first year’s salary back to the school. So while the tuition might appear to be low in some cases, the indebtedness might actually be quite high.

    According to Course Report’s survey, 49% of graduates say that they paid tuition out of their own pockets, 21% say they received help from family, and just 1.7% say that their employer paid (or helped with) the tuition bill. Almost 25% took out a loan.

    That percentage – those going into debt for a coding bootcamp program – has increased quite dramatically over the last few years. (Less than 4% of graduates in the 2013 survey said that they had taken out a loan). In part, that’s due to the rapid expansion of the private loan industry geared towards serving this particular student population. (Incidentally, the two ed-tech companies which have raised the most money in 2015 are both loan providers: SoFi and Earnest. The former has raised $1.2 billion in venture capital this year; the latter $245 million.)


    The Obama Administration’s newly proposed “EQUIP” experiment will open up federal financial aid to some coding bootcamps and other ed-tech providers (like MOOC platforms), but it’s important to underscore some of the key differences here between federal loans and private-sector loans: federal student loans don’t have to be repaid until you graduate or leave school; federal student loans offer forbearance and deferment if you’re struggling to make payments; federal student loans have a fixed interest rate, often lower than private loans; federal student loans can be forgiven if you work in public service; federal student loans (with the exception of PLUS loans) do not require a credit check. The latter in particular might help to explain the demographics of those who are currently attending coding bootcamps: if they’re having to pay out-of-pocket or take loans, students are much less likely to be low-income. Indeed, according to Course Report’s survey, the cost of the bootcamps and whether or not they offered a scholarship was one of the least important factors when students chose a program.

    Here’s a look at some coding bootcamp graduates’ demographic data (as self-reported):

    Age: mean 30.95
    Gender: Female 36.3%; Male 63.1%
    Ethnicity: American Indian 1.0%; Asian American 14.0%; Black 5.0%; Other 17.2%; White 62.8%
    Hispanic Origin: Yes 20.3%; No 79.7%
    Citizenship: Born in the US 78.2%; Naturalized citizen 9.7%; Not a US citizen 12.2%
    Education: High school dropout 0.2%; High school graduate 2.6%; Some college 14.2%; Associate’s degree 4.1%; Bachelor’s degree 62.1%; Master’s degree 14.2%; Professional degree 1.5%; Doctorate 1.1%

    (According to several surveys of MOOC enrollees, these students also tend to be overwhelmingly male and from more affluent neighborhoods, and MOOC students also tend to already hold Bachelor’s degrees. The median age of MITx registrants is 27.)

    It’s worth considering how the demographics of students in MOOCs and coding bootcamps may (or may not) be similar to those enrolled at other for-profit post-secondary institutions, particularly since all of these programs tend to invoke the rhetoric about “democratizing education” and “expanding access.” Access for whom?

    Some two million students were enrolled in for-profit colleges in 2010, up from 400,000 a decade earlier. These students are disproportionately older, African American, and female when compared to the higher ed student population as a whole. While one in 20 students overall is enrolled in a for-profit college, one in 10 African American students, one in 14 Latino students, and one in 14 first-generation college students are enrolled at a for-profit. Students at for-profits are more likely to be single parents. They’re less likely to enter with a high school diploma. Dependent students at for-profits have about half as much family income as students at not-for-profit schools. (This demographic data is drawn from the NCES and from Harvard University researchers David Deming, Claudia Goldin, and Lawrence Katz in their 2013 study of for-profit colleges.)

    Deming, Goldin, and Katz argue that

    The snippets of available evidence suggest that the economic returns to students who attend for-profit colleges are lower than those for public and nonprofit colleges. Moreover, default rates on student loans for proprietary schools far exceed those of other higher-education institutions.


    According to one 2010 report, just 22% of first-time, full-time students pursuing Bachelor’s degrees at for-profit colleges in 2008 graduated, compared to 55% and 65% of students at public and private non-profit universities respectively. Of the more than 5,000 career programs that the Department of Education tracks, 72% of those offered by for-profit institutions produce graduates who earn less than high school dropouts.

    For their part, today’s MOOCs and coding bootcamps also boast that their students will find great success on the job market. Coursera, for example, recently surveyed its students who’d completed one of its online courses and 72% who responded said they had experienced “career benefits.” But without the mandated reporting that comes with federal financial aid, a lot of what we know about their student population and student outcomes remains pretty speculative.

    What kind of students benefit from coding bootcamps and MOOC programs, the new for-profit education? We don’t really know… although based on the history of higher education and employment, we can guess.

    EQUIP and the New For-Profit Higher Ed

    On October 14, the Obama Administration announced a new initiative, the Educational Quality through Innovative Partnerships (EQUIP) program, which will provide a pathway for unaccredited education programs like coding bootcamps and MOOCs to become eligible for federal financial aid. According to the Department of Education, EQUIP is meant to open up “new models of education and training” to low income students. In a press release, it argues that “Some of these new models may provide more flexible and more affordable credentials and educational options than those offered by traditional higher institutions, and are showing promise in preparing students with the training and education needed for better, in-demand jobs.”

    The EQUIP initiative will partner accredited institutions with third-party providers, loosening the “50% rule” that prohibits accredited schools from outsourcing more than 50% of an accredited program. Since bootcamps and MOOC providers “are not within the purview of traditional accrediting agencies,” the Department of Education says, “we have no generally accepted means of gauging their quality.” So those organizations that apply for the experiment will have to provide an outside “quality assurance entity,” which will help assess “student outcomes” like learning and employment.

    By making financial aid available for bootcamps and MOOCs, the Obama Administration may simply be opening the doors to more of precisely the practices the for-profit education industry has long been accused of: expanding rapidly, lowering the quality of instruction, focusing its marketing on certain populations (such as veterans), and profiting off of taxpayer dollars.

    Who benefits from the availability of aid? And who benefits from its absence? (“Who” here refers to students and to schools.)

    Shawna Scott argues in “The Code School-Industrial Complex” that without oversight, coding bootcamps re-inscribe the dominant beliefs and practices of the tech industry. Despite all the talk of “democratization,” this is a new form of gatekeeping.

    Before students are even accepted, school admission officers often select for easily marketable students, which often translates to students with the most privileged characteristics. Whether through intentionally targeting those traits because it’s easier to ensure graduates will be hired, or because of unconscious bias, is difficult to discern. Because schools’ graduation and employment rates are their main marketing tool, they have a financial stake in only admitting students who are at low risk of long-term unemployment. In addition, many schools take cues from their professional developer founders and run admissions like they hire for their startups. Students may be subjected to long and intensive questionnaires, phone or in-person interviews, or be required to submit a ‘creative’ application, such as a video. These requirements are often onerous for anyone working at a paid job or as a caretaker for others. Rarely do schools proactively provide information on alternative application processes for people of disparate ability. The stereotypical programmer is once again the assumed default.

    And so, despite the recent moves to sanction certain ed-tech experiments, some in the tech sector have been quite vocal in their opposition to more regulations governing coding schools. It’s not just EQUIP either; there was much outcry last year after several states, including California, “cracked down” on bootcamps. Many others have framed the entire accreditation system as a “cabal” that stifles innovation. “Innovation” in this case implies alternate certificate programs – not simply Associate’s or Bachelor’s degrees – in timely, technical topics demanded by local/industry employers.


    The Forgotten Tech Ed: Community Colleges

    Of course, there is an institution that’s long offered alternate certificate programs in timely, technical topics demanded by local/industry employers, and that’s the community college system.

    Vox’s Libby Nelson observed that “The NYT wrote more about Harvard last year than all community colleges combined,” and certainly the conversations in the media (and elsewhere) often ignore that community colleges exist at all, even though these schools educate almost half of all undergraduates in the US.

    Like much of public higher education, community colleges have seen their funding shrink in recent decades and have been tasked to do more with less. For community colleges, it’s a lot more with a lot less. Open enrollment, for example, means that these schools educate students who require more remediation. Yet despite many community college students being “high need,” community colleges spend far less per pupil than four-year institutions do. Deep budget cuts have also meant that, even with their open enrollment policies, community colleges are having to restrict admissions. In 2012, some 470,000 students in California were on waiting lists, unable to get into the courses they needed.

    This is what we know from history: as funding for public higher ed decreased – for two- and four-year schools alike – for-profit higher ed expanded, promising precisely what today’s MOOCs and coding bootcamps now insist they are the first and only schools to offer: innovative programs that train students in the kinds of skills that lead to good jobs. History tells us otherwise…
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely read Hack Education blog, on which an earlier version of this essay first appeared, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

    Back to the essay

  • How We Think About Technology (Without Thinking About Politics)

    How We Think About Technology (Without Thinking About Politics)

    a review of N. Katherine Hayles, How We Think: Digital Media and Contemporary Technogenesis (Chicago, 2012)
    by R. Joshua Scannell

    ~

    In How We Think, N. Katherine Hayles addresses a number of increasingly urgent problems facing both the humanities in general and scholars of digital culture in particular. In keeping with the research interests she has explored at least since 2002’s Writing Machines (MIT Press), Hayles examines the intersection of digital technologies and humanities practice to argue that contemporary transformations in the orientation of the University (and elsewhere) are attributable to shifts that ubiquitous digital culture has engendered in embodied cognition. She calls this process of mutual evolution between the computer and the human technogenesis (a term most widely associated with the work of Bernard Stiegler, although Hayles’s theories often aim in a different direction from Stiegler’s). Hayles argues that technogenesis is the basis for the reorientation of the academy, including students, away from established humanistic practices like close reading. Put another way, not only have we become posthuman (as Hayles discusses in her landmark 1999 University of Chicago Press book, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics), but our brains have begun to evolve to think with computers specifically and digital media generally. Rather than a rearguard eulogy for the humanities that was, Hayles advocates for an opening of the humanities to digital dromology; she sees the Digital Humanities as a particularly fertile ground from which to reimagine the humanities generally.

    Hayles is an exceptional scholar, and while her theory of technogenesis is not particularly novel, she articulates it with a clarity and elegance that are welcome and useful in a field that is often cluttered with good ideas, unintelligibly argued. Her close engagement with work across a range of disciplines – from Hegelian philosophy of mind (Catherine Malabou) to theories of semiosis and new media (Lev Manovich) to experimental literary production – grounds an argument about the necessity of transmedial engagement in an effective praxis. Moreover, she ably shifts generic gears over the course of a relatively short manuscript, moving from quasi-ethnographic engagement with University administrators, to media archaeology a la Friedrich Kittler, to contemporary literary theory, with grace. Her critique of the humanities that is, therefore, doubles as a praxis: she is actually producing the discipline-flouting work that she calls on her colleagues to pursue.

    The debate about the death and/or future of the humanities is weather-worn, but Hayles’s theory of technogenesis as a platform for engaging in it is a welcome change. For Hayles, the technogenetic argument centers on temporality, and the multiple temporalities embedded in computer processing and human experience. She envisions this relation as cybernetic, in which computer and human are integrated as a system through the feedback loops of their coemergent temporalities. So, computers speed up human responses, which lag behind innovations, which prompt beta test cycles at quicker rates, which demand that humans respond affectively, nonconsciously. The recursive relationship between human duration and machine temporality effectively mutates both. Humanities professors might complain that their students cannot read “closely” like they used to, but for Hayles this is a failure of those disciplines to imagine methods in step with technological change. Instead of digital media making us “dumber” by reducing our attention spans, as Nicholas Carr argues, Hayles claims that the movement towards what she calls “hyper reading” is an ontological and biological fact of embodied cognition in the age of digital media. If “how we think” were posed as a question, the answer would be: bodily, quickly, cursorily, affectively, non-consciously.

    Hayles argues that this doesn’t imply an eliminative teleology of human capacity, but rather an opportunity to think through novel, expansive interventions into this cyborg loop. We may be thinking (and feeling, and experiencing) differently than we used to, but this remains a fact of human existence. Digital media has shifted the ontics of our technogenetic reality, but it has not fundamentally altered its ontology. Morphological biology, in fact, entails ontological stability. To be human, and to think like one, is to be with machines, and to think with them. The kids, in other words, are all right.

    This sort of quasi-Derridean or Stieglerian Hegelianism is obviously not uncommon in media theory. As Hayles deploys it, this disposition provides a powerful framework for thinking through the relationship of humans and machines without ontological reductivism on either end. Moreover, she engages this theory in a resolutely material fashion, evading the enervating tendency of many theorists in the humanities to reduce actually existing material processes to metaphor and semiosis. Her engagement with Malabou’s work on brain plasticity is particularly useful here. Malabou has argued that the choice facing the intellectual in the age of contemporary capitalism is between plasticity and self-fashioning. Plasticity is a quintessential demand of contemporary capitalism, whereas self-fashioning opens up radical possibilities for intervention. The distinction between these two potentialities, however, is unclear – and therefore demands an ideological commitment to the latter. Hayles is right to point out that this dialectic insufficiently accounts for the myriad ways in which we are engaged with media, and are in fact produced, bodily, by it.

    But while Hayles’s critique is compelling, the responses she posits may be less so. Against what she sees as Malabou’s snide rejection of the potential of media, she argues:

    It is precisely because contemporary technogenesis posits a strong connection between ongoing dynamic adaptation of technics and humans that multiple points of intervention open up. These include making new media…adapting present media to subversive ends…using digital media to reenvision academic practices, environments and strategies…and crafting reflexive representations of media self fashionings…that call attention to their own status as media, in the process raising our awareness of both the possibilities and dangers of such self-fashioning. (83)

    With the exception of the ambiguous labor done by the word “subversive,” this reads like a catalog of demands made by administrators seeking to offload ever-greater numbers of students into MOOCs. This is unfortunately indicative of what is, throughout the book, a basic failure to engage with the political economics of “digital media and contemporary technogenesis.” Not every book must be explicitly political, and there is little more ponderous than the obligatory, token consideration of “the political” that so many media scholars feel compelled to make. And yet, this is a text that claims to explain “how” “we” “think” under post-industrial, cognitive capitalism, and so the lack of this engagement cannot help but show.

    Universities across the country are collapsing due to lack of funding, students are practically reduced to debt bondage to cope with the costs of a desperately near-compulsory higher education that fails to deliver economic promises, “disruptive” deployment of digital media has conjured teratic corporate behemoths that all presume to “make the world a better place” on the backs of extraordinarily exploited workforces. There is no way for an account of the relationship between the human and the digital in this capitalist context not to be political. Given the general failure of the book to take these issues seriously, it is unsurprising that two of Hayles’s central suggestions for addressing the crisis in the humanities are 1) to use voluntary, hobbyist labor to do the intensive research that will serve as the data pool for digital humanities scholars and 2) to increasingly develop University partnerships with major digital conglomerates like Google.

    This reads like a cost-cutting administrator’s fever dream because, in the chapter in which Hayles promulgates novel (one might say “disruptive”) ideas for how best to move the humanities forward, she only speaks to administrators. There is no consideration of labor in this call for the reformation of the humanities. Given the enormous amount of writing that has been done on affective capitalism (Clough 2008), digital labor (Scholz 2012), emotional labor (Van Cleaf 2015), and so many other iterations of exploitation under digital capitalism, it boggles the mind a bit to see an embrace of the Mechanical Turk as a model for the future university.

    While it may be true that humanities education is in crisis – that it lacks funding, that its methods don’t connect with students, that it increasingly must justify its existence on economic grounds – it is unclear that any of these aspects of the crisis are attributable to a lack of engagement with the potentials of digital media, or the recognition that humans are evolving with our computers. All of these crises are just as plausibly attributable to what, among many others, Chandra Mohanty identified ten years ago as the emergence of the corporate university, and the concomitant transformation of the mission of the university from one of fostering democratic discourse to one of maximizing capital (Mohanty 2003). In other words, we might as easily attribute the crisis to the tightening command that contemporary capitalist institutions have over the logic of the university.

    Humanities departments are underfunded precisely because they cannot – almost by definition – justify their existence on monetary grounds. When students are not only acculturated but compelled, by financial realities and debt, to understand the university as a credentialing institution capable of guaranteeing certain baseline waged occupations – then it is no surprise that they are uninterested in “close reading” of texts. Or, rather, it might be true that students’ “hyperreading” is a consequence of their cognitive evolution with machines. But it is also just as plausibly a consequence of the fact that students often are working full-time jobs while taking on full-time (or more) course loads. They do not have the time or inclination to read long, difficult texts closely. They do not have the time or inclination because of the consolidating paradigm around what labor, and particularly their labor, is worth. Why pay for a researcher when you can get a hobbyist to do it for free? Why pay for a humanities line when Google and Wikipedia can deliver everything an institution might need to know?

    In a political economy in which Amazon’s reduction of human employees to algorithmically-managed meat wagons is increasingly diagrammatic and “innovative” in industries from service to criminal justice to education, the proposals Hayles is making to ensure the future of the university seem more fifth-columnist than emancipatory.

    This stance also evacuates much-needed context from what are otherwise thoroughly interesting, well-crafted arguments. This is particularly true of How We Think’s engagement with Lev Manovich’s claims regarding narrative and database. Speaking reductively, in The Language of New Media (MIT Press, 2001), Manovich argued that there are two major communicative forms: narrative and database. Narrative, in his telling, is more or less linear, and dependent on human agency to be sensible. Novels and films, despite many modernist efforts to subvert this, tend toward narrative. The database, as opposed to the narrative, arranges information according to patterns, and does not depend on a diachronic point-to-point communicative flow to be intelligible. Rather, the database exists in multiple temporalities, with the accumulation of data for rhizomatic recall of seemingly unrelated information producing improbable patterns of knowledge production. Historically, he argues, narrative has dominated. But with the increasing digitization of cultural output, the database will more and more replace narrative.

    Manovich’s dichotomy of media has been both influential and roundly criticized (not least by Manovich himself in Software Takes Command, Bloomsbury 2013). Hayles convincingly takes it to task for being reductive and instituting a teleology of cultural forms that isn’t borne out by cultural practice. Narrative, obviously, hasn’t gone anywhere. Hayles extends this critique by considering the distinctive ways space and time are mobilized by database and narrative formations. Databases, she argues, depend on interoperability between different software platforms that need to access the stored information. In the case of geographic information systems and global positioning systems, this interoperability depends on some sort of universal standard against which all information can be measured. Thus, Cartesian space and time are inevitably inserted into database logics, depriving them of the capacity for liveliness. That is to say that the need to standardize the units that measure space and time in machine-readable databases imposes a conceptual grid on the world that is creatively limiting. Narrative, on the other hand, does not depend on interoperability, and therefore does not have an absolute referent against which it must make itself intelligible. Given this, it is capable of complex and variegated temporalities not available to databases. Databases, she concludes, can only operate within spatial parameters, while narrative can represent time in different, more creative ways.

    As an expansion and corrective to Manovich, this argument is compelling. Displacing his teleology and infusing it with a critique of the spatio-temporal work of database technologies and their organization of cultural knowledge is crucial. Hayles bases her claim on a detailed and fascinating comparison between the coding requirements of relational databases and object-oriented databases. But, somewhat surprisingly, she takes these different programming language models and metonymizes them as social realities. Temporality in the construction of objects transmutes into temporality as a philosophical category. It is unclear how this leap holds without an attendant sociopolitical critique, for it is impossible to talk about the cultural logic of computation without talking about the social context in which this computation emerges. In other words, it is absolutely true that the “spatializing” techniques of coders (like clustering) render data points as spatial within the context of the database. But it is not an immediately logical leap to then claim that therefore databases as a cultural form are spatial and not temporal.

    Further, in the context of contemporary data science, Hayles’s claims about interoperability are at least somewhat puzzling. Interoperability and standardized referents might be a theoretical necessity for databases to be useful, but the ever-inflating markets around “big data,” data analytics, insights, overcoming data siloing, edge computing, etc., demonstrate quite categorically that interoperability-in-general is not only non-existent, but is productively non-existent. That is to say, there are enormous industries that have developed precisely around efforts to synthesize information generated and stored across non-interoperable datasets. Moreover, data analytics companies provide insights almost entirely based on their capacity to track improbable data patterns and resonances across unlikely temporalities.

    Far from a Cartesian world of absolute space and time, contemporary data science is a quite posthuman enterprise in committing machine learning to stretch, bend and strobe space and time in order to generate the possibility of bankable information. This is both theoretically true in the sense of setting algorithms to work sorting, sifting and analyzing truly incomprehensible amounts of data and materially true in the sense of the massive amount of capital and labor that is invested in building, powering, cooling, staffing and securing data centers. Moreover, the amount of data “in the cloud” has become so massive that analytics companies have quite literally reterritorialized information – particularly firms specializing in high-frequency trading, which practice “co-location,” locating data centers geographically closer to the sites from which they will be accessed in order to maximize processing speed.

    Data science functions much like financial derivatives do (Martin 2015). Value in the present is hedged against the probable future spatiotemporal organization of software and material infrastructures capable of rendering a possibly profitable bundling of information in the immediate future. That may not be narrative, but it is certainly temporal. It is a temporality spurred by the queer fluxes of capital.

    All of which circles back to the title of the book. Hayles sets out to explain How We Think. A scholar with such an impeccable track record for pathbreaking analyses of the relationship of the human to technology is setting a high bar for herself with such a goal. In an era in which (in no small part due to her work) it is increasingly unclear who we are, what thinking is or how it happens, it may be an impossible bar to meet. Hayles does an admirable job of trying to inject new paradigms into a narrow academic debate about the future of the humanities. Ultimately, however, there is more resting on the question than the book can account for, not least the livelihoods and futures of her current and future colleagues.
    _____

    R Joshua Scannell is a PhD candidate in sociology at the CUNY Graduate Center. His current research looks at the political economic relations between predictive policing programs and urban informatics systems in New York City. He is the author of Cities: Unauthorized Resistance and Uncertain Sovereignty in the Urban World (Paradigm/Routledge, 2012).

    Back to the essay
    _____

    Patricia T. Clough. 2008. “The Affective Turn.” Theory, Culture & Society 25(1): 1-22.

    N. Katherine Hayles. 2002. Writing Machines. Cambridge: MIT Press.

    N. Katherine Hayles. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.

    Catherine Malabou. 2008. What Should We Do with Our Brain? New York: Fordham University Press.

    Lev Manovich. 2001. The Language of New Media. Cambridge: MIT Press.

    Lev Manovich. 2013. Software Takes Command. London: Bloomsbury.

    Randy Martin. 2015. Knowledge LTD: Toward a Social Logic of the Derivative. Philadelphia: Temple University Press.

    Chandra Mohanty. 2003. Feminism Without Borders: Decolonizing Theory, Practicing Solidarity. Durham: Duke University Press.

    Trebor Scholz, ed. 2012. Digital Labor: The Internet as Playground and Factory. New York: Routledge.

    Bernard Stiegler. 1998. Technics and Time, 1: The Fault of Epimetheus. Stanford: Stanford University Press.

    Kara Van Cleaf. 2015. “Of Woman Born to Mommy Blogged: The Journey from the Personal as Political to the Personal as Commodity.” Women’s Studies Quarterly 43(3/4): 247-265.
