b2o

  • Sharrona Pearl — In the Shadow of the Valley (Review of Anna Wiener, Uncanny Valley)

    a review of Anna Wiener, Uncanny Valley: A Memoir (Macmillan, 2020)

    by Sharrona Pearl

    ~

    Uncanny Valley, the latest, very well-publicized memoir of Silicon Valley apostasy, is, for sure, a great read.  Anna Wiener writes beautiful words that become sentences that become beautiful paragraphs and beautiful chapters.  The descriptions are finely wrought, and if not quite cinematic then very, very visceral.  While it is a wry and tense and sometimes stressful story, it’s also exactly what it says it is: a memoir.  It’s the story of her experiences.  It captures a zeitgeist – beautifully, and with nuance and verve and life. It highlights contradictions and complications and confusions: hers, but also of Silicon Valley culture itself.  It muses upon them, and worries them, and worries over them.  But it doesn’t analyze them and it certainly doesn’t solve them, even if you get the sense that Wiener would quite like to do so.  That’s okay.  Solving the problems exposed by Silicon Valley tech culture and tech capitalism is quite a big ask.

    Wiener’s memoir tells the story of her accidental immersion into, and gradual (too gradual?) estrangement from, essentially, Big Tech.  A newly minted graduate from a prestigious small liberal arts college (of course), Wiener was living in Brooklyn (of course) while working as an underpaid assistant in a small literary agency (of course). “Privileged and downwardly mobile,” as she puts it, Wiener was just about getting by with some extra help from her parents, embracing being perpetually broke as she party-hopped and engaged in some light drug use while rolling her eyes at all the IKEA furniture.  In as clear a portrait of Brooklyn as anything could be, Wiener’s friends spent 2013 making sourdough bread near artisan chocolate shops while talking on their ironic flip phones.  World-weary at 24, Wiener decides to shake things up and applies for a job at a Manhattan-based ebook startup.  It’s still about books, she rationalizes, so the startup part is almost beside the point.  Or maybe, because it’s still about books, the tech itself can be used for good.  Of course, neither of these things turns out to be true for either this startup or tech itself.  Wiener quickly discovers (and so do her bosses) that she’s just not the right fit.  So she applies for another tech job instead.  This time in the Bay Area.  Why not?  She’d gotten a heady dose of the optimism and opportunity of startup culture, and they offered her a great salary.  It was a good decision, a smart and responsible and exciting decision, even as she was sad to leave the books behind.  But honestly, she’d done that the second she joined the first startup.  And in a way, the entire memoir is Wiener figuring that out.

    Maybe Wiener’s privilege (alongside generational resources and whiteness) is living in a world where you don’t have to worry about Silicon Valley even as it permeates everything.  She and her friends were being willfully ignorant in Brooklyn; it turns out, as Wiener deftly shows us, you can be willfully ignorant from the heart of Silicon Valley too.  Wiener lands a job at one startup and then, at some point, takes a pay cut to work at another whose culture is a better fit.  “Culture” does a lot of work here to elide sexism, harassment, surveillance, and violation of privacy.  To put it another way: bad stuff is going on around Wiener, at the very companies she works for, and she doesn’t really notice or pay attention…so we shouldn’t either.  Even though she narrates these numerous and terrible violations clearly and explicitly, we don’t exactly clock them because they aren’t a surprise.  We already knew.  We don’t care.  Or we already did the caring part and we’ve moved on.

    If 2013 feels both too early and too late for sourdough (weren’t people making bread in the 1950s because they had to?  And in 2020 because of COVID?), that’s a bit like the book itself.  Surely the moment for Silicon Valley Seduction and Cessation was the early 2000s?  And surely our disillusionment with the surveillance of Big Tech and the loss of privacy didn’t happen until after 2016? (Well, if you pay attention to the timeline in the book, that’s when it happened for Wiener too).  I was there for the bubble in the early aughts.  How could anyone not know what to expect?  Which isn’t to say that this memoir isn’t a gripping and illustrative mise-en-scène.  It’s just that in the era of Coded Bias and Virginia Eubanks and Safiya Noble and Meredith Broussard and Ruha Benjamin and Shoshana Zuboff… didn’t we already know that Big Tech was Bad?  When Wiener has her big reveal in learning from her partner Noah that “we worked in a surveillance company,” it’s more like: well, duh.  (Does it count as whistleblowing if it isn’t a secret?)

    But maybe that wasn’t actually the big reveal of the book.  Maybe the point was that Wiener did already know; she just didn’t quite realize how seductive power is, how pervasive and all-encompassing a culture can be, and how little easy distinctions between good and bad do for us in the totalizing world of tech.  She wants to break that all down for us.  The memoir is kind of Tech Tales for Lit Critics, which is distinct from Tech for Dummies™ because maybe the critics are the smart ones in the end.  The story is for “us”: Wiener’s tribe of smart and idealistic and disaffected humanists.  (Truly us, right dear readers?)  She makes it clear that even as she works alongside and with an army of engineers, there is always an us and them.  (Maybe partly because really, she works for the engineers, and no matter what the company says everyone knows what the hierarchy is.)  The “us” are the skeptics and the “them” are the cult believers except that, as her weird affectation of never naming any tech firms (“an online superstore; a ride-hailing app; a home-sharing platform; the social network everyone loves to hate”) suggests, we are all in the cult in some way, even if we (“we”) – in Wiener’s Brooklyn tribe forever no matter where we live – half-heartedly protest. (For context: I’m not on Facebook and I don’t own a cell phone but PLEASE follow me on Twitter @sharronapearl).

    Wiener uses this “NDA language” throughout the memoir.  At first it’s endearing – imagine a world in which we aren’t constantly name-checking Amazon and Airbnb.  Then it’s addictive – when I was grocery shopping I began to think of my local Sprouts as “a West-Coast transplant fresh produce store.”  Finally, it’s annoying – just say Uber, for heaven’s sake!  But maybe there’s a method to it: these labels make the ubiquity of these platforms all the more clear, and force us to confront just how very integrated into our lives they all are.  We are no different from Wiener; we all benefit from surveillance.

    Sometimes the memoir feels a bit like stunt journalism, the tech take on The Year of Living Biblically or Running the Books.  There’s a sense from the outset that Wiener is thinking “I’ll take the job, and if I hate it I can always write about it.”  And indeed she did, and indeed she does, now working as the tech and start-up correspondent for The New Yorker.  (Read her articles: they’re terrific.)  But that’s not at all a bad thing: she tells her story well, with self-awareness and liveliness and a lot of patience in her sometimes ironic and snarky tone.  It’s exactly what we imagine it to be when we see how the sausage is made: a little gross, a lot upsetting, and still really quite interesting.

    If Wiener feels a bit old before her time (she’s in her mid-twenties during her time in tech, and constantly lamenting how much younger all her bosses are) it’s both a function of Silicon Valley culture and its veneration of young male cowboys, and her own affectations.  Is any Brooklyn millennial ever really young?  Only when it’s too late.  As a non-engineer and a woman, Wiener is quite clear that for Silicon Valley, her time has passed.  Here is when she is at her most relatable in some ways: we have all been outsiders, and certainly many of us would be in that setting.  At the same time, at 44 with three kids, I feel a bit like telling this sweet summer child to take her time.  And that much more will happen to her than already has.  Is that condescending?  The tone brings it out in me.  And maybe I’m also a little jealous: I could do with having made a lot of money in my 20s on the road to disillusionment with power and sexism and privilege and surveillance.  It’s better – maybe – than going down that road without making a lot of money and getting to live in San Francisco.  If, in the end, I’m not quite sure what the point of her big questions is, it’s still a hell of a good story.  I’m waiting for the movie version on “the streaming app that produces original content and doesn’t release its data.”

    _____

    Sharrona Pearl (@SharronaPearl) is a historian and theorist of the body and face.  She has written many articles and two monographs: About Faces: Physiognomy in Nineteenth-Century Britain (Harvard University Press, 2010) and Face/On: Face Transplants and the Ethics of the Other (University of Chicago Press, 2017). She is Associate Professor of Medical Ethics at Drexel University.

  • Zachary Loeb — Burn It All (Review of Mullaney, Peters, Hicks and Philip, eds., Your Computer Is on Fire)

    a review of Thomas S. Mullaney, Benjamin Peters, Mar Hicks and Kavita Philip, eds., Your Computer Is on Fire (MIT Press, 2021)

    by Zachary Loeb

    ~

    It often feels as though contemporary discussions about computers have perfected the art of talking around, but not specifically about, computers. Almost every week there is a new story about Facebook’s malfeasance, but usually such stories say little about the actual technologies without which such conduct could not have happened. Stories proliferate about the unquenchable hunger for energy that cryptocurrency mining represents, but the computers eating up that power are usually deemed less interesting than the currency being mined. Debates continue about just how much AI can really accomplish and just how soon it will be able to accomplish even more, but the public conversation winds up conjuring images of gleaming terminators marching across a skull-strewn wasteland instead of rows of servers humming in an undisclosed location. From Zoom to dancing robots, from Amazon to the latest Apple Event, from misinformation campaigns to activist hashtags—we find ourselves constantly talking about computers, and yet seldom talking about computers.

    All of the aforementioned specifics are important to talk about. If anything, we need to be talking more about Facebook’s malfeasance, the energy consumption of cryptocurrencies, the hype versus the realities of AI, Zoom, dancing robots, Amazon, misinformation campaigns, and so forth. But we also need to go deeper. Case in point: though it was a very unpopular position to take for many years, it is now a fairly safe position to say that “Facebook is a problem”; however, it still remains a much less acceptable position to suggest that “computers are a problem.” At a moment in which it has become glaringly obvious that tech companies have politics, there still remains a common sentiment that computers are neutral. And thus such a view can comfortably disparage Bill Gates and Jeff Bezos and Sundar Pichai and Mark Zuckerberg for the ways in which they have warped the potential of computing, while still holding out hope that computing can be a wonderful emancipatory tool if it can just be put in better hands.

    But what if computers are themselves, at least part of, the problem? What if some of our present technological problems have their roots deep in the history of computing, and not just in the dorm room where Mark Zuckerberg first put together FaceSmash?

    These are the sorts of troubling and provocative questions with which the essential new book Your Computer Is on Fire engages. It is a volume that recognizes that when we talk about computers, we need to actually talk about computers. A vital intervention into contemporary discussions about technology, this book wastes no energy on carefully worded declarations of fealty to computers and the Internet; there’s a reason why the book is not titled Your Computer Might Be on Fire but Your Computer Is on Fire.

    The editors of the volume are quite upfront about its confrontational stance: Thomas Mullaney opens the book by declaring that “Humankind can no longer afford to be lulled into complacency by narratives of techno-utopianism or technoneutrality” (4). This is a point that Mullaney drives home as he notes that “the time for equivocation is over” before emphasizing that, despite its at moments woebegone tonality, the volume is not “crafted as a call of despair but as a call to arms” (8). While the book sets out to offer a robust critique of computers, Mar Hicks highlights that the editors and contributors of the book shall do this in a historically grounded way, which includes a vital awareness that “there are almost always red flags and warning signs before a disaster, if one cares to look” (14). Unfortunately, many of those who attempted to sound the alarm about the potential hazards of computing were either ignored or derided as technophobes. Where Mullaney had described the book as “a call to arms,” Hicks describes what sorts of actions this call may entail: “we have to support workers, vote for regulation, and protest (or support those protesting) widespread harms like racist violence” (23). And though the focus is on collective action, Hicks does not diminish the significance of individual ethical acts, noting powerfully (in words that may be particularly pointed at those who work for the big tech companies): “Don’t spend your life as a conscientious cog in a terribly broken system” (24).

    Your Computer Is on Fire begins like a political manifesto; as the volume proceeds the contributors maintain the sense of righteous fury. In addition to introductions and conclusions, the book is divided into three sections: “Nothing is Virtual,” wherein contributors cut through the airy talking points to bring ideas about computing back to the ground; “This is an Emergency,” which sounds the alarm on many of the currently unfolding crises in and around computing; and “Where Will the Fire Spread?,” which turns a prescient gaze towards trajectories to be mindful of in the swiftly approaching future. Hicks notes, “to shape the future, look to the past” (24), and this is a prompt that the contributors take up with gusto as they carefully demonstrate how the outlines of our high-tech society were drawn long before Google became a verb.

    Drawing attention to the physicality of the Cloud, Nathan Ensmenger begins the “Nothing is Virtual” section by working to resituate “the history of computing within the history of industrialization” (35). Arguing that “The Cloud is a Factory,” Ensmenger digs beneath the seeming immateriality of the Cloud metaphor to extricate the human labor, human agendas, and environmental costs that get elided when “the Cloud” gets bandied about. The role of the human worker hiding behind the high-tech curtain is further investigated by Sarah Roberts, who explores how many of the high-tech solutions that purport to use AI to fix everything rely on the labor of human beings sitting in front of computers. As Roberts evocatively describes it, the “solutionist disposition toward AI everywhere is aspirational at its core” (66), and this desire for easy technological solutions covers up challenging social realities. While the Internet is often hailed as an American invention, Benjamin Peters discusses the US ARPANET alongside the ultimately unsuccessful network attempts of the Soviet OGAS and Chile’s Cybersyn, in order to show how “every network history begins with a history of the wider world” (81), and to demonstrate that networks have not developed by “circumventing power hierarchies” but by embedding themselves into those hierarchies (88). Breaking through the emancipatory hype surrounding the Internet, Kavita Philip explores the ways in which the Internet materially and ideologically reifies colonial logics of dominance and control, demonstrating how “the infrastructural internet, and our cultural stories about it, are mutually constitutive” (110). Mitali Thakor brings the volume’s first part to a close with a consideration of how the digital age is “dominated by the feeling of paranoia” (120), by discussing the development and deployment of sophisticated surveillance technologies (in this case, for the detection of child pornography).

    “Electronic computing technology has long been an abstraction of political power into machine form” (137): these lines from Mar Hicks eloquently capture the leitmotif that plays throughout the chapters that make up the second part of the volume. Hicks’ comment comes from an exploration of the sexism that has long been “a feature, not a bug” (135) of the computing sector, with particular consideration of the ways in which sexist hiring and firing practices undermined the development of England’s computing sector. Further exploring how the sexism of today’s tech sector has roots in the development of the tech sector, Corinna Schlombs looks to the history of IBM to consider how that company suppressed efforts by workers to organize by framing the company as a family—albeit one wherein father still knew best. The biases built into voice recognition technologies (such as Siri) are delved into by Halcyon Lawrence, who draws attention to the way that these technologies are biased against those with accents, a reflection of the lack of diversity amongst those who design these technologies. In discussing robots, Safiya Umoja Noble explains how “Robots are the dreams of their designers, catering to the imaginaries we hold about who should do what in our societies” (202), and thus these robots reinscribe particular viewpoints and biases even as their creators claim they are creating robots for good. Shifting away from the flashiest gadgets of high-tech society, Andrea Stanton considers the cultural logics and biases embedded in word processing software that treats the demands of languages that are not written left to right as somehow aberrant. Considering how much of computer usage involves playing games, Noah Wardrip-Fruin argues that the limited set of video game logics keeps games from being about very much—a shooter is a shooter regardless of whether you are gunning down demons in hell or fanatics in a flooded ruin dense with metaphors.

    Oftentimes hiring more diverse candidates is hailed as the solution to the tech sector’s sexism and racism, but as Janet Abbate notes in the first chapter of the “Where Will the Fire Spread?” section, this approach generally attempts to force different groups to fit into Silicon Valley’s warped view of what attributes make for a good programmer. Abbate contends that equal representation will not be enough “until computer work is equally meaningful for groups who do not necessarily share the values and priorities that currently dominate Silicon Valley” (266). While computers do things to society, they also perform specific technical functions, and Ben Allen comments on source code to show the power that programmers have to insert nearly undetectable hacks into the systems they create. Returning to the question of code as empowerment, Sreela Sarkar discusses a skills training class held in Seelampur (near New Delhi), to show that “instead of equalizing disparities, IT-enabled globalization has created and further heightened divisions of class, caste, gender, religion, etc.” (308). Turning towards infrastructure, Paul Edwards considers how the speed with which platforms have developed to become infrastructure has been much swifter than the speed with which older infrastructural systems were developed, which he explores by highlighting three examples in various African contexts (FidoNet, M-Pesa, and Free Basics). And Thomas Mullaney closes out the third section with a consideration of the way that the QWERTY keyboard gave rise to pushback and creative solutions from those who sought to type in non-Latin scripts.

    Just as two of the editors began the book with a call to arms, so too the other two editors close the book with a similar rallying cry. In assessing the chapters that had come before, Kavita Philip emphasizes that the volume has chosen “complex, contradictory, contingent explanations over just-so stories” (364). The contributors, and editors, have worked with great care to make it clear that the current state of computers was not inevitable—that things currently are the way they are does not mean they had to be that way, or that they cannot be changed. Eschewing simplistic solutions, Philip notes that language, history, and politics truly matter to our conversations about computing, and that as we seek for the way ahead we must be cognizant of all of them. In the book’s final piece, Benjamin Peters sets the computer fire against the backdrop of anthropogenic climate change and the COVID-19 pandemic, noting the odd juxtaposition between the progress narratives that surround technology and the ways in which “the world of human suffering has never so clearly appeared on the brink of ruin” (378). Pushing back against a simple desire to turn things off, Peters notes that “we cannot return the unasked for gifts of new media and computing” (380). Though the book has clearly been about computers, truly wrestling with these matters forces us to reflect on what it is that we really talk about when we talk about computers, and it turns out that “the question of life becomes how do not I but we live now?” (380)

    It is a challenging question, and it provides a fitting end to a book that challenges many of the dominant public narratives surrounding computers. And though the book has emphasized repeatedly how important it is to really talk about computers, this final question powers down the computer to force us to look at our own reflection in the mirrored surface of the computer screen.

    Yes, the book is about computers, but more than that it is about what it has meant to live with these devices—and what it might mean to live differently with them in the future.

    *

    With the creation of Your Computer Is on Fire the editors (Hicks, Mullaney, Peters, and Philip) have achieved an impressive feat. The volume is timely, provocative, wonderfully researched, filled with devastating insights, and composed in such a way as to make the contents accessible to a broad audience. It might seem a bit hyperbolic to suggest that anyone who has used a computer in the last week should read this book, but anyone who has used a computer in the last week should read this book. Scholars will benefit from the richly researched analysis, students will enjoy the forthright tone of the chapters, and anyone who uses computers will come away from the book with a clearer sense of the way in which these discussions matter for them and the world in which they live.

    For what this book accomplishes so spectacularly is to make it clear that when we think about computers and society it isn’t sufficient to just think about Facebook or facial recognition software or computer skills courses—we need to actually think about computers. We need to think about the history of computers, we need to think about the material aspects of computers, we need to think about the (oft-unseen) human labor that surrounds computers, we need to think about the language we use to discuss computers, and we need to think about the political values embedded in these machines and the political moments out of which these machines emerged. And yet, even as we shift our gaze to look at computers more critically, the contributors to Your Computer Is on Fire continually remind the reader that when we are thinking about computers we need to be thinking about deeper questions than just those about machines, we need to be considering what kind of technological world we want to live in. And moreover we need to be thinking about who is included and who is excluded when the word “we” is tossed about casually.

    Your Computer Is on Fire is simultaneously a book that will make you think, and a good book to think with. In other words, it is precisely the type of volume that is so desperately needed right now.

    The book derives much of its power from the willingness on the parts of the contributors to write in a declarative style. In this book criticisms are not carefully couched behind three layers of praise for Silicon Valley and odes of affection for smartphones; rather, the contributors stand firm in declaring that there are real problems (with historical roots) and that we are not going to be able to address them by pledging fealty to the companies that have so consistently shown a disregard for the broader world. This tone results in too many wonderful turns of phrase and incendiary remarks to be able to list all of them here, but the broad discussion around computers would be greatly enhanced with more comments like Janet Abbate’s “We have Black Girls Code, but we don’t have ‘White Boys Collaborate’ or ‘White Boys Learn Respect.’ Why not, if we want to nurture the full set of skills needed in computing?” (263) While critics of technology often find themselves having to argue from a defensive position, Your Computer Is on Fire is a book that almost gleefully goes on the offense.

    It almost seems like a disservice to the breadth of contributions to the volume to try to sum up its core message in a few lines, or to attempt to neatly capture the key takeaways in a few sentences. Nevertheless, insofar as the book has a clear undergirding position, beyond the titular idea, it is the one eloquently captured by Mar Hicks thusly:

    High technology is often a screen for propping up idealistic progress narratives while simultaneously torpedoing meaningful social reform with subtle and systemic sexism, classism, and racism…The computer revolution was not a revolution in any true sense: it left social and political hierarchies untouched, at times even strengthening them and heightening inequalities. (152)

    And this is the matter with which each contributor wrestles, as they break apart the “idealistic progress narratives” to reveal the ways that computers have time and again strengthened the already existing power structures…even if many people get to enjoy new shiny gadgets along the way.

    Your Computer Is on Fire is a jarring assessment of the current state of our computer-dependent societies, and how they came to be the way they are; however, in considering this new book it is worth bearing in mind that it is not the first volume to try to capture the state of computers in a moment in time. That we find ourselves in the present position is, unfortunately, a testament to decades of unheeded warnings.

    One of the objectives that is taken up throughout Your Computer Is on Fire is to counter the techno-utopian ideology that never so much dies as shifts into the hands of some new would-be techno-savior wearing a crown of 1s and 0s. However, even as the mantle of techno-savior shifts from Mark Zuckerberg to Elon Musk, it seems that we may be in a moment when fewer people are willing to uncritically accept the idea that technological progress is synonymous with social progress. Though, if we are being frank, adoring faith in technology remains the dominant sentiment (at least in the US). Furthermore, this isn’t the first moment when a growing distrust and dissatisfaction with technological forces has risen, nor is this the first time that scholars have sought to speak out. Therefore, even as Your Computer Is on Fire provides fantastic accounts of the history of computing, it is worthwhile to consider where this new vital volume fits within the history of critiques of computing. Or, to frame this slightly differently: in what ways is the 21st century critique of computing different from the 20th century critique of computing?

    In 1979 the MIT Press published the edited volume The Computer Age: A Twenty Year View. Edited by Michael Dertouzos and Joel Moses, that book brought together a variety of influential figures from the early history of computing including J.C.R. Licklider, Herbert Simon, Marvin Minsky, and many others. The book was an overwhelmingly optimistic affair, and though the contributors anticipated that the mass uptake of computers would lead to some disruptions, they imagined that all of these changes would ultimately be for the best. Granted, the book was not without a critical voice. The computer scientist turned critic Joseph Weizenbaum was afforded a chapter in a quarantined “Critiques” section from which to cast doubts on the utopian hopes that had filled the rest of the volume. And though Weizenbaum’s criticisms were presented, the book’s introduction politely scoffed at his woebegone outlook, and Weizenbaum’s chapter was followed by not one but two barbed responses, which ensured that his critical voice was not given the last word. Any attempt to assess The Computer Age at this point will likely say as much about the person doing the assessing as about the volume itself, and yet it would take a real commitment to only seeing the positive sides of computers to deny that the volume’s disparaged critic was one of its most prescient contributors.

    If The Computer Age can be seen as a reflection of the state of discourse surrounding computers in 1979, then Your Computer Is on Fire is a blazing demonstration of how greatly those discussions have changed by 2021. This is not to suggest that the techno-utopian mindset that so infused The Computer Age no longer exists. Alas, far from it.

    As the contributors to Your Computer Is on Fire make clear repeatedly, much of the present discussion around computing is dominated by hype and hopes. And a consideration of those conversations in the second half of the twentieth century reveals that hype and hope were dominant forces then as well. Granted, for much of that period (arguably until the mid-1980s and not really taking off until the 1990s), computers remained technologies with which most people had relatively little direct interaction. The mammoth machines of the 1960s and 1970s were not all top-secret (though some certainly were), but when social critics warned about computers in the 50s, 60s, and 70s they were not describing machines that had become ubiquitous—even if they warned that those machines would eventually become so. Thus, Lewis Mumford warned in 1956 that:

    In creating the thinking machine, man has made the last step in submission to mechanization; and his final abdication before this product of his own ingenuity has given him a new object of worship: a cybernetic god. (Mumford, 173)

    It is somewhat understandable that his warning would be met with rolled eyes and impatient scoffs. For “the thinking machine” at that point remained isolated enough from most people’s daily lives that the idea that this was “a new object of worship” seemed almost absurd. Though he continued issuing dire predictions about computers, by 1970, when Mumford wrote of the development of “computer dominated society,” this warning could still be dismissed as absurd hyperbole. And when Mumford’s friend, the aforementioned Joseph Weizenbaum, laid out a blistering critique of computers and the “artificial intelligentsia” in 1976, those warnings were still somewhat muddled as the computer remained largely out of sight and out of mind for large parts of society. Of course, these critics recognized that this “cybernetic god” had not as of yet become the new dominant faith, but they issued such warnings out of a sense that this was the direction in which things were developing.

    Already by the 1980s it was apparent to many scholars and critics that, despite the hype and revolutionary lingo, computers were primarily retrenching existing power relations while elevating the authority of a variety of new companies. And this gave rise to heated debates about how (and if) these technologies could be reclaimed and repurposed—Donna Haraway’s classic Cyborg Manifesto emerged out of those debates. By the time of 1990’s “Neo-Luddite Manifesto,” wherein Chellis Glendinning pointed to “computer technologies” as one of the types of technologies the Neo-Luddites were calling to be dismantled, the computer was becoming less and less an abstraction and more and more a feature of many people’s daily work lives. Though there is not space here to fully develop this argument, it may well be that the 1990s represent the decade in which many people found themselves suddenly in a “computer dominated society.”  Indeed, though Y2K is unfortunately often remembered as something of a hoax today, delving back into what was written about that crisis as it was unfolding makes it clear that in many sectors Y2K was the moment when people were forced to fully reckon with how quickly and how deeply they had become highly reliant on complex computerized systems. And, of course, much of what we know about the history of computing in those decades of the twentieth century we owe to the phenomenal research that has been done by many of the scholars who have contributed chapters to Your Computer Is on Fire.

While Your Computer Is on Fire provides essential analyses of events from the twentieth century, as a critique it is very much a reflection of the twenty-first century. It is a volume that represents a moment in which critics are no longer warning “hey, watch out, or these computers might be on fire in the future” but in which critics can now confidently state “your computer is on fire.” In 1956 it could seem hyperbolic to suggest that computers would become “a new object of worship”; by 2021 such faith is on full display. In 1970 it was possible to warn of the threat of “computer dominated society”; by 2021 that “computer dominated society” has truly arrived. In the 1980s it could be argued that computers were reinforcing dominant power relations; in 2021 this is no longer a particularly controversial position. And perhaps most importantly, in 1990 it could still be suggested that computer technologies should be dismantled, but by 2021 the idea of dismantling these technologies that have become so interwoven in our daily lives seems dangerous, absurd, and unwanted. Your Computer Is on Fire is in many ways an acknowledgement that we are now living in the type of society about which many of the twentieth century’s technological critics warned. In the book’s conclusion, Benjamin Peters pushes back against “Luddite self-righteousness” to note that “I can opt out of social networks; many others cannot” (377), and the emergence of this moment, wherein the ability to “opt out” has itself become a privilege, is precisely the sort of danger about which so many of the last century’s critics were concerned.

    To look back at critiques of computers made throughout the twentieth century is in many ways a fairly depressing activity. For it reveals that many of those who were scorned as “doom mongers” had a fairly good sense of what computers would mean for the world. Certainly, some will continue to mock such figures for their humanism or borderline romanticism, but they were writing and living in a moment when the idea of living without a smartphone had not yet become unthinkable. As the contributors to this essential volume make clear, Your Computer Is on Fire, and yet too many of us still seem to believe that we are wearing asbestos gloves, and that if we suppress the flames of Facebook we will be able to safely warm our toes on our burning laptop.

    What Your Computer Is on Fire achieves so masterfully is to remind its readers that the wired up society in which they live was not inevitable, and what comes next is not inevitable either. And to remind them that if we are going to talk about what computers have wrought, we need to actually talk about computers. And yet the book is also a discomforting testament to a state of affairs wherein most of us simply do not have the option of swearing off computers. They fill our homes, they fill our societies, they fill our language, and they fill our imaginations. Thus, in dealing with this fire a first important step is to admit that there is a fire, and to stop absentmindedly pouring gasoline on everything. As Mar Hicks notes:

    Techno-optimist narratives surrounding high-technology and the public good—ones that assume technology is somehow inherently progressive—rely on historical fictions and blind spots that tend to overlook how large technological systems perpetuate structures of dominance and power already in place. (137)

    And as Kavita Philip describes:

    it is some combination of our addiction to the excitement of invention, with our enjoyment of individualized sophistications of a technological society, that has brought us to the brink of ruin even while illuminating our lives and enhancing the possibilities of collective agency. (365)

    Historically rich, provocatively written, engaging and engaged, Your Computer Is on Fire is a powerful reminder that when it is properly controlled fire can be useful, but when fire is allowed to rage out of control it turns everything it touches to ash. This book is not only a must read, but a must wrestle with, a must think with, and a must remember. After all, the “your” in the book’s title refers to you.

    Yes, you.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focusses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2o Review Digital Studies section.

    Back to the essay

    Works Cited

    • Lewis Mumford. The Transformations of Man. New York: Harper and Brothers, 1956.


  • Zachary Loeb — General Ludd in the Long Seventies (Review of Matt Tierney, Dismantlings)

    Zachary Loeb — General Ludd in the Long Seventies (Review of Matt Tierney, Dismantlings)

    a review of Matt Tierney, Dismantlings: Words Against Machines in the American Long Seventies (Cornell University Press, 2019)

    by Zachary Loeb

    ~

    The guy said, “If machinery
    makes you so happy
    go buy yourself
    a Happiness Machine.”
    Then he realized:
    They were trying to do
    exactly that.

    – Kenneth Burke, “Routine for a Stand-Up Comedian” (15)

A sledgehammer is a fairly versatile tool. You can use it to destroy things, you can use it to build things, and in some cases you can use it to destroy things so that you can build things. Granted, it remains a rather heavy and fairly blunt tool; it is not particularly well suited for fine detail work requiring a high degree of precision. Which is, likely, one of the reasons why those who are famed for wielding sledgehammers often wind up being characterized as just as blunt and unsubtle as the heavy instruments they swing.

And, perhaps, no group has been more closely associated with sledgehammers than the Luddites: those early 19th century skilled craft workers who took up arms to defend their communities and their livelihoods from the “obnoxious machines” being introduced by their employers. Though the tactic of machine breaking as a form of protest has a lengthy history that predates (and post-dates) the Luddites, it is a tactic that has come to be bound up with the name of the followers of the mysterious General Ludd. Despite the efforts of writers and thinkers to rescue the Luddites’ legacy from “the enormous condescension of posterity” (Thompson, 12), the term “Luddite” today generally has less to do with a specific historical group and has instead largely become an epithet to be hurled at anyone who dares question the gospel of technological progress. Yet, as the second decade of the twenty-first century comes to a close, it may well be that “Luddite” has lost some of its insulting sting against the backdrop of metastasizing tech giants, growing mountains of toxic e-waste, and an ecological crisis that owes much to an unquestioned faith in the benefits of technology.

    General Ludd may well get the last laugh.

That the Luddites have lingered so fiercely in the public imagination is a testament to the fact that the Luddites, and the actions for which they are remembered, are good to think with. Insofar as one can talk about Luddism, it represents less a coherent body of thought created by the Luddites themselves, and more the attempt by later scholars, critics, artists, and activists to try to make sense of what is usable from the Luddite legacy. And it is this effort to think through and think with that Matt Tierney explores in his phenomenal book Dismantlings: Words Against Machines in the American Long Seventies. While the focus of Dismantlings, as its title makes clear, is on the “long seventies” (the years from 1965 to 1980), the book represents an important intervention in current discussions and debates around the impacts of technology on society. Just as the various figures Tierney discusses turned their thinking (to varying extents) back to the Luddites, so too, the book argues, is it worth revisiting the thinking and writing on the matter from the long seventies. This is not a book on the historical Luddites; instead it is a vital contribution to attempts to theorize what Luddism might mean, and how we are to confront the various technological challenges facing us today.

Largely remembered for occurrences including the Vietnam War, the Civil Rights movement, the space race, and a general tone of social upheaval, the long seventies also represented a period when technological questions were gaining prominence, with thinkers such as Marshall McLuhan, Buckminster Fuller, Norbert Wiener, and Stewart Brand all putting forth visions of the way that the new consumer technologies would remake society: creating “global villages” or giving rise to a perception of all of humanity as passengers on “spaceship earth.” Yet they were hardly the only figures contemplating technology in that period, and many of the other visions that emerged aimed to directly challenge some of the assumptions and optimism of the likes of McLuhan and Fuller. In the long seventies, the question of what would come next was closely entwined with an evaluation of what had come before; indeed, “the breaking of retrogressive notions of technology coupled with the breaking of retrogressive technologies…undergoes a period of vital activity during the Long Seventies in the poems, fictions, and activist speech of what was then called cyberculture” (15). Granted, this was a “breaking” that generally had more to do with theorizing than with actual machine smashing. It could more accurately be seen as “dismantling”: the careful taking apart so that the functioning can be more fully understood and evaluated. Yet it is a thinking that, importantly, occurred against a recognition that the world was, as Norbert Wiener observed, “the world of Belsen and Hiroshima” (8). To make sense of the resistant narratives towards technology in the long seventies it is necessary to engage critically with the terminology of the period, and thus Tierney’s book offers a sort of conceptual “counterlexicon” to do just that.

As anyone who knows about the historical Luddites can attest, they did not hate technology (as such). Rather, they were opposed to particular machines being used in a particular way at a particular place and time. And it is a similar attitude towards Luddism (not as an opposition to all technology, but as an understanding that technology has social implications) that Tierney discusses in the long seventies. Luddism here comes to represent “a gradual relinquishing of machines whose continued use would contravene ethical principles” (30), and this attitude is found in Langdon Winner’s concept of “epistemological Luddism” (as discussed in his book Autonomous Technology) and in the poetry of Audre Lorde. While Lorde’s line “for the master’s tools will never dismantle the master’s house” continues to be well known by activists, the question of “tools” can also be engaged with quite literally. Approached with a mind towards Luddism, Lorde’s remarks can be seen as indicating that it is not only “the master’s house” that must be dismantled but “the master’s tools” as well – and Lorde’s writing suggests poetry as a key tool for the dismantler. The version of Luddism that emerges in the late seventies represents a “sort of relinquishing”; it “is not about machine-smashing at all” (47) but instead entails the careful work of examining machines to determine which are worth keeping.

The attitudes towards technology of the long seventies were closely entwined with a sense of the world as made seemingly smaller and more connected thanks to the new technologies of the era. A certain strand of thinking in this period, exemplified by McLuhan’s “global village” or Fuller’s “Spaceship Earth,” achieved great popular success even as reactionary racist and nativist notions lurked just below the surface of the seeming technological optimism of those concepts. Contrary to the “fatalistic acceptance of new technological constraints on life” (48), works by science fiction authors like Ursula Le Guin and Samuel R. Delany presented a notion of “communion, as a collaborative process of making do” (51). Works like The Dispossessed (Le Guin) and Triton (Delany) presented readers with visions, and questions, of “real coexistence…not the passage but the sharing of a moment” (63). In contrast to the “technological Messianism” (74) of the likes of Fuller and McLuhan, the “communion” based works by the likes of Le Guin and Delany focused less on exuberance for the machines themselves and sought instead to critically engage with what types of coexistence such machines would and could genuinely facilitate.

Coined by Alice Mary Hilton in 1963, the idea of “cyberculture” did not originally connote the sort of blissed-out techno-optimism that the term evokes today. Rather, it was meant to be “an alternative to the global village and the one-town world, and an insistence on collective action in a world not only of Belsen and Hiroshima but also of ongoing struggles toward decolonization, sexual and gender autonomy, and racial justice” (12). Thus, “cyberculture” (and cybernetics more generally) may represent one of the alternative pathways along which technological society could have developed. What “cyberculture” represented was not an exuberant embrace of all things “cyber,” but an attempt to name and thereby open a space for protest, not “against thinking machines” but one which would “interrupt the advancing consensus that such machines had shrunk the globe” (81). These concepts achieved further maturation in the Ad Hoc Committee’s “Triple Revolution Manifesto” (from 1964), which sought to link an emancipatory political program to advances in new technology, linking “cybernation to a decrease in capitalist, racist, and militarist violence” (85). Seizing upon an earnest belief that technological ethics could guide new technological developments towards just ends, “cyberculture” also imagined that such tools could supplant scarcity with abundance.

    What “cyberculture” based thinking consists of is a sort of theoretical imagining, which is why a document like a manifesto represents such an excellent example of “cyberculture” in practice. It is a sort of “distortion” that recognizes how “the fates of militarism, racism, and cybernation have only ever been knotted together” and “thus calls for imaginative practices, whether literary or activist, for cutting through the knot” (95). This is the sort of theorizing that can be seen in Martin Luther King, Jr.’s commentary on how science and technology had made of “this world a neighborhood” without yet making “of it a brotherhood” (96). The technological ethics of the advocates of “cyberculture” could be the tools with which to make “it a brotherhood” without discarding all of the tools that had made it first “a neighborhood.” The risks and opportunities of new technological forms were also commented upon in works like Shulamith Firestone’s Dialectic of Sex wherein she argued that women needed to seize and guide these technologies. Blending analysis of what is with a program for what could be, Firestone’s work shows “that if other technologies are possible, then other social practices, even practices that are rarely considered in relation to new technology, may be possible too” (105).

For some, in the long seventies, challenging machinery still took on a destructive form, though this often entailed a sort of “revolutionary suicide” which represented an attempt to “prevent the becoming-machine of subjugated human bodies and selves” (113): a refusal to become a machine oneself, and a refusal to allow oneself to become fodder for the machine. Such a self-destructive act flows from the Pynchon-esque tragic recognition of a growing consensus “that nothing can be done to oppose” the new machines (122). Such woebegone dejection is in contrast to other attitudes that sought not only to imagine but also to construct new tools that would put people and community first. John Mohawk, of the Haudenosaunee Confederacy of Mohawk, Oneida, Onondaga, Cayuga, and Seneca people, gave voice to this in his theorizing of “liberation technology.” As Mohawk explained at a UN session, “Decentralized technologies that meet the needs of the people those technologies serve will necessarily give life to a different kind of political structure, and it is safe to predict that the political structure that results will be anticolonial in nature” (127). The search for such alternative technologies suggested a framework in which what was needed was “machines to suit the community, or else no machines at all” (129) – a position that countered the technological abundance hoped for by “cyberculture” with an appeal for technologies of subsistence. After all, this was the world of Belsen and Hiroshima, “a world of new and barely understood technologies” (149), and in such a world, “where the very skin of the planet is a ledger of technological misapplications” (154), it is wise to proceed with caution and humility.

The long seventies present a fascinating kaleidoscope of visions of technologies: how to live with them, how to select them, and how to think about them. What makes the long seventies so worthy of revisiting is that they and the present moment are both “seized with a critical discourse about technology, and by a popular social upheaval in which new social movements emerge, grow, and proliferate” (5). Luddism may be routinely held up as a foolish reaction, but “by breaking apart certain machines, we can learn to use them better, or never use them again. By dissecting certain technocentric cultural logics, we can likewise challenge or reject them” (162). That the Luddites are so constantly vilified may ultimately be a signal of their dangerous power, insofar as they show that people need not passively sit and accept everything that is sold to them as technological progress. Dismantling represents a politics “not as machine hating, but as a way to protect life against a large-scale regimentation and policing of security, labor, time, and community” (166).

    To engage in the fraught work of technological critique is to open oneself up to being labeled a Luddite (with the term being hurled as an epithet), to accusations of complicity in the very systems you are critiquing, and to a realization that many people simply don’t want to listen to their smartphone habits being criticized. Yet the various conceptual frameworks that can be derived from a consideration of “words against machines in the American long seventies” provide “tactics that might be repeated or emulated, if nostalgia and cynicism do not bar the way” (172). Such concepts present a method of pushing back at the “yes, but” logic which riddles so many discussions of technology today – conversations in which the downsides are acknowledged (the “yes”), yet where the counter is always offered that perhaps there’s still a way to use those technologies correctly (the “but”).

    In contrast to the comfortable rut of “yes, but” Tierney’s book argues for dismantling, wherein “to dismantle is to set aside the dithering of yes, but and to try instead the hard work of critique” (175).

    Running through many of the thinkers, writers, and activists detailed in Dismantlings is a genuine attempt to come to terms with the ways in which new technological forces are changing society. Though many of these individuals responded to such changes not by picking up hammers, but by turning to writing, this activity was always couched in a sense that the shifts afoot truly mattered. Agitated by the roaring clangor of the machines of their day, these figures from the long seventies were looking at the machines of their moment in order to consider what would need to be done to construct a different future. And they did this while looking askance at the more popular techno-utopian visions of the future being promulgated in their day. Writing of the historic Luddites, the historian David Noble commented that, “the Luddites were perhaps the last people in the West to perceive technology in the present tense and to act upon that perception” (Noble, 7), and it may be tempting to suggest that the various figures cataloged in Dismantlings were too focused on the future to have acted upon technology in their present. Nevertheless, as Tierney notes, “the present does not precede the future; rather the future (like its past) distorts and neighbors the present” (173) – the Luddites may have acted in the present, but their eyes were also on the future. It is worth remembering that we do not make sense of the technologies around us solely by what they mean now, but by what we think they will mean for the future.

While Dismantlings provides a “counterlexicon” drawn from the writing/thinking/acting of a range of individuals in the long seventies, there is something rather tragic about reading these thoughts two decades into the twenty-first century. After all, readers of Dismantlings find themselves in what would have been the future to these long seventies thinkers. And, to be blunt, the world of today seems more in line with those thinkers’ fears for the future than with their hopes. An “epistemological Luddism” has not been used to carefully evaluate which tools to keep and which to discard, “communion” has not become a guiding principle, and “cyberculture” has drifted away from Hilton’s initial meaning to become a stand-in for a sort of uncritical techno-utopianism. The “master’s tools” have expanded to encompass ever more powerful tools, and the “master’s house” appears sturdier than ever – worse still, many of us may have become so enamored of some of “the master’s tools” that we have started to entertain delusions that these are actually our tools. To a certain extent, Dismantlings stands as a reminder of a range of individuals who tried to warn us that we would wind up in the mess in which we find ourselves. Those who are equipped with such powers of perception are often mocked and derided in their own time, but looking back at them with hindsight one can get a discomforting sense of just how prescient they truly were.

    Matt Tierney’s Dismantlings: Words Against Machines in the American Long Seventies is a remarkable book. It is also a difficult book. Difficult not because of impenetrable theoretical prose (the writing is clear and crisp), but because it is always challenging to go back and confront the warnings that were ignored. At a moment when headlines are filled with sordid tales of the malfeasance of the tech behemoths, and increasingly terrifying news of the state of the planet, it is both reassuring and infuriating to recognize that it did not have to be this way. True, these long seventies figures did not specifically warn about Facebook, and climate change was not the term they used to speak of environmental degradation – but it’s doubtful that many of these figures would be particularly surprised by either occurrence.

As a contribution to scholarship, Dismantlings represents a much-needed addition to the literature on the long seventies – particularly the literature that considers technology in that period. While much of the present literature (much of it excellent) dealing with those years has tended to focus on the hippies who fell in love with their computers, Tierney’s book is a reminder of those who never composed poems of praise for their machines. After all, not everyone believed that the computer would be an emancipatory technology. This book brings together a wide assortment of figures and draws useful connections between them that will hopefully rescue many a name from obscurity. And even those names that can hardly be called obscure appear in a new light when viewed through the lenses that Tierney develops in this book. While readers may be familiar with names like Lorde, Le Guin, Delany, and Pynchon, Tierney makes it clear that there is much to be gained by reading Hilton, Mohawk, and Firestone, and by revisiting the “Triple Revolution Manifesto.”

Tierney also offers a vital intervention into ongoing discussions over the meaning of Luddism. While it may be fair to say that such discussions are occurring amongst a rather small group of people, it is a passionate debate nevertheless. Tierney avoids re-litigating the history of the original Luddites, and his timeline cuts off before the emergence of the Neo-Luddites, but his book provides valuable insight into the transformations the idea of Luddism went through in the long seventies. Granted, Luddism does not always appear to be a term that was being embraced by the figures in Tierney’s history. Certainly, Winner developed the concept of “epistemological Luddism,” and Pynchon is still remembered for his “Is It O.K. to Be a Luddite?” op-ed, but many of those who spoke about dismantling did not don the mask, or pick up the hammer, of General Ludd. Thus, this book is a clear attempt not to restate others’ views on Luddism, but to freshly theorize the idea. Drawing on his long seventies sources, Tierney writes that:

    Luddism is not the destruction of all machines. And neither is it the hatred of machines as such. Like cyberculture, it is another word for dismantling. Luddism is the performative breaking of machines that limit species expression and impede planetary survival. (13)

    This is a robust and loaded definition of Luddism. While it clearly moves Luddism towards a practice instead of simply a descriptor for particular historical actors, it also presents Luddism as a constructive (as opposed to destructive) process. There are several aspects of Tierney’s definition that deserve particular attention. First, by also evoking “cyberculture” (referring to Hilton’s ethically grounded notion when she coined the term), Tierney demonstrates that Luddism is not the only word or tactic for dismantling. Second, by evoking “the performative breaking,” Tierney moves Luddism away from the blunt force of hammers and towards the more difficult work of critical evaluation. Lastly, by linking Luddism to “species expression and…planetary survival,” Tierney highlights that even if this Luddism is not “the hatred of machines as such” it still entails the recognition that there are some machines that should be hated – and that should be taken apart. It’s the sort of message that you can imagine many people getting behind, even as one can anticipate the choruses of “yes, but” that would be sure to greet this.

    Granted, even though Tierney considers a fair number of manifestos of a revolutionary sort, Dismantlings is not a new Luddite manifesto (though it might be a Luddite lexicon). While Tierney writes of the various figures he analyzes with empathy and affection, he also writes with a certain weariness. After all, as was noted earlier, we are currently living in the world about which these critics tried to warn us. And therefore Tierney can note, “if no political overturning followed the literary politics of cyberculture and Luddism in their own moment, then certainly none will follow them now” (25). Nevertheless, Tierney couches these dour comments in the observation that, “even as a revolution fails, its failure fuels common feeling without which subsequent revolutions cannot succeed” (25). At the very least the assorted thinkers and works described in Dismantlings provide a rich resource to those in the present who are concerned about “species expression” and “planetary survival.” Indeed, those advocating to break up the tech companies or pushing for the Green New Deal can learn a great deal by revisiting the works discussed in Dismantlings.

Nevertheless, it feels as though there are some key characters missing from Dismantlings. To be clear, this point is not meant to detract from Tierney’s excellent and worthwhile book. Furthermore, it must be noted that devotees of particular theorists and social critics tend to have a strong “why isn’t [the theorist/social critic I am devoted to] discussed more in here!?” reaction to works. Still, there were certain figures who seemed to be oddly missing from Dismantlings. Reflecting on the types of machines against which figures in the long seventies were reacting, Tierney writes that “the war machine, the industrial machine, the computer, and the machines of state are all connected” (4). And it was the dangerous connection of all of these that the social critic Lewis Mumford sought to describe in his theorizing of “the megamachine” – theorizing which he largely did in his two-volume Myth of the Machine (which was published in the long seventies). Though Mumford’s idea of “technic” eras is briefly mentioned early in Dismantlings, his broader thinking that touches directly on the core areas of the book is not remarked on. Several figures who were heavily influenced by Mumford’s work appear in Dismantlings (notably Bookchin and Roszak), and Mumford’s thought could have certainly bolstered some of the book’s arguments. Mumford, after all, saw himself as a bit of an anti-McLuhan – and in evaluating thinkers who were concerned with what technology meant for “species expression” and “planetary survival” Mumford deserves more attention. Given the overall thrust of Dismantlings it also might have been interesting to see Erich Fromm’s The Revolution of Hope: Toward a Humanized Technology and Ivan Illich’s Tools for Conviviality discussed. Granted, these comments are not meant as attacks on Tierney’s excellent book – they are simply the observations of an avowed Mumford partisan.

    To fully appreciate why the thoughts from the long seventies still matter today it may be useful to consider a line from one of Mumford’s early works. As Mumford wrote, in 1931, “every generation revolts against its fathers and makes friends with its grandfathers” (Mumford, 1). To a certain extent, Dismantlings is an argument for those currently invested in debates around technology to revisit “and make friends” with earlier generations of critics. There is much to be gained from such a move. Notable here is a shift in an evaluation of dangers. Throughout Dismantlings Tierney returns frequently to Wiener’s line that “this is the world of Belsen and Hiroshima” – and without meaning to be crass this is an understanding of the world that has somewhat receded into the past as the memory of those events becomes enshrined in history books. Yet for the likes of Wiener and many of the other individuals discussed in Dismantlings, “Belsen and Hiroshima” were not abstractions or distant memories – they were not the crimes that could be consigned to the past. Rather they were bleak reminders of the depths to which humanity could sink, and the way in which science and technology could act as a weight to drag humanity even deeper. Today’s world is the world of climate change, border walls, and surveillance capitalism – but it is still “the world of Belsen and Hiroshima.”

    There is much that needs to be dismantled, and not much time in which to do that work.

    The lessons from the long seventies are those that we are still struggling to reckon with today, including the recognition that in order to fully make sense of the machines around us it may be necessary to dismantle many of them. Of course, “not everything should be dismantled, but many things should be and some things must be, even if we don’t know where to begin” (163).

    Tierney’s book does not provide an easy answer, but it does show where we should begin.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay

    _____

    Works Cited

    • Lewis Mumford. The Brown Decades. New York: Dover Books, 1971.
    • David F. Noble. Progress Without People. Toronto: Between the Lines, 1995.
    • E.P. Thompson. The Making of the English Working Class. New York: Vintage Books, 1966.
  • Zachary Loeb — Flamethrowers and Fire Extinguishers (Review of Jeff Orlowski, dir., The Social Dilemma)


    a review of Jeff Orlowski, dir., The Social Dilemma (Netflix/Exposure Labs/Argent Pictures, 2020)

    by Zachary Loeb

    ~

    The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But, in fact, there are actors!

    – Joseph Weizenbaum (1976)

    Why did you last look at your smartphone? Did you need to check the time? Was picking it up a conscious decision driven by the need to do something very particular, or were you just bored? Did you turn to your phone because its buzzing and ringing prompted you to pay attention to it? Regardless of the particular reasons, do you sometimes find yourself thinking that you are staring at your phone (or other computerized screens) more often than you truly want? And do you ever feel, even if you dare not speak this suspicion aloud, that your gadgets are manipulating you?

    The good news is that you aren’t just being paranoid, your gadgets were designed in such a way as to keep you constantly engaging with them. The bad news is that you aren’t just being paranoid, your gadgets were designed in such a way as to keep you constantly engaging with them. What’s more, on the bad news front, these devices (and the platforms they run) are constantly sucking up information on you and are now pushing and prodding you down particular paths. Furthermore, alas more bad news, these gadgets and platforms are not only wreaking havoc on your attention span they are also undermining the stability of your society. Nevertheless, even though there is ample cause to worry, the new film The Social Dilemma ultimately has good news for you: a collection of former tech-insiders is starting to speak out! Sure, many of these individuals are the exact people responsible for building the platforms that are currently causing so much havoc—but they meant well, they’re very sorry, and (did you hear?) they meant well.

    Directed by Jeff Orlowski, and released to Netflix in early September 2020, The Social Dilemma is a docudrama that claims to provide an unsparing portrait of what social media platforms have wrought. While the film is made up of a hodgepodge of elements, at the core of the work are a series of interviews with Silicon Valley alumni who are concerned with the direction in which their former companies are pushing the world. Most notable amongst these, and the film’s central character to the extent it has one, is Tristan Harris (formerly a design ethicist at Google, and one of the cofounders of The Center for Humane Technology), who is not only repeatedly interviewed but is also shown testifying before the Senate and delivering a TED-style address to a room filled with tech luminaries. This cast of remorseful insiders is bolstered by a smattering of academics, and non-profit leaders, who provide some additional context and theoretical heft to the insiders’ recollections. And beyond these interviews the film incorporates a fictional quasi-narrative element depicting the members of a family (particularly its three teenage children) as they navigate their Internet-addled world—with this narrative providing the film an opportunity to strikingly dramatize how social media “works.”

    The Social Dilemma makes some important points about the way that social media works, and the insiders interviewed in the film bring a noteworthy perspective. Yet beyond the sad eyes, disturbing animations, and ominous music The Social Dilemma is a piece of manipulative filmmaking on par with the social media platforms it critiques. While presenting itself as a clear-eyed exposé of Silicon Valley, the film is ultimately a redemption tour for a gaggle of supposedly reformed techies wrapped in an account that is so desperate to appeal to “both sides” that it is unwilling to speak hard truths.

    The film warns that the social media companies are not your friends, and that is certainly true, but The Social Dilemma is not your friend either.

    The Social Dilemma

    As the film begins the insiders introduce themselves, naming the companies where they had worked, and identifying some of the particular elements (such as the “like” button) with which they were involved. Their introductions are peppered with expressions of concern intermingled with earnest comments about how “Nobody, I deeply believe, ever intended any of these consequences,” and that “There’s no one bad guy.” As the film transitions to Tristan Harris rehearsing for the talk that will feature later in the film, he comments that “there’s a problem happening in the tech industry, and it doesn’t have a name.” After recounting his personal awakening, whilst working at Google, and his attempt to spark a serious debate about these issues with his coworkers, the film finds “a name” for the “problem” Harris had alluded to: “surveillance capitalism.” The thinker who coined that term, Shoshana Zuboff, appears to discuss this concept which captures the way in which Silicon Valley thrives not off of users’ labor but off of every detail that can be sucked up about those users and then sold off to advertisers.

    After being named, “surveillance capitalism” hovers in the explanatory background as the film considers how social media companies constantly pursue three goals: engagement (to keep you coming back), growth (to get you to bring in more users), and advertising (to get better at putting the right ad in front of your eyes, which is how the platforms make money). The algorithms behind these platforms are constantly being tweaked through A/B testing, with every small improvement being focused around keeping users more engaged. Numerous problems emerge: designed to be addictive, these platforms and devices claw at users’ attention; teenagers (especially young ones) struggle as their sense of self-worth becomes tied to “likes;” misinformation spreads rapidly in an information ecosystem wherein the incendiary gets more attention than the true; and the slow processes of democracy struggle to keep up with the speed of technology. Though the concerns are grave, and the interviewees are clearly concerned, the tone is still one of hopefulness; the problem here is not really social media, but “surveillance capitalism,” and if “surveillance capitalism” can be thwarted then the true potential of social media can be attained. And the people leading that charge against “surveillance capitalism”? Why, none other than the reformed insiders in the film.

    While the bulk of the film consists of interviews, and news clips, the film is periodically interrupted by a narrative in which a family with three teenage children is shown. The Mother (Barbara Gehring) and Step-Father (Chris Grundy) are concerned with their children’s social media usage, even as they are glued to their own devices. As for the children: the oldest, Cassandra (Kara Hayward), is presented as skeptical towards social media; the youngest, Isla (Sophia Hammons), is eager for online popularity; and the middle child, Ben (Skyler Gisondo), eventually falls down the rabbit hole of recommended conspiratorial content. As the insiders, and academics, talk about the various dangers of social media the film shifts to the narrative to dramatize these moments – thus a discussion of social media’s impact on young teenagers, particularly girls, cuts to Isla being distraught after an insulting comment is added to one of the images she uploads. Cassandra (that name choice can’t be a coincidence) is presented as most in line with the general message of the film; the character refers to Jaron Lanier as a “genius” and in another sequence is shown reading Zuboff’s The Age of Surveillance Capitalism. Yet the member of the family the film dwells on the most is almost certainly Ben. For the purposes of dramatizing how an algorithm works, the film repeatedly returns to a creepy depiction of the Advertising, Engagement, and Growth AIs (all played by Vincent Kartheiser) as they scheme to get Ben to stay glued to his phone. Beyond the screens, the world in the narrative is being rocked by a strange protest movement calling itself “The Extreme Center” – whose argument seems to be that both sides can’t be trusted – and Ben eventually gets wrapped up in their message. The family’s narrative concludes with Ben and Cassandra getting arrested at a raucous rally held by “The Extreme Center,” sitting handcuffed on the ground and wondering how it is that this could have happened.

    To the extent that The Social Dilemma builds towards a conclusion, it is the speech that Harris gives (before an audience that includes many of the other interviewees in the film). And in that speech, and the other comments made around it, the point that is emphasized is that Silicon Valley must get away from “surveillance capitalism.” It must embrace “humane technology” that seeks to empower users, not entangle them. Emphasizing that, despite how things have turned out, “I don’t think these guys set out to be evil,” the various insiders double-down on their belief in high-tech’s liberatory potential. Contrasting rather unflattering imagery of Mark Zuckerberg testifying (without genuinely calling him out) with images of Steve Jobs in his iconic turtleneck, the film claims “the idea of humane technology, that’s where Silicon Valley got its start.” And before the credits roll, Harris seems to speak for his fellow insiders as he notes “we built these things, and we have a responsibility to change it.” For those who found the film unsettling, and who are confused by exactly what they are meant to do if they are not part of Harris’s “we,” the film offers some straightforward advice. Drawing on their own digital habits, the insiders recommend: turning off notifications, never watching a recommended video, opting for a less-invasive search engine, trying to escape your content bubble, keeping your devices out of your bedroom, and being a critical consumer of information.

    It is a disturbing film, and it is constructed so as to unsettle the viewer, but it still ends on a hopeful note: reform is possible, and the people in this film are leading that charge. The problem is not social media as such, but the ways in which “surveillance capitalism” has thwarted what social media could really be. If, after watching The Social Dilemma, you feel concerned about what “surveillance capitalism” has done to social media (and you feel prepared to make some tweaks in your social media use) but ultimately trust that Silicon Valley insiders are on the case—then the film has succeeded in its mission. After all, the film may be telling you to turn off Facebook notifications, but it doesn’t recommend deleting your account.

    Yet one of the points the film makes is that you should not accept the information that social media presents to you at face value. And in the same spirit, you should not accept the comments made by oh-so-remorseful Silicon Valley insiders at face value either. To be absolutely clear: we should be concerned about the impacts of social media, we need to work to rein in the power of these tech companies, we need to be willing to have the difficult discussion about what kind of society we want to live in…but we should not believe that the people who got us into this mess—who lacked the foresight to see the possible downsides in what they were building—will get us out of this mess. If these insiders genuinely did not see the possible downsides of what they were building, then they are fools who should not be trusted. And if these insiders did see the possible downsides, continued building these things anyway, and are now pretending that they did not see the downsides, then they are liars who definitely should not be trusted.

    It’s true, arsonists know a lot about setting fires, and a reformed arsonist might be able to give you some useful fire safety tips—but they are still arsonists.

    There is much to be said about The Social Dilemma. Indeed, anyone who cares about these issues (unfortunately) needs to engage with The Social Dilemma if for no other reason than the fact that this film will be widely watched, and will thus set much of the ground on which these discussions take place. Therefore, it is important to dissect certain elements of the film. There is a lot to explore in The Social Dilemma—a book or journal issue could easily be published in which the docudrama is cut into five-minute segments, with academics and activists each assigned one segment to comment on. While there is not the space here to offer a frame-by-frame analysis of the entire film, there are nevertheless a few key segments that deserve to be considered. Especially because these key moments capture many of the film’s larger problems.

    “when bicycles showed up”

    A moment in The Social Dilemma that perfectly, if unintentionally, sums up many of the major flaws with the film occurs when Tristan Harris opines on the history of bicycles. There are several problems in these comments, but taken together these lines provide you with almost everything you need to know about the film. As Harris puts it:

    No one got upset when bicycles showed up. Right? Like, if everyone’s starting to go around on bicycles, no one said, ‘Oh, my God, we’ve just ruined society. [chuckles] Like, bicycles are affecting people. They’re pulling people away from their kids. They’re ruining the fabric of democracy. People can’t tell what’s true.’ Like we never said any of that stuff about a bicycle.

    Here’s the problem: Harris’s comments about bicycles are wrong.

    They are simply historically inaccurate. Some basic research into the history of bicycles that looks at the ways that people reacted when they were introduced would reveal that many people were in fact quite “upset when bicycles showed up.” People absolutely were concerned that bicycles were “affecting people,” and there were certainly some who were anxious about what these new technologies meant for “the fabric of democracy.” Granted, that there were such adverse reactions to the introduction of bicycles should not be seen as particularly surprising, because even a fairly surface-level reading of the history of technology reveals that when new technologies are introduced they tend to be met not only with excitement, but also with dread.

    Yet, what makes Harris’s point so interesting is not just that he is wrong, but that he is so confident while being so wrong. Smiling before the camera, in what is obviously supposed to be a humorous moment, Harris makes a point about bicycles that is surely one that will stick with many viewers—and what he is really revealing is that he needs to take some history classes (or at least do some reading). It is genuinely rather remarkable that this sequence made it into the final cut of the film. This was clearly an expensive production, but they couldn’t have hired a graduate student to watch the film and point out “hey, you should really cut this part about bicycles, it’s wrong”? It is hard to put much stock in Harris, and friends, as emissaries of technological truth when they can’t be bothered to do basic research.

    That Harris speaks so assuredly about something which he is so wrong about gets at one of the central problems with the reformed insiders of The Social Dilemma. Though these are clearly intelligent people (lots of emphasis is placed on the fancy schools they attended), they know considerably less than they would like the viewers to believe. Of course, one of the ways that they get around this is by confidently pretending they know what they’re talking about, which manifests itself by making grandiose claims about things like bicycles that just don’t hold up. The point is not to mock Harris for this mistake (though it really is extraordinary that the segment did not get cut), but to make the following point: if Harris, and his friends, had known a bit more about the history of technology, and perhaps if they had a bit more humility about what they don’t know, perhaps they would not have gotten all of us into this mess.

    A point that is made by many of the former insiders interviewed for the film is that they didn’t know what the impacts would be. Over and over again we hear some variation of “we meant well” or “we really thought we were doing something great.” It is easy to take such comments as expressions of remorse, but it is more important to see such comments as confessions of that dangerous mixture of hubris and historical/social ignorance that is so common in Silicon Valley. Or, to put it slightly differently, these insiders really needed to take some more courses in the humanities. You know how you could have known that technologies often have unforeseen consequences? Study the history of technology. You know how you could have known that new media technologies have jarring political implications? Read some scholarship from media studies. A point that comes up over and over again in such scholarly work, particularly works that focus on the American context, is that optimism and enthusiasm for new technology often keeps people (including inventors) from seeing the fairly obvious risks—and all of these woebegone insiders could have known that…if they had only been willing to do the reading. Alas, as anyone who has spent time in a classroom knows, a time honored way of covering up for the fact that you haven’t done the reading is just to speak very confidently and hope that your confidence will successfully distract from the fact that you didn’t do the reading.

    It would be an exaggeration to claim “all of these problems could have been prevented if these people had just studied history!” And yet, these insiders (and society at large) would likely be better able to make sense of these various technological problems if more people had an understanding of that history. At the very least, such historical knowledge can provide warnings about how societies often struggle to adjust to new technologies, can teach how technological progress and social progress are not synonymous, can demonstrate how technologies have a nasty habit of biting back, and can make clear the many ways in which the initial liberatory hopes that are attached to a technology tend to fade as it becomes clear that the new technology has largely reinscribed a fairly conservative status quo.

    At the very least, knowing a bit more about the history of technology can keep you from embarrassing yourself by confidently claiming that “we never said any of that stuff about a bicycle.”

    “to destabilize”

    While The Social Dilemma expresses concern over how digital technologies impact a person’s body, the film is even more concerned about the way these technologies impact the body politic. A worry that is captured by Harris’s comment that:

    We in the tech industry have created the tools to destabilize and erode the fabric of society.

    That’s quite the damning claim, even if it is one of the claims in the film that probably isn’t all that controversial these days. Though many of the insiders in the film pine nostalgically for those idyllic days from ten years ago when much of the media and the public looked so warmly towards Silicon Valley, this film is being released at a moment when much of that enthusiasm has soured. One of the odd things about The Social Dilemma is that politics are simultaneously all over the film, and yet politics in the film are very slippery. When the film warns of looming authoritarianism: Bolsonaro gets some screen time, Putin gets some ominous screen time—but though Trump looms in the background of the film he’s pretty much unseen and unnamed. And when US politicians do make appearances we get Marco Rubio and Jeff Flake talking about how people have become too polarized, and Jon Tester reacting with discomfort to Harris’s testimony. Of course, in the clip that is shown, Rubio speaks some pleasant platitudes about the virtues of coming together…but what does his voting record look like?

    The treatment of politics in The Social Dilemma comes across most clearly in the narrative segment, wherein much attention is paid to a group that calls itself “The Extreme Center.” Though the ideology of this group is never made quite clear, it seems to be a conspiratorial group that takes as its position that “both sides are corrupt” – rejecting left and right it therefore places itself in “the extreme center.” It is into this group, and the political rabbit hole of its content, that Ben falls in the narrative – and the raucous rally (that ends in arrests) in the narrative segment is one put on by the “extreme center.” It may appear that “the extreme center” is just a simple storytelling technique, but more than anything else it feels like the creation of this fictional protest movement is really just a way for the film to get around actually having to deal with real world politics.

    The film includes clips from a number of protests (though it does not bother to explain who these people are and why they are protesting), and there are some moments when various people can be heard specifically criticizing Democrats or Republicans. But even as the film warns of “the rabbit hole” it doesn’t really spend much time on examples. Heck, the first time that the words “surveillance capitalism” get spoken in the film is in a clip of Tucker Carlson. Some points are made about “pizzagate” but the documentary avoids commenting on the rapidly spreading QAnon conspiracy theory. And to the extent that any specific conspiracy receives significant attention it is the “flat earth” conspiracy. Granted, it’s pretty easy to deride the flat earthers, and in focusing on them the film makes a very conscious decision to not focus on white supremacist content and QAnon. Ben falls down the “extreme center” rabbit hole, and it may well be that the filmmakers have him fall down this fictional rabbit hole so that they don’t have to talk about the fact that (in the real world) he would likely fall down a far-right rabbit hole. But The Social Dilemma doesn’t want to make that point; after all, in the political vision it puts forth the problem is that there is too much polarization and extremism on both sides.

    The Social Dilemma clearly wants to avoid taking sides. And in so doing demonstrates the ways in which Silicon Valley has taken sides. After all, to focus so heavily on polarization and the extremism of “both sides” just serves to create a false equivalency where none exists. But the view that “the Trump administration has mismanaged the pandemic” and the view that “the pandemic is a hoax” – are not equivalent. The view that “climate change is real” and “climate change is a hoax” – are not equivalent. People organizing for racial justice and people organizing because they believe that Democrats are satanic cannibal pedophiles – are not equivalent. The view that “there is too much money in politics” and the view that “the Jews are pulling the strings” – are not equivalent. Of course, to say that these things “are not equivalent” is to make a political judgment, but by refusing to make such a judgment The Social Dilemma presents both sides as being equivalent. There are people online who are organizing for the cause of racial justice, and there are white-supremacists organizing online who are trying to start a race war—those causes may look the same to an algorithm, and they may look the same to the people who created those algorithms, but they are not the same.

    You cannot address the fact that Facebook and YouTube have become hubs of violent xenophobic conspiratorial content unless you are willing to recognize that Facebook and YouTube actively push violent xenophobic conspiratorial content.

    It is certainly true that there are activist movements from the left and the right organizing online at the moment, but when you watch a movie trailer on YouTube the next recommended video isn’t going to be a talk by Angela Davis.

    “it’s the critics”

    Much of the content of The Social Dilemma is unsettling, and the film makes it clear that change is necessary. Nevertheless, the film ends on a positive note. Pivoting away from gloominess, the film shows the rapt audience nodding as Harris speaks of the need for “humane technology,” and this assembled cast of reformed insiders is presented as proof that Silicon Valley is waking up to the need to take responsibility. Near the film’s end, Jaron Lanier hopefully comments that:

    it’s the critics that drive improvement. It’s the critics who are the true optimists.

    Thus, the sense that is conveyed at the film’s close is that despite the various worries that had been expressed—the critics are working on it, and the critics are feeling good.

    But, who are the critics?

    The people interviewed in the film, obviously.

    And that is precisely the problem. “Critic” is something of a challenging term to wrestle with as it doesn’t necessarily take much to be able to call yourself, or someone else, a critic. Thus, the various insiders who are interviewed in the film can all be held up as “critics” and can all claim to be “critics” thanks to the simple fact that they’re willing to say some critical things about Silicon Valley and social media. But what is the real content of the criticisms being made? Some critics are going to be more critical than others, so how critical are these critics? Not very.

    The Social Dilemma is a redemption tour that allows a bunch of remorseful Silicon Valley insiders to rebrand themselves as critics. Based on the information provided in the film it seems fairly obvious that a lot of these individuals are responsible for causing a great deal of suffering and destruction, but the film does not argue that these men (and they are almost entirely men) should be held accountable for their deeds. The insiders have harsh things to say about algorithms, they too have been buffeted about by nonstop nudging, they are also concerned about the rabbit hole, they are outraged at how “surveillance capitalism” has warped technological possibilities—but remember, they meant well, and they are very sorry.

    One of the fascinating things about The Social Dilemma is that in one scene a person will proudly note that they are responsible for creating a certain thing, and then in the next scene they will say that nobody is really to blame for that thing. Certainly not them, they thought they were making something great! The insiders simultaneously want to enjoy the cultural clout and authority that comes from being the one who created the like button, while also wanting to escape any accountability for being the person who created the like button. They are willing to be critical of Silicon Valley, they are willing to be critical of the tools they created, but when it comes to their own culpability they are desperate to hide behind a shield of “I meant well.” The insiders do a good job of saying remorseful words, and the camera catches them looking appropriately pensive, but it’s no surprise that these “critics” should feel optimistic, they’ve made fortunes utterly screwing up society, and they’ve done such a great job of getting away with it that now they’re getting to elevate themselves once again by rebranding themselves as “critics.”

    To be a critic of technology, to be a social critic more broadly, is rarely a particularly enjoyable or a particularly profitable undertaking. Most of the time, if you say anything critical about technology you are mocked as a Luddite, laughed at as a “prophet of doom,” derided as a technophobe, accused of wanting everybody to go live in caves, and banished from the public discourse. That is the history of many of the twentieth century’s notable social critics who raised the alarm about the dangers of computers decades before most of the insiders in The Social Dilemma were born. Indeed, if you’re looking for a thorough retort to The Social Dilemma you cannot really do better than reading Joseph Weizenbaum’s Computer Power and Human Reason—a book which came out in 1976. That a film like The Social Dilemma is being made may be a testament to some shifting attitudes towards certain types of technology, but it was not that long ago that if you dared suggest that Facebook was a problem you were denounced as an enemy of progress.

    There are many phenomenal critics speaking out about technology these days. To name only a few: Safiya Noble has written at length about the ways that the algorithms built by companies like Google and Facebook reinforce racism and sexism; Virginia Eubanks has exposed the ways in which high-tech tools of surveillance and control are first deployed against society’s most vulnerable members; Wendy Hui Kyong Chun has explored how our usage of social media becomes habitual; Jen Schradie has shown the ways in which, despite the hype to the contrary, online activism tends to favor right-wing activists and causes; Sarah Roberts has pulled back the screen on content moderation to show how much of the work supposedly being done by AI is really being done by overworked and under-supported laborers; Ruha Benjamin has made clear the ways in which discriminatory designs get embedded in and reified by technical systems; Christina Dunbar-Hester has investigated the ways in which communities oriented around technology fail to overcome issues of inequality; Sasha Costanza-Chock has highlighted the need for an approach to design that treats challenging structural inequalities as the core objective, not an afterthought; Morgan Ames expounds upon the “charisma” that develops around certain technologies; and Meredith Broussard has brilliantly inveighed against the sort of “technochauvinist” thinking—the belief that technology is the solution to every problem—that is so clearly visible in The Social Dilemma. To be clear, this list of critics is far from all-inclusive. There are numerous other scholars who certainly could have had their names added here, and there are many past critics who deserve to be named for their disturbing prescience.

    But you won’t hear from any of those contemporary critics in The Social Dilemma. Instead, viewers of the documentary are provided with a steady set of mostly male, mostly white, reformed insiders who were unable to predict that the high-tech toys they built might wind up having negative implications.

    It is not only that The Social Dilemma ignores most of the figures who truly deserve to be seen as critics; in doing so, the film also sets the boundaries for who gets to be a critic and what that criticism can look like. In the world of criticism that The Social Dilemma sets up, a person achieves legitimacy as a critic of technology by virtue of having once been a tech insider. The film thus lays out, and then sets about policing the borders of, what can pass for acceptable criticism of technology. This not only limits the cast of critics to a narrow slice of mostly white, mostly male insiders, it also limits what can be put forth as a solution. You can rest assured that the former insiders are not going to advocate for a response that would involve holding the people who built these tools accountable for what they have created. It is remarkable that no one in the film really goes after Mark Zuckerberg, but then many of these insiders cannot go after Zuckerberg, because any vitriol they direct at him could just as easily be directed at them as well.

    It matters who gets to be deemed a legitimate critic. When news networks are looking to have a critic on, it matters whether they call Tristan Harris or one of the previously mentioned thinkers. When Facebook does something else horrendous, it matters whether a newspaper seeks out someone whose own self-image is bound up in the idea that the company means well, or someone who is willing to say that Facebook is itself the problem. When there are dangerous fires blazing everywhere, it matters whether the voices that get heard are apologetic arsonists or firefighters.

    Near the film’s end, as the credits play, Jaron Lanier says of Silicon Valley: “I don’t hate them. I don’t wanna do any harm to Google or Facebook. I just want to reform them so they don’t destroy the world. You know?” These comments capture the core ideology of The Social Dilemma: that Google and Facebook can be reformed, and that the people who can reform them are the people who built them.

    But considering all of the tangible harm that Google and Facebook have done, it is far past time to say that it isn’t enough to “reform” them. We need to stop them.

    Conclusion: On “Humane Technology”

    The Social Dilemma is an easy film to criticize. After all, it is a highly manipulative piece of filmmaking, filled with overly simplified claims, historical inaccuracies, politics lacking conviction, and a cast of remorseful insiders who still believe Silicon Valley’s basic mythology. The film is designed to scare you, but it then works to direct that fear into a few banal personal lifestyle tweaks, while convincing you that Silicon Valley really does mean well. It is important to view The Social Dilemma not as a genuine warning, or as a push for a genuine solution, but as part of a desperate move by Silicon Valley to rehabilitate itself so that any push for reform and regulation can be captured and defanged by “critics” of its own choosing.

    Yet it is too simple (even if it is accurate) to portray The Social Dilemma as an attempt by Silicon Valley to control both the sale of flamethrowers and the sale of fire extinguishers, because such a focus keeps our attention pinned to Silicon Valley. It is easy to criticize Silicon Valley, and Silicon Valley definitely needs to be criticized, but the bright-eyed faith in high-tech gadgets and platforms that these reformed insiders still cling to is not shared by them alone. The people in this film blame “surveillance capitalism” for warping the liberatory potential of Internet-connected technologies, and many people would respond by pushing back on Zuboff’s neologism to point out that “surveillance capitalism” is really just “capitalism,” and that the problem is therefore capitalism itself warping the liberatory potential of Internet-connected technologies. Yes, we certainly need to have a conversation about what to do with Facebook and Google (dismantle them). But at a certain point we also need to recognize that the problem is deeper than Facebook and Google; at a certain point we need to be willing to talk about computers.

    The question that occupied many past critics of technology was what kinds of technology we really need. And they were clear that this question was far too important to be left to machine-worshippers.

    The Social Dilemma responds to the question of “what kind of technology do we really need?” by saying “humane technology.” After all, the organization The Center for Humane Technology is at the core of the film, and Harris speaks repeatedly of “humane technology.” At the surface level it is hard to imagine anyone saying that they disapprove of the idea of “humane technology,” but what the film means by this (and what the organization means by this) is fairly vacuous. When the Center for Humane Technology launched in 2018, to a decent amount of praise and fanfare, it was clear from the outset that its goal had more to do with rehabilitating Silicon Valley’s image than truly pushing for a significant shift in technological forms. Insofar as “humane technology” means anything, it stands for platforms and devices that are designed to be a little less intrusive, that are designed to try to help you be your best self (whatever that means), that try to inform you instead of misinform you, and that make it so that you can think nice thoughts about the people who designed these products. The purpose of “humane technology” isn’t to stop you from being “the product,” it’s to make sure that you’re a happy product. “Humane technology” isn’t about deleting Facebook, it’s about renewing your faith in Facebook so that you keep clicking on the “like” button. And, of course, “humane technology” doesn’t seem to be particularly concerned with all of the inhumanity that goes into making these gadgets possible (from mining, to conditions in assembly plants, to e-waste). “Humane technology” isn’t about getting Ben or Isla off their phones, it’s about making them feel happy when they click on them instead of anxious. In a world of empowered arsonists, “humane technology” seeks to give everyone a pair of asbestos socks.

    Many past critics also argued that what was needed was to place a new word before technology – they argued for “democratic” technologies, or “holistic” technologies, or “convivial” technologies, or “appropriate” technologies, and this list could go on. Yet at the core of those critiques was not an attempt to salvage the status quo but a recognition that what was necessary in order to obtain a different sort of technology was to have a different sort of society. Or, to put it another way, the matter at hand is not to ask “what kind of computers do we want?” but to ask “what kind of society do we want?” and to then have the bravery to ask how (or if) computers really fit into that world—and if they do fit, how ubiquitous they will be, and who will be responsible for the mining/assembling/disposing that are part of those devices’ lifecycles. Certainly, these are not easy questions to ask, and they are not pleasant questions to mull over, which is why it is so tempting to just trust that the Center for Humane Technology will fix everything, or to just say that the problem is Silicon Valley.

    Thus as the film ends we are left squirming unhappily as Netflix (which has, of course, noted the fact that we watched The Social Dilemma) asks us to give the film a thumbs up or a thumbs down – before it begins auto-playing something else.

    The Social Dilemma is right in at least one regard: we are facing a social dilemma. But as far as the film is concerned, your role in resolving this dilemma is to sit patiently on the couch and stare at the screen until a remorseful tech insider tells you what to do.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2 Review Digital Studies section.


    _____

    Works Cited

    • Weizenbaum, Joseph. 1976. Computer Power and Human Reason: From Judgment to Calculation. New York: WH Freeman & Co.

    R. Joshua Scannell — Architectures of Managerial Triumphalism (Review of Benjamin Bratton, The Stack: On Software and Sovereignty)

    A review of Benjamin Bratton, The Stack: On Software and Sovereignty (MIT Press, 2016)

    by R. Joshua Scannell

    The Stack

    Benjamin Bratton’s The Stack: On Software and Sovereignty is an often brilliant and regularly exasperating book. It is a diagnosis of the epochal changes in the relations between software, sovereignty, climate, and capital that underwrite the contemporary condition of digital capitalism and geopolitics.  Anybody who is interested in thinking through the imbrication of digital technology with governance ought to read The Stack. There are many arguments that are useful or interesting. But reading it is an endeavor. Sprawling out across 502 densely packed pages, The Stack is nominally a “design brief” for the future. I don’t know that I understand that characterization, no matter how many times I read this tome.

    The Stack is chockablock with schematic abstractions. They make sense intuitively or cumulatively without ever clearly coming into focus. This seems to be a deliberate strategy. Early in the book, Bratton describes The Stack–the titular “accidental megastructure” of “planetary computation” that has effectively broken and redesigned, well, everything–as “a blur.” He claims that

    Only a blur provides an accurate picture of what is going on now and to come…Our description of a system in advance of its appearance maps what we can see but cannot articulate, on the one hand, versus what we know to articulate but cannot yet see, on the other. (14)

    This is also an accurate description of the prevailing sensation one feels working through the text. As Ian Bogost wrote in his review of The Stack for Critical Inquiry, reading the book feels “intense—meandering and severe but also stimulating and surprising. After a while, it was also a bit overwhelming. I’ll take the blame for that—I am not necessarily built for Bratton’s level and volume of scholarly intensity.” I agree on all fronts.

    Bratton’s inarguable premise is that the various computational technologies that collectively define the early decades of the 21st century—smart grids, cloud platforms, mobile apps, smart cities, the Internet of Things, automation—are not analytically separable. They are often literally interconnected but, more to the point, they combine to produce a governing architecture that has subsumed older calculative technologies like the nation state, the liberal subject, the human, and the natural. Bratton calls this “accidental megastructure” The Stack.

    Bratton argues that The Stack is composed of six “layers,” the earth, the cloud, the city, the address, the interface, and the user. They all indicate more or less what one might expect, but with a counterintuitive (and often Speculative Realist) twist. The earth is the earth but is also a calculation machine. The cloud is “the cloud” but as a chthonic structure of distributed networks and nodal points that reorganize sovereign power and body forth quasi-feudal corporate sovereignties. The City is, well, cities, but not necessarily territorially bounded, formally recognized, or composed of human users. Users are also usually not human. They’re just as often robots or AI scripts. Really they can be anything that works up and down the layers, interacting with platforms (which can be governments) and routed through addresses (which are “every ‘thing’ that can be computed” including “individual units of life, loaded shipping containers, mobile devices, locations of datum in databases, input and output events and enveloped entities of all size and character” [192], etc.).

    Each layer is richly thought through and described, though it’s often unclear whether the “layer” in question is “real” or a useful conceptual envelope or both or neither. That distinction is generally untenable, and Bratton would almost certainly reject the dichotomy between the “real” and the “metaphorical.” But it isn’t irrelevant for this project. He argues early on that, contra Marxist thought that understands the state metaphorically as a machine, The Stack is a “machine-as-the-state.” That’s both metaphorical and not. There really are machines that exert sovereign power, and there are plenty of humans in state apparatuses that work for machines. But there aren’t, really, machines that are states. Right?

    Moments like these, when The Stack’s concepts productively destabilize given categories (like the state) that have never been coherent enough to justify their power are when the book is at its most compelling. And many of the counterintuitive moves that Bratton makes start and end with real, important insights. For instance, the insistence on the absolute materiality, and the absolute earthiness of The Stack and all of its operations leads Bratton to a thoroughgoing and categorical rejection of the prevailing “idiot language” that frames digital technology as though it exists in a literal “cloud,” or some sort of ethereal “virtual” that is not coincident with the “real” world. Instead, in The Stack, every point of contact between every layer is a material event that transduces and transforms everything else. To this end, he inverts Latour’s famous dictum that there is no global, only local. Instead, The Stack as planetary megastructure means that there is only global. The local is a dead letter. This is an anthropocene geography in which an electron, somewhere, is always firing because a fossil fuel is burning somewhere else. But it is also a post-anthropocene geography because humans are not The Stack’s primary users. The planet itself is a calculation machine, and it is agnostic about human life. So, there is a hybrid sovereignty: The Stack is a “nomos of the earth” in which humans are an afterthought.

    A Design for What?

    Bratton is at his conceptual best when he is at his weirdest. Cyclonopedic (Negarestani 2008) passages in which the planet slowly morphs into something like H.P. Lovecraft’s and H.R. Giger’s imaginations fucking in a Peter Thiel fever dream are much more interesting (read: horrifying) than the often perfunctory “real life” examples from “real world” geopolitical trauma, like “The First Sino-Google War of 2009.” But this leads to one of the most obvious shortcomings of the text. It is supposedly a “design brief,” but it’s not clear what or who it is a design brief for.

    For Bratton, design

    means the structuring of the world in reaction to an accelerated decay and in projective anticipation of a condition that is now only the ghostliest of a virtual present tense. This is a design for accommodating (or refusing to accommodate) the post-whatever-is-melting-into-air and prototyping for pre-what-comes-next: a strategic, groping navigation (however helpless) of the punctuations that bridge between these two. (354)

    Design, then, and not theory, because Bratton’s Stack is a speculative document. Given the bewildering and potentially apocalyptic conditions of the present, he wants to extrapolate outwards. What are the heterotopias-to-come? What are the constraints? What are the possibilities? Sounding a familiar frustration with the strictures of academic labor, he argues that this moment requires something more than diagnosis and critique. Rather,

    the process by which sovereignty is made more plural becomes a matter of producing more than discoursing: more about pushing, pulling, clicking, eating, modeling, stacking, prototyping, subtracting, regulating, restoring, optimizing, leaving alone, splicing, gardening and evacuating than about reading, examining, insisting, rethinking, reminding, knowing full-well, enacting, finding problematic, and urging. (303)

    No doubt. And, not that I don’t share the frustration, but I wonder what a highly technical, 500-page diagnosis of the contemporary state of software and sovereignty published and distributed by an academic press and written for an academic audience is if not discoursing? It seems unlikely that it can serve as a blueprint for any actually-existing power brokers, even though its insights are tremendous. At the risk of sounding cynical, calling The Stack a “design brief” seems like a preemptive move to liberate Bratton from having to seriously engage with the different critical traditions that work to make sense of the world as it is in order to demand something better. This allows for a certain amount of intellectual play that can sometimes feel exhilarating but can just as often read as a dodge—as a way of escaping the ethical and political stakes that inhere in critique.

    That is an important elision for a text that is explicitly trying to imagine the geopolitics of the future. Bratton seems to pose The Stack from a nebulous “Left” position that is equally disdainful of the sort of “Folk Politics” that Srnicek and Williams (2015) so loathe and of the accelerationist tinge of the Speculative Realists with whom he seems spiritually aligned. This sense of rootlessness sometimes works in Bratton’s favor. There are long stretches in which his cherry-picking and remixing of ideas from across a bewildering array of schools of thought yield real insights. But just as often, the “design brief” characterization seems to be a way out of thinking the implications of the conjuncture through to their conclusion. There is a breeziness about how Bratton poses futures-as-thought-experiments that is troubling.

    For instance, in thinking through the potential impacts of the capacity to measure planetary processes in real time, Bratton suggests that producing a sensible world is not only a process of generalizing measurement and representation. He argues that

    the sensibility of the world might be distributed or organized, made infrastructural, and activated to become part of how the landscape understands itself and narrates itself. It is not only a diagnostic image then; it is a tool for geo-politics in formation, emerging from the parametric multiplication and algorithmic conjugation of our surplus projections of worlds to come, perhaps in mimetic accordance with one explicit utopian conception or another, and perhaps not. Nevertheless, the decision between what is and is not governable may arise as much from what the model computational image cannot do as much as what it can. (301, emphasis added)

    Reading this, I wanted to know: What explicit utopian project is he thinking about? What are the implications of it going one way and not another? Why mimetic? What does the last bit about what is and is not governable mean? Or, more to the point: who and what is going to get killed if it goes one way and not another? There are a great many instances like this over the course of the book. At the precise moment where analysis might inform an understanding of where The Stack is taking us, Bratton bows out. He’s set down the stakes, and given a couple of ideas about what might happen. I guess that’s what a design brief is meant to do.

    Another example, this time concerning the necessity of geoengineering for solving what appears to be an ever-more-imminent climatic auto-apocalypse:

    The good news is that we know for certain that short-term “geoengineering” is not only possible but in a way inevitable, but how so? How and by whom does it go, and unfortunately for us the answer (perhaps) must arrive before we can properly articulate the question. For the darker scenarios, macroeconomics completes its metamorphosis into ecophagy, as the discovery of market failures becomes simultaneously the discovery of limits of planetary sinks (e.g., carbon, heat, waste, entropy, populist politics) and vice versa; The Stack becomes our dakhma. The shared condition, if there is one, is the mutual unspeakability and unrecognizability that occupies the seat once reserved for Kantian cosmopolitanism, now just a pre-event reception for a collective death that we will actually be able to witness and experience. (354, emphasis added)

    Setting aside the point that it is not at all clear to me that geoengineering is an inevitable or even appropriate (Crist 2017) way out of the anthropocene (or capitalocene? (Moore 2016)) crisis, if the answer for “how and by whom does it go” is to arrive before the question can be properly articulated, then the stack-to-come starts looking a lot like a sort of planetary dictatorship of, well, whom? Google? Mark Zuckerberg? In-Q-Tel? Y Combinator? And what exactly is the “populist politics” that sits in the Latourian litany alongside carbon, heat, waste, and entropy as a full “planetary sink”? Does that mean Trump, and all the other globally ascendant right-wing “populists”? Or does it mean “populist politics” in the Jonathan Chait sense that can’t differentiate between left and right and therefore sees both political projects as equally dismissible? Does populism include any politics that centers the needs and demands of the public? What are the commitments in this dichotomy? I suppose The Stack wouldn’t particularly care about these sorts of questions. But a human writing a 500-page playbook so that other humans might better understand the world-to-come might be expected to. After all, a choice between geoengineering and collective death might be what the human population of the planet is facing (for most of the planet’s species, and for a great many human societies already eliminated or dragged toward that fate during the current mass extinction, there is no choice), but such a binary doesn’t make for much of a design spec.

    One final example, this time on what the political subject of the stack-to-come ought to look like:

    We…require, as I have laid out, a redefinition of the political subject in relation to the real operations of the User, one that is based not on homo economicus, parliamentary liberalism, poststructuralist linguistic reduction, or the will to secede into the moral safety of individual privacy and withdrawn from coercion. Instead, this definition should focus on composing and elevating sites of governance from the immediate, suturing interfacial material between subjects, in the stitches and the traces and the folds of interaction between bodies and things at a distance, congealing into different networks demanding very different kinds of platform sovereignty.

    If “poststructuralist linguistic reduction” is on the same plane as “parliamentary liberalism” or “homo economicus” as one among several prevailing ideas of the contemporary “political subject,” then I am fairly certain that we are in the realm of academic “theory” rather than geopolitical “design.” The more immediate point is that I do understand what the terms that we ought to abandon mean, and agree that they need to go. But I don’t understand what the redefined political subject looks like. Again, if this is “theory,” then that sort of hand waving is unfortunately often to be expected. But if it’s a design brief—even a speculative one—for the transforming nature of sovereignty and governance, then I would hope for some more clarity on what political subjectivity looks like in The Stack-To-Come.

    Or, and this is really the point, I want The Stack to tell me something more about how The Stack participates in the production and extractable circulation of populations marked for death and debility (Puar 2017). And I want to know what, exactly, is so conceptually radical about pointing out that human beings are not at the center of the planetary systems that are driving transformations in geopolitics and sovereignty. After all, hasn’t that been exactly the precondition for the emergence of The Stack? This accidental megastructure born out of the ruthless expansions of digitally driven capitalism is not just working to transform the relationship between “human” and sovereignty. The condition of its emergence is precisely that most planetary homo sapiens are not human, and are therefore disposable and disposited towards premature death. The Stack might be “our” dakhma, if we’re speaking generically as a sort of planetary humanism that cannot but be read as white—or, more accurately, “capacitated.” But the systematic construction of human stratification along lines of race, gender, sex, and ability as precondition for capitalist emergence freights The Stack with a more ancient, and ignored, calculus: that of the logistical work that shuttles humans between bodies, cargo, and capital. It is, in other words, the product of an older planetary death machine: what Fred Moten and Stefano Harney (2013) call the “logistics in the hold” that makes The Stack hum along.

    The tenor of much of The Stack is redolent of managerial triumphalism. The possibility of apocalypse is always minimized. Bratton offers, a number of times, that he’s optimistic about the future. He is disdainful of the most stringent left critics of Silicon Valley, and he thinks that we’ll probably be able to trust our engineers and institutions to work out The Stack’s world-destroying kinks. He sounds invested, in other words, in a rhetorical-political mode of thought that, for now, seems to have died on November 9, 2016. So it is not surprising that Bratton opens the book with an anecdote about Hillary Clinton’s vision of the future of world governance.

    The Stack begins with a reference to then-Secretary of State Clinton’s 2013 farewell address to the Council on Foreign Relations. In that speech, Clinton argued that the future of international governance requires a “new architecture for this new world, more Frank Gehry than formal Greek.” Unlike the Athenian Agora, which could be held up by “a few strong columns,” contemporary transnational politics is too complicated to rely on stolid architecture, and instead must make use of the type of modular assemblage that “at first might appear haphazard, but in fact, [is] highly intentional and sophisticated” that makes Gehry famous. Bratton interprets her argument as a “half-formed question, what is the architecture of the emergent geopolitics of this software society? What alignments, components, foundations, and apertures?” (Bratton 2016, 13).

    For Clinton, future governance must make a choice between Gehry and Agora. The Gehry future is that of the seemingly “haphazard” but “highly intentional and sophisticated” interlocking treaties, non-governmental organizations, and super- and supra-state technocratic actors working together to coordinate the disparate interests of states and corporations in the service of the smooth circulation of capital across a planetary logistics network. On the other side stands a world order held up by “a few strong columns”—by implication the status quo after the collapse of the Soviet Union, a transnational sovereign apparatus anchored by the United States. The glaring absence in this dichotomy is democracy—or rather its assumed subsumption into American nationalism. Clinton’s Gehry future is a system of government whose machinations are by design opaque to those that would be governed, but whose beneficence is guaranteed by the good will of the powerful. The Agora—the fountainhead of slaveholder democracy—is metaphorically reduced to its columns, particularly the United States and NATO. Not unlike ancient Athens, it’s democracy as empire.

    There is something darkly prophetic in the collapse of the Clintonian world vision, and perversely apposite in Clinton’s rhetorical move to supplant the Agora with Gehry as the proper metaphor for future government. It is unclear why a megalomaniacal corporate starchitecture firm that robs public treasuries blind and facilitates tremendous labor exploitation ought to be the future for which the planet strives.

    For better or for worse, The Stack is a book about Clinton. As a “design brief,” it works from a set of ideas about how to understand and govern the relationship between software and sovereignty that were strongly intertwined with the Clinton-Obama political project. That means, abysmally, that it is now also about Trump. And Trump hangs synecdochically over theoretical provocations for what is to be done now that tech has killed the nation-state’s “Westphalian Loop.” This was a knotty question when the book went to press in February 2016 and Gehry seemed ascendant. Now that the Extreme Center’s (Ali 2015) project of tying neoliberal capitalism to non-democratic structures of technocratic governance appears to be collapsing across the planet, Clinton’s “half-formed question” is even knottier. If we’re living through the demise of the Westphalian nation state, then it’s sounding one hell of a murderous death rattle.

    Gehry or Agora?

    In the brief period between July 21 and November 8, 2016, when the United States’ cognoscenti convinced itself that another Clinton regime was inevitable, there was a neatly ordered expectation of how “pragmatic” future governance under a prolonged Democratic regime would work. In the main, the public could look forward to another eight years sunk in a “Gehry-like” neoliberal surround subtended by the technocratic managerialism of the Democratic Party’s right edge. And while, for most of the country and planet, that arrangement did not portend much to look forward to, it was at least not explicitly nihilistic in its outlook. The focus on management, and on the deliberate dismantling of the nation state as the primary site of governance in favor of the mesh of transnational agencies and organizations that composed 21st century neoliberalism’s star actants, meant that a number of questions about how the world would be arranged were left unsettled.

    By the end of election week, that future had fractured. The unprecedented amateurishness, decrypted racism, and incomparable misogyny of the Trump campaign portended an administration that most thought couldn’t, or at the very least shouldn’t, be trusted with the enormous power of the American executive. This stood in contrast to Obama, and (perhaps to a lesser extent) to Clinton, who were assumed to be reasonable stewards. This paradoxically helps demonstrate just how much the “rule of law” and governance by administrative norms that theoretically underlie the liberal national state had already deteriorated under Obama and his immediate predecessors—a deterioration that was in many ways made feasible by the innovations of the digital technology sector. As many have pointed out, the command-and-control prerogatives that Obama claimed for the expansion of executive power depended essentially on the public perception of his personal character.

    The American people, for instance, could trust planetary drone warfare because Obama claimed to personally vet our secret kill list, and promised to be deliberate and reasonable about its targets. Of course, Obama is merely the most publicly visible part of a kill-chain that puts this discretionary power over life and death in the hands of the executive. The kill-chain is dependent on the power of, and sovereign faith in, digital surveillance and analytics technologies. Obama’s kill-chain, in short, runs on the capacities of an American warfare state—distributed at nodal points across the crust of the earth, and up its Van Allen belts—to read planetary chemical, territorial, and biopolitical fluxes and fluctuations as translatable data that can be packet-switched into a binary apparatus of life and death. This is the calculus that Obama conjures when he defines those mobile data points that concatenate into human beings as “baseball cards” that constitute a “continuing, imminent threat to the American people.” It is the work of planetary sovereignty that rationalizes and capacitates the murderous “fix” and “finish” of the drone program.

    In other words, Obama’s personal aura and eminent reasonableness legitimated an essentially unaccountable and non-localizable network of black sites and black ops (Paglen 2009, 2010) that loops backwards and forwards across the drone program’s horizontal regimes of national sovereignty and vertical regimes of cosmic sovereignty. It is, to use Clinton’s framework, a very Frank Gehry power structure. Donald Trump’s election didn’t transform these power dynamics. Instead, his personal qualities made the work of planetary computation in the service of sovereign power to kill suddenly seem dangerous or, perhaps better: unreasonable. Whether President Donald Trump would be as scrupulous as his predecessor in determining the list of humans fit for eradication was (formally speaking) a mystery, but practically a foregone conclusion. But in both presidents’ cases, the dichotomies between global and local, subject and sovereign, human and non-human that are meant to underwrite the nation state’s rights and responsibilities to act are fundamentally blurred.

    Likewise, Obama’s federal imprimatur recast the transparently disturbing decision to pursue mass distribution of privately manufactured surveillance technology – Taser’s police-worn body cameras, for instance – as a reasonable policy response to America’s dependence on heavily armed paramilitary forces to maintain white supremacy and crush the poor. Under Obama and Eric Holder, American liberals broadly trusted that digital criminal justice technologies were crucial for building a better, more responsive, and more responsible justice system. With Jeff Sessions in charge of the Department of Justice, the idea that the technologies Obama’s Presidential Task Force on 21st Century Policing had lauded as crucial for achieving the “transparency” needed to “build community trust” between historically oppressed groups and the police remained plausible instruments of progressive reform suddenly seemed absurd. Predictive policing, ubiquitous smart camera surveillance, and quantitative risk assessments sounded less like a guarantee of civil rights and more like a guarantee of civil rights violations under a president who lauds extrajudicial police power. Trump goes out of his way to confirm these civil libertarian fears, such as when he told Long Island law enforcement that “laws are stacked against you. We’re changing those laws. In the meantime, we need judges for the simplest thing — things that you should be able to do without a judge.”

    But, perhaps more to the point, the rollout of these technologies, like the rollout of the drone program, formalized a transformation in the mechanics of sovereign power that had long been underway. Stripped of the sales pitch and abstracted from the constitutional formalism that ordinarily sets the parameters for discussions of “public safety” technologies, what digital policing technologies do is flatten out the lived and living environment into a computational field. Police-worn body cameras quickly traverse the institutional terrain from a tool meant to secure civil rights against abusive officers into an artificially intelligent weapon that flags facial structures that match with outstanding warrants, that calculates changes in enframed bodily comportment to determine imminent threat to the officer-user, and that captures the observed social field as data privately owned by the public safety industry’s weapons manufacturers. Sovereignty, in this case, travels up and down a Stack of interoperative calculative procedures, with state sanction and human action just another data point in the proper administration of quasi-state violence. After all, it is Axon (formerly Taser), and not a government, that controls the servers that its body cams draw on to make real-time assessments of human danger. The state sanctions a human officer’s violence, but the decision-making apparatus that situates the violence is private, and inhuman. Inevitably, the drone war and carceral capitalism collapse into one another, as drones are outfitted with AI designed to identify crowd “violence” from the sky, a vertical parallax to pair with the officer-user’s body worn camera.

    Trump’s election seemed to show with a clarity that had hitherto been unavailable for many that wedding the American security apparatus’ planetary sovereignty to twenty years of unchecked libertarian technological triumphalism (even, or especially, if in the service of liberal principles like disruption, innovation, efficiency, transparency, convenience, and generally “making the world a better place”) might, in fact, be dangerous. When the Clinton-Obama project collapsed, its assumption that the intertwining of private and state sector digital technologies inherently improves American democracy and economy and increases individual safety and security looked absurd. The shock of Trump’s election, quickly and self-servingly blamed on Russian agents and Facebook, transformed Silicon Valley’s broadly shared Prometheanism into interrogations of the industry’s corrosive infrastructural toxicity, and of its deleterious effect on the liberal national state. If tech would ever come to Jesus, the end of 2016 would have had to be the moment. It did not.

    A few days after Trump won the election I found myself a fly on the wall in a meeting with mid-level executives for one of the world’s largest technology companies (“The Company”). We were ostensibly brainstorming how to make The Cloud a force for “global good,” but Trump’s ascendancy and all its authoritarian implications made the supposed benefits of cloud computing—efficiency, accessibility, brain-shattering storage capacity—suddenly terrifying. Instead of setting about the dubious task of imagining how a transnational corporation’s efforts to leverage the gatekeeping power over access to the data of millions, and the private control over real-time identification technology (among other things), into heavily monetized semi-feudal quasi-sovereign power could be Globally Good, we talked about Trump.

    The Company’s reps worried that, Peter Thiel excepted, tech didn’t have anybody near enough to Trump’s miasmatic fog to sniff out the administration’s intentions. It was Clinton, after all, who saw the future in global information systems. Trump, as we were all so fond of pointing out, didn’t even use a computer. Unlike with Clinton, the extent of Trump’s mania for surveillance and despotism was mysterious, if predictable. Nobody knew just how many people of color the administration had in its crosshairs, and The Company reps suggested that the tech world wasn’t sure how complicit it wanted to be in Trump’s explicitly totalitarian project. The execs extemporized on how fundamental the principles of democratic and republican government were to The Company, how committed they were to privacy, and how dangerous the present conjuncture was. As the meeting ground on, reason slowly asphyxiated on a self-evidently implausible bait hook: that it was now both the responsibility and appointed role of American capital, and particularly of the robber barons of Platform Capitalism (Srnicek 2016), to protect Americans from the fascistic grappling of American government. Silicon Valley was going to lead the #resistance against the very state surveillance and overreach that it capacitated, and The Company would lead Silicon Valley. That was the note on which the meeting adjourned.

    That’s not how things have played out. A month after that meeting, on December 14, 2016, almost all of Silicon Valley’s largest players sat down at Trump’s technology roundtable. Explaining themselves to an aghast (if credulous) public, tech’s titans argued that it was their goal to steer the new chief executive of American empire towards a maximally tractable gallimaufry of power. This argument, plus over one hundred companies’ decision to sign an amici curiae brief opposing Trump’s first attempt at a travel ban aimed at Muslims, seemed to publicly signal that Silicon Valley was prepared to #resist the most high-profile degradations of contemporary Republican government. But, in April 2017, Gizmodo inevitably reported that those same companies that appointed themselves the front line of defense against depraved executive overreach in fact quietly supported the new Republican president before he took office. The blog found that almost every major concern in the Valley donated tremendously to the Trump administration’s Presidential Inaugural Committee, which was impaneled to plan his sparsely attended inaugural parties. The Company alone donated half a million dollars. Only two tech firms donated more. It seemed an odd way to #resist.

    What struck me during the meeting was how weird it was that executives honestly believed a major transnational corporation would lead the political resistance against a president committed to the unfettered ability of American capital to do whatever it wants. What struck me afterward was how easily the boundaries between software and sovereignty blurred. The Company’s executives assumed, ad hoc, that their operation had the power to halt or severely hamper the illiberal policy priorities of government. By contrast, it’s hard to picture mid-level General Motors executives imagining that they have the capacity or responsibility to safeguard the rights and privileges of the republic. Except in an indirect way, selling cars doesn’t have much to do with the health of state and civil society. But state and civil society are precisely what Silicon Valley has privatized, monetized, and re-sold to the public. Yet even “state and civil society” is not quite enough. What Silicon Valley endeavors to produce is, pace Bratton, a planetary simulation as prime mover. The goal of digital technology conglomerates is not only to streamline the formal and administrative roles and responsibilities of the state, or to recreate the mythical meeting houses of the public sphere online. Platform capital has as its target the informational infrastructure that makes living on earth seem to make sense, to be sensible. And in that context, it’s commonsensical to imagine software as sovereignty.

    And this is the bind that will return us to The Stack. After one and a half relentless years of the Trump presidency, and a ceaseless torrent of public scandals concerning tech companies’ abuse of power, the technocratic managerial optimism that underwrote Clinton’s speech has come to a grinding halt. For the time being, at least, the “seemingly haphazard yet highly intentional and sophisticated” governance structures that Clinton envisioned are not working as they have been pitched. At the same time, the cavalcade of revelations about the depths that technology companies plumb in order to extract value from a polluted public has led many to shed delusions about the ethical or progressive bona fides of an industry built on a collective devotion to Ayn Rand. Silicon Valley is happy to facilitate authoritarianism and Nazism, to drive unprecedented crises of homelessness, to systematically undermine any glimmer of dignity in human labor, to thoroughly toxify public discourse, and to entrench and expand carceral capitalism, so long as doing so expands the platform, attracts advertising and venture capital, and increases market valuation. As Bratton points out, that’s not a particularly Californian Ideology. It’s The Stack, both Gehry and Agora.

    _____

    R. Joshua Scannell holds a PhD in Sociology from the CUNY Graduate Center. He teaches sociology and women’s, gender, and sexuality studies at Hunter College, and is currently researching the political economic relations between predictive policing programs and urban informatics systems. He is the author of Cities: Unauthorized Resistance and Uncertain Sovereignty in the Urban World (Paradigm/Routledge, 2012).


    _____

    Works Cited

    • Ali, Tariq. 2015. The Extreme Center: A Warning. London: Verso.
    • Crist, Eileen. 2016. “On the Poverty of Our Nomenclature.” In Anthropocene or Capitalocene: Nature, History, and the Crisis of Capitalism, edited by Jason W. Moore, 14-33. Oakland: PM Press.
    • Harney, Stefano, and Fred Moten. 2013. The Undercommons: Fugitive Planning and Black Study. Brooklyn: Autonomedia.
    • Moore, Jason W. 2016. “Anthropocene or Capitalocene? Nature, History, and the Crisis of Capitalism.” In Anthropocene or Capitalocene: Nature, History, and the Crisis of Capitalism, edited by Jason W. Moore, 1-13. Oakland: PM Press.
    • Negarestani, Reza. 2008. Cyclonopedia: Complicity with Anonymous Materials. Melbourne: re.press.
    • Paglen, Trevor. 2009. Blank Spots on the Map: The Dark Geography of the Pentagon’s Secret World. Boston: Dutton Adult.
    • Paglen, Trevor. 2010. Invisible: Covert Operations and Classified Landscapes. Reading: Aperture Press.
    • Puar, Jasbir. 2017. The Right to Maim: Debility, Capacity, Disability. Durham: Duke University Press.
    • Srnicek, Nick. 2016. Platform Capitalism. Boston: Polity Press.
    • Srnicek, Nick, and Alex Williams. 2016. Inventing the Future: Postcapitalism and a World Without Work. London: Verso.
  • Zachary Loeb — From Megatechnic Bribe to Megatechnic Blackmail: Mumford’s ‘Megamachine’ After the Digital Turn


    Zachary Loeb

    Without even needing to look at the copyright page, an aware reader may be able to date the work of a technology critic simply by considering the technological systems, or forms of media, being critiqued. Unfortunately, in discovering the date of a given critique one may be tempted to conclude that the critique itself must surely be dated. Past critiques of technology may be read as outdated curios, considered prescient warnings that have gone unheeded, or blithely disregarded as the pessimistic braying of inveterate doomsayers. Yet, in the case of Lewis Mumford, even though his activity peaked by the mid-1970s, it would be a mistake to deduce from this that his insights are of no value to the world of today. Indeed, when it comes to the “digital turn,” it is a “turn” in the road which Mumford saw coming.

    It would be reductive to simply treat Mumford as a critic of technology. His body of work includes literary analysis, architectural reviews, treatises on city planning, iconoclastic works of history, impassioned calls to arms, and works of moral philosophy (Mumford 1982; Miller 1989; Blake 1990; Luccarelli 1995; Wojtowicz 1996). Leo Marx described Mumford as “a generalist with strong philosophic convictions,” one whose body of work represents the steady unfolding of “a single view of reality, a comprehensive historical, moral, and metaphysical—one might say cosmological—doctrine” (L. Marx 1990: 167). In the opinion of the literary scholar Charles Molesworth, Mumford is an “axiologist with a clear social purpose: he wants to make available to society a better and fuller set of harmoniously integrated values” (Molesworth 1990: 241), while Christopher Lehmann-Haupt caricatured Mumford as “perhaps our most distinguished flagellator,” and Lewis Coser denounced him as a “prophet of doom” who “hates almost all modern ideas and modern accomplishments without discrimination” (Mendelsohn 1994: 151-152). Perhaps Mumford is captured best by Rosalind Williams, who identified him alternately as an “accidental historian” (Williams 1994: 228) and as a “cultural critic” (Williams 1990: 44), or by Don Ihde, who referred to him as an “intellectual historian” (Ihde 1993: 96). As for Mumford’s own views, he saw himself in the mold of the prophet Jonah, “that terrible fellow who keeps on uttering the very words you don’t want to hear, reporting the bad news and warning you that it will get even worse unless you yourself change your mind and alter your behavior” (Mumford 1979: 528).

    Therefore, in the spirit of this Jonah let us go see what is happening in Nineveh after the digital turn. Drawing upon Mumford’s oeuvre, particularly the two-volume The Myth of the Machine, this paper investigates similarities between Mumford’s concept of “the megamachine” and the post-digital-turn technological world. In drawing out these resonances, I pay particular attention to the ways in which computers featured in Mumford’s theorizing of the “megamachine” and informed his darkening perception. In addition, I expand upon Mumford’s concept of “the megatechnic bribe” to argue that, after the digital turn, what takes place is a move from “the megatechnic bribe” towards what I term “megatechnic blackmail.”

    In a piece provocatively titled “Prologue for Our Times,” which originally appeared in The New Yorker in 1975, Mumford drolly observed: “Even now, perhaps a majority of our countrymen still believe that science and technics can solve all human problems. They have no suspicion that our runaway science and technics themselves have come to constitute the main problem the human race has to overcome” (Mumford 1975: 374). The “bad news” is that more than forty years later a majority may still believe that.

    Towards “The Megamachine”

    The two-volume Myth of the Machine was not Mumford’s first attempt to put forth an overarching explanation of the state of the world mixing cultural criticism, historical analysis, and free-form philosophizing; he had previously attempted a similar feat with his Renewal of Life series.

    Mumford originally planned the work as a single volume, but soon came to realize that this project was too ambitious to fit within a single book jacket (Miller 1989, 299). The Renewal of Life ultimately consisted of four volumes: Technics and Civilization (1934), The Culture of Cities (1938), The Condition of Man (1944), and The Conduct of Life (1951)—of which Technics and Civilization remains the text that has received the greatest continued attention. A glance at the nearly twenty years spanned by the writing of these four books should make it obvious that they were written during a period of immense change and upheaval in the world, and this certainly impacted their shape and argument. These books fall evenly on opposite sides of two events that were to have a profound influence on Mumford’s worldview: the 1944 death of his son Geddes on the Italian front during World War II, and the dropping of atomic bombs on Hiroshima and Nagasaki in 1945.

    The four books fit oddly together and reflect Mumford’s steadily darkening view of the world—a pendulous swing from hopefulness to despair (Blake 1990, 286-287). With the Renewal of Life, Mumford sought to construct a picture of the sort of “whole” which could develop such marvelous potential, but which was so morally weak that it wound up using that strength for destructive purposes. Unwelcome though Mumford’s moralizing may have been, it was an attempt, albeit from a tragic perspective (Fox 1990), to explain why things were the way that they were, and what steps needed to be taken for positive change to occur. That the changes taking place were, in Mumford’s estimation, changes for the worse propelled him to develop concepts like “the megamachine” and the “megatechnic bribe” to explain the societal regression he was witnessing.

    By the time Mumford began work on The Renewal of Life he had already established himself as a prominent architectural critic and public intellectual. Yet he remained outside of any distinct tradition, school, or political ideology. Mumford was an iconoclastic thinker whose ethically couched regionalist radicalism, influenced by the likes of Ebenezer Howard, Thorstein Veblen, Peter Kropotkin and especially Patrick Geddes, placed him at odds with liberals and socialists alike in the early decades of the twentieth century (Blake 1990, 198-199). For Mumford the prevailing progressive and radical philosophies had been buried amongst the rubble of World War I and he felt that a fresh philosophy was needed, one that would find in history the seeds for social and cultural renewal, and Mumford thought himself well-equipped to develop such a philosophy (Miller 1989, 298-299). Mumford was hardly the first in his era to attempt such a synthesis (Lasch 1991): Oswald Spengler had already published a grim version of such a new philosophy (300). Indeed, there is something of a perhaps not-accidental parallel between Spengler’s title The Decline of the West and Mumford’s choice of The Renewal of Life as the title for his own series.

    In Mumford’s estimation, Spengler’s work was “more than a philosophy of history”; it was “a work of religious consolation” (Mumford 1938, 218). The two volumes of The Decline of the West are monuments to Prussian pessimism in which Spengler argues that cultures pass “from the organic to the inorganic, from spring to winter, from the living to the mechanical, from the subjectively conditioned to the objectively conditioned” (220). Spengler argued that this is the fate of all societies, and he believed that “the West” had entered into its winter. It is easy to read Spengler’s tracts as woebegone anti-technology dirges (Farrenkopf 2001, 110-112), or as a call for “Faustian man” (Western man) to assert dominance over the machine and wield it lest it be wielded against him (Herf 1984, 49-69); but Mumford observed that Spengler had “predicted, better than more hopeful philosophers, the disastrous downward course that modern civilization is now following” (Mumford 1938, 235). Spengler had been an early booster of the Nazi regime, if a later critic of it, and though Mumford criticized Spengler for the politics he helped unleash, Mumford still saw him as one with “much to teach the historian and the sociologist” (Mumford 1938, 227). Mumford was particularly drawn to, and influenced by, Spengler’s method of writing moral philosophy in the guise of history (Miller 1989, 301). And it may well be that Spengler’s gloomy example prompted Mumford to distance himself from being a more “hopeful” philosopher in his later writings. Nevertheless, where Spengler had gazed longingly towards the coming fall, Mumford, even in the grip of the megamachine, still believed that the fall could be avoided.

    Mumford concludes the final volume of The Renewal of Life, The Conduct of Life, with measured optimism, noting: “The way we must follow is untried and heavy with difficulty; it will test to the utmost our faith and our powers. But it is the way toward life, and those who follow it will prevail” (Mumford 1951, 292). Alas, as the following sections will demonstrate, Mumford grew steadily less confident in the prospects of “the way toward life,” and the rise of the computer only served to make the path more “heavy with difficulty.”

    The Megamachine

    The volumes of The Renewal of Life hardly had enough time to begin gathering dust before Mumford was writing another work that sought to explain why the prophesied renewal had not come. In the two volumes of The Myth of the Machine Mumford revisits the themes from The Renewal of Life while advancing an even harsher critique and developing his concept of the “megamachine.” The idea of the megamachine has been taken up for its explanatory potential by many others beyond Mumford in a range of fields: it was drawn upon by some of his contemporary critics of technology (Fromm 1968; Illich 1973; Ellul 1980), has been commented on by historians and philosophers of technology (Hughes 2004; Jacoby 2005; Mitcham 1994; Segal 1994), has been explored in post-colonial thinking (Alvares 1988), and has sparked cantankerous disagreements amongst those seeking to deploy the term to advance political arguments (Bookchin 1995; Watson 1997). It is a term that shares certain similarities with other concepts that aim to capture the essence of totalitarian technological control, such as Jacques Ellul’s “technique” (Ellul 1967) and Neil Postman’s “technopoly” (Postman 1993). It is an idea that, as I will demonstrate, is still useful for describing, critiquing, and understanding contemporary society.

    Mumford first gestured in the direction of the megamachine in his 1964 essay “Authoritarian and Democratic Technics” (Mumford 1964). There Mumford argued that small-scale technologies which require the active engagement of the human, that promote autonomy, and that are not environmentally destructive are inherently “democratic” (2-3); while large-scale systems that reduce humans to mere cogs, that rely on centralized control, and that are destructive of planet and people are essentially “authoritarian” (3-4). For Mumford, the rise of “authoritarian technics” was a relatively recent occurrence; however, by “recent” he had in mind “the fourth millennium B.C.” (3). Though Mumford considered “nuclear bombs, space rockets, and computers” all to be examples of contemporary “authoritarian technics” (5), he considered the first examples of such systems to have appeared under the aegis of absolute rulers who exploited their power and scientific knowledge for immense construction feats such as the building of the pyramids. Those endeavors had created “complex human machines composed of specialized, standardized, replaceable, interdependent parts—the work army, the military army, the bureaucracy” (3). In drawing out these two tendencies, Mumford was clearly arguing in favor of “democratic technics,” but he moved away from these terms once he coined the neologism “megamachine.”

    Like the Renewal of Life before it, The Myth of the Machine was originally envisioned as a single book (Mumford 1970, xi). The first of the two volumes represents something of a rewriting of Technics and Civilization, but gone from Technics and Human Development is the optimism that had animated the earlier work. By 1959 Mumford had dismissed Technics and Civilization as “something of a museum piece” wherein he had “assumed, quite mistakenly, that there was evidence for a weakening of faith in the religion of the machine” (Mumford 1934, 534). As Mumford wrote The Myth of the Machine he found himself looking at decades of so-called technological progress and seeking an explanation as to why this progress seemed to primarily consist of mountains of corpses and rubble.

    With the rise of kingship, in Mumford’s estimation, so too came the ability to assemble and command people on a scale that had been previously unknown (Mumford 1967, 188). This “machine” functioned by fully integrating all of its components to complete a particular goal and “when all the components, political and economic, military, bureaucratic and royal, must be included” what emerges is “the megamachine” and along with it “megatechnics” (188-189). It was a structure in which, originally, the parts were not made of steel, glass, stone or copper but flesh and blood—though each human component was assigned and slotted into a position as though they were a cog. While the fortunes of the megamachine ebbed and flowed for a period, Mumford saw the megamachine as becoming resurgent in the 1500s as faith in the “sun god” came to be replaced by the “divine king” exploiting new technical and scientific knowledge (Mumford 1970: 28-50). Indeed, in assessing the thought of Hobbes, Mumford goes so far as to state “the ultimate product of Leviathan was the megamachine, on a new and enlarged model, one that would completely neutralize or eliminate its once human parts” (100).

    Unwilling to mince words, Mumford had started The Myth of the Machine by warning that with the “new ‘megatechnics’ the dominant minority will create a uniform, all-enveloping, super-planetary structure, designed for automatic operation” in which “man will become a passive, purposeless, machine-conditioned animal” (Mumford 1967, 3). Writing at the close of the 1960s, Mumford observed that the impossible fantasies of the controllers of the original megamachines were now actual possibilities (Mumford 1970, 238). The rise of the modern megamachine was the result of a series of historic occurrences: the French Revolution, which replaced the power of the absolute monarch with the power of the nation state; World War I, wherein scientists and scholars were brought into service of the state whilst moderate social welfare programs were introduced to placate the masses (245); and finally the emergence of tools of absolute control and destructive power, such as the atom bomb (253). Figures like Stalin and Hitler were not exceptions to the rule of the megamachine but only instances that laid bare “the most sinister defects of the ancient megamachine”: its violent, hateful, and repressive tendencies (247).

    Even though the power of the megamachine may make it seem that resistance is futile, Mumford was no defeatist. Indeed, The Pentagon of Power ends with a gesture towards renewal that is reminiscent of his argument in The Conduct of Life—albeit with a recognition that the state of the world had grown steadily more perilous. A core element of Mumford’s arguments is that the megamachine’s power was reliant on the belief invested in it (the “myth”), but if such belief in the megamachine could be challenged, so too could the megamachine itself (Miller 1989, 156). The Pentagon of Power met with a decidedly mixed reaction: it was a main selection of the Book-of-the-Month Club, and The New Yorker serialized much of the argument about the megamachine (157). Yet, many of the reviewers of the book denounced Mumford for his pessimism; it was in a review of the book in the New York Times that Mumford was dubbed “our most distinguished flagellator” (Mendelsohn 1994, 151-154). And though Mumford chafed at being dubbed a “prophet of doom” (Segal 1994, 149) it is worth recalling that he liked to see himself in the mode of that “prophet of doom” Jonah (Mumford 1979).

    After all, even though Mumford held out hope that the megamachine could be challenged—that the Renewal of Life could still beat back The Myth of the Machine—he glumly acknowledged that the belief that the megamachine was “absolutely irresistible” and “ultimately beneficent…still enthralls both the controllers and the mass victims of the megamachine today” (Mumford 1967, 224). Mumford described this myth as operating like a “magical spell,” but as the discussion of the megatechnic bribe will demonstrate, it is not so much that the audience is transfixed as that they are bought off. Nevertheless, before turning to the topic of the bribe and blackmail, it is necessary to consider how the computer fit into Mumford’s theorizing of the megamachine.

    The Computer and the Megamachine

    Five years after the publication of The Pentagon of Power, Mumford was still claiming that “the Myth of the Machine” was “the ultimate religion of our seemingly rational age” (Mumford 1975, 375). While it is certainly fair to note that Mumford’s “today” is not our today, it would be foolhardy to merely dismiss the idea of the megamachine as anachronistic moralizing. And to appreciate the megamachine’s full prescience and continued utility, it is worth closely reading the text to consider the ways in which Mumford was writing about the computer—before the digital turn.

    Writing to his friend, the British garden city advocate Frederic J. Osborn, Mumford noted: “As to the megamachine, the threat that it now offers turns out to be even more frightening, thanks to the computer, than even I in my most pessimistic moments had ever suspected. Once fully installed our whole lives would be in the hands of those who control the system…no decision from birth to death would be left to the individual” (M. Hughes 1971, 443). It may be that Mumford was merely engaging in a bit of hyperbolic flourish in referring to his view of the computer as trumping his “most pessimistic moments,” but Mumford was no stranger to (or enemy of) pessimistic moments. Mumford was always searching for fresh evidence of “renewal”; his deepening pessimism points to the types of evidence he was actually finding. In constructing a narrative that traced the origins of the megamachine across history Mumford had been hoping to show “that human nature is biased toward autonomy and against submission to technology” (Miller 1990, 157), but in the computer Mumford saw evidence pointing in the opposite direction.

    In assessing the computer, Mumford drew a contrast between the basic capabilities of the computers of his day and the direction in which he feared that “computerdom” was moving (Mumford 1970, plate 6). Computers to him were not simply about controlling “the mechanical process” but also “the human being who once directed it” (189). Moving away from historical antecedents like Charles Babbage, Mumford emphasized Norbert Wiener’s attempt to highlight human autonomy, and he praised Wiener’s concern about the tendency on the part of some technicians to begin to view the world only in terms of the sorts of data that computers could process (189). Mumford saw some of the enthusiasm for the computer’s capability as being rather “over-rated” and he cited instances—such as the computer failure in the case of the Apollo 11 moon landing—as evidence that computers were not quite as all-powerful as some claimed (190). In the midst of a growing ideological adoration for computers, Mumford argued that their “life-efficiency and adaptability…must be questioned” (190). Mumford’s critique of computers can be read as an attempt on his part to undermine the faith in computers while such a belief was still in its nascent cult state—before it could become a genuine world religion.

    Mumford does not assume a wholly dismissive position towards the computer. Instead he takes a stance toward it that is similar to his position towards most forms of technology: its productive use “depends upon the ability of its human employers quite literally to keep their own heads, not merely to scrutinize the programming but to reserve the right for ultimate decision” (190). To Mumford, the computer “is a big brain in its most elementary state: a gigantic octopus, fed with symbols instead of crabs,” but just because it could mimic some functions of the human mind did not mean that the human mind should be discarded (Mumford 1967, 29). The human brain was for Mumford infinitely more complex than a computer could be, and even where computers might catch up in terms of quantitative comparison, Mumford argued that the human brain would always remain superior in qualitative terms (39). Mumford had few doubts about the capability of computers to perform the functions for which they had been programmed, but he saw computers as fundamentally “closed” systems whereas the human mind was an “open” one; computers could follow their programs but he did not think they could invent new ones from scratch (Mumford 1970, 191). For Mumford the rise in the power of computers was linked largely to the shift away from “old-fashioned” machines such as Babbage’s Calculating Engine—and towards the new digital and electric machines which were becoming smaller and more commonplace (188). And though Mumford clearly respected the ingenuity of scientists like Wiener, he amusingly suggested that “the exorbitant hopes for a computer dominated society” were really the result of “the ‘pecuniary-pleasure’ center” (191). While Mumford’s measured consideration of the computer’s basic functioning is important, what is of greater significance is his thinking regarding the computer’s place in the megamachine.

    Whereas much of Technics and Human Development focuses upon the development of the first megamachine, in The Pentagon of Power Mumford turns his focus to the fresh incarnation of the megamachine. This “new megamachine” was distinguished by the way in which it steadily did away with the need for the human altogether—now that there were plenty of actual cogs (and computers) human components were superfluous (258). To Mumford, scientists and scholars had become a “new priesthood” who had abdicated their freedom and responsibility as they came to serve the “megamachine” (268). But if they were the “priesthood,” then whom did they serve? As Mumford explained, in the command position of this new megamachine was to be found a new “ultimate ‘decision-maker’ and Divine King,” and this figure had emerged in “a transcendent, electronic form”: it was “the Central Computer” (273).

    Mumford was writing in 1970, before the rise of the personal computer or the smartphone, and his warnings about computers may have seemed somewhat excessive at the time. Yet, in imagining the future of a “computer dominated society” Mumford was forecasting that the growth of the computer’s power meant the consolidation of control by those already in power. Whereas the rulers of yore had dreamt of being all-seeing, with the rise of the computer such power ceased being merely a fantasy as “the computer turns out to be the Eye of the reinstated Sun God” capable of exacting “absolute conformity to his demands, because no secret can be hidden from him, and no disobedience can go unpunished” (274). And this “eye” saw a great deal: “In the end, no action, no conversation, and possibly in time no dream or thought would escape the wakeful and relentless eye of this deity: every manifestation of life would be processed into the computer and brought under its all-pervading system of control. This would mean, not just the invasion of privacy, but the total destruction of autonomy: indeed the dissolution of the human soul” (274-275). The mention of “the human soul” may be evocative of a standard bit of Mumfordian moralizing, but the rest of this quote has more to say about companies like Google and Facebook, as well as about the mass surveillance of the NSA, than many things written since. Indeed, there is something almost quaint about Mumford writing of “no action” decades before social media made it so that an action not documented on social media is of questionable veracity. And the comment regarding “no conversation” seems uncomfortably apt in an age where people are cautioned not to disclose private details in front of their smart TVs and in which the Internet of Things populates people’s homes with devices that are always listening.

    Mumford may have written these words in the age of large mainframe computers, but his comments on “the total destruction of autonomy” and the push towards “computer dominated society” demonstrate that he did not believe that the power of such machines could be safely locked away. Indeed, that Mumford saw the computer as an example of an “authoritarian technic” makes it highly questionable that he would have been swayed by the idea that personal computers could grant individuals more autonomy. Rather, as I discuss below, it is far more likely that he would have seen the personal computer as precisely the sort of democratic-seeming gadget used to “bribe” people into accepting the larger “authoritarian” system. For it is precisely through the placing of personal computers in people’s homes, and eventually on their persons, that the megamachine is able to advance towards its goal of total control.

    The earlier incarnations of the megamachine had dreamt of the sort of power that actually became available in the aftermath of World War II thanks to “nuclear energy, electric communication, and the computer” (274). And finally the megamachine’s true goal became clear: “to furnish and process an endless quantity of data, in order to expand the role and ensure the domination of the power system” (275). In short, the ultimate purpose of the megamachine was to further the power and enhance the control of the megamachine itself. It is easy to see in this a warning about the dangers of “big data” many decades before that term had entered into common use. Aware of how odd these predictions may have sounded to his contemporaries, Mumford recognized that only a few decades earlier such ideas could have been dismissed as just so much “satire,” but he emphasized that such alarming potentialities were now either already in existence or nearly within reach (275).

    In the twenty-first century, after the digital turn, it is easy to find examples of entities that fit the bill of the megamachine. It may, in fact, be easier to do this today than it was during Mumford’s lifetime. For one no longer needs to engage in speculative thinking to find examples of technologies that ensure that “no action” goes unnoticed. The handful of massive tech conglomerates that dominate the digital world today—companies like Google, Facebook, and Amazon—seem almost scarily apt manifestations of the megamachine. Under these platforms “every manifestation of life” gets “processed into the computer and brought under its all-pervading system of control,” whether it be what a person searches for, what they consider buying, how they interact with friends, how they express their likes, what they actually purchase, and so forth. And as these companies compete for data they work to ensure that nothing is missed by their “relentless eye[s].” Furthermore, though these companies may be technology firms, they are like the classic megamachines insofar as they bring together the “political and economic, military, bureaucratic and royal.” Granted, today’s “royal” are not those who have inherited their thrones but those who owe their thrones to the tech empires at the heads of which they sit. And the status of these platforms’ users, reduced as they are to cogs supplying an endless stream of data, further demonstrates the totalizing effects of the megamachine as it coordinates all actions to serve its purposes. And yet, Google, Facebook, and Amazon are not the megamachine, but rather examples of megatechnics; the megamachine is the broader system of which all of those companies are merely parts.

    Though the chilling portrait created by Mumford seems to suggest a definite direction, and a grim final destination, Mumford tried to highlight that such a future “though possible, is not determined, still less an ideal condition of human development” (276). Nevertheless, it is clear that Mumford saw the culmination of “the megamachine” in the rise of the computer and the growth of “computer dominated society.” Thus, “the megamachine” is a forecast of the world after “the digital turn.” Yet, the continuing strength of Mumford’s concept is based not only on the prescience of the idea itself, but in the way in which Mumford sought to explain how it is that the megamachine secures obedience to its strictures. It is to this matter that our attention, at last, turns.

    From the Megatechnic Bribe to Megatechnic Blackmail

    To explain how the megamachine had maintained its power, Mumford provided two answers, both of which avoid treating the megamachine as a merely “autonomous” force (Winner 1989, 108-109). The first is the titular idea itself: “the ultimate religion of our seemingly rational age,” which he dubbed “the myth of the machine” (Mumford 1975, 375). The key component of this “myth” is “the notion that this machine was, by its very nature, absolutely irresistible—and yet, provided that one did not oppose it, ultimately beneficial” (Mumford 1967, 224)—once assembled and set into action the megamachine appears inevitable, and those living in megatechnic societies are conditioned from birth to think of the megamachine in such terms (Mumford 1970, 331).

    Yet the second part of the myth is at least as important, if not more so: it is not merely that the megamachine appears “absolutely irresistible” but that many are convinced that it is “ultimately beneficial.” This feeds into what Mumford described as “the megatechnic bribe,” a concept which he first sketched briefly in “Authoritarian and Democratic Technics” (Mumford 1964, 6) but which he fully developed in The Pentagon of Power (Mumford 1970, 330-334). The “bribe” functions by offering those who go along with it a share in the “perquisites, privileges, seductions, and pleasures of the affluent society” so long, that is, as they do not question or ask for anything different from that which is offered (330). And this, Mumford recognizes, is a truly tempting offer, as it allows its recipients to believe they are personally partaking in “progress” (331). After all, a “bribe” only really works if what is offered is actually desirable. But, Mumford warns, once people opt for the megamachine and become acclimated to the air-conditioned pleasure palace of the megatechnic bribe, “no other choices will remain” (332).

    By means of this “bribe,” the megamachine is able to effect an elaborate bait and switch: one through which people are convinced that an authoritarian technic is actually a democratic one. For the bribe accepts “the basic principle of democracy, that every member of society should have a share in its goods” (Mumford 1964, 6). Mumford did not deny the impressive things with which people were being bribed, but to see them as only beneficial required, in his estimation, a one-sided assessment which ignored “long-term human purposes and a meaningful pattern of life” (Mumford 1970, 333). It entailed confusing the interests of the megamachine with the interests of actual people. Thus, the problem was not the gadgets as such, but the system in which these things were created and produced, and the purposes for which they were disseminated: the true purpose of these things was to incorporate people into the megamachine (334). The megamachine created a strange and hostile new world, but offered its denizens bribes to convince them that life in this world was actually a treat. Ruminating on the persuasive power of the bribe, Mumford wondered if democracy could survive after “our authoritarian technics consolidates its powers, with the aid of its new forms of mass control, its panoply of tranquilizers and sedatives and aphrodisiacs” (Mumford 1964, 7). And in typically Jonah-like fashion, Mumford balked at the very question, noting that in such a situation “life itself will not survive, except what is funneled through the mechanical collective” (7).

    If one chooses to take the framework of the “megatechnic bribe” seriously, then it is easy to see it at work in the twenty-first century. It is the bribe that stands astride the dais at every gaudy tech launch; it is the bribe which beams down from billboards touting the slightly sleeker design of the new smartphone; it is the bribe which promises connection or health or beauty or information or love or even technological protection from the forces that technology has unleashed. The bribe is the offer of the enticing positives that distracts from the legion of downsides. And in all of these cases that which is offered is that which ultimately enhances the power of the megamachine. As Mumford feared, the values that wind up being transmitted across these “bribes,” though they may attempt a patina of concern for moral or democratic values, are mainly concerned with reifying (and deifying) the values of the system offering up these forms of bribery.

    Yet this reading should not be taken as a curmudgeonly rejection of technology as such. In keeping with Mumford’s stance, one can recognize that the things put on offer after the digital turn provide people with an impressive array of devices and platforms, but such niceties also seem like the pleasant distraction that masks and normalizes rampant surveillance, environmental destruction, labor exploitation, and the continuing concentration of wealth in a few hands. It is not that there is a total lack of awareness about the downsides of the things that are offered as “bribes,” but that the offer is too good to refuse. And especially if one has come to believe that the technological status quo is “absolutely irresistible,” then it makes sense why one would want to conclude that this situation is “ultimately beneficial.” As Langdon Winner put it several decades ago, “the prevailing consensus seems to be that people love a life of high consumption, tremble at the thought that it might end, and are displeased about having to clean up the messes that technologies sometimes bring” (Winner 1986, 51). Such a sentiment is the essence of the bribe.

    Nevertheless, it seems that more thought needs to be given to the bribe after the digital turn, the point after which the bribe has already become successful. The background of the Cold War may have provided a cultural space for Mumford’s skepticism, but, as Wendy Hui Kyong Chun has argued, with the technological advances around the Internet in the last decade of the twentieth century, “technology became once again the solution to political problems” (Chun 2006, 25). Therefore, in the twenty-first century it is no longer merely a matter of deploying bribery to secure loyalty to a system of control towards which there is substantial skepticism. Or, to put it slightly differently, at this point there are not many people who still really need to be convinced that they should use a computer. We no longer need to hypothesize about “computer dominated society,” for we already live there. After all, the technological value systems about which Mumford was concerned have now gained significant footholds not only in the corridors of power, but in every pocket that contains a smartphone. It would be easy to walk through the library brimming with e-books touting the wonders of all that is digital and persuasively disseminating the ideology of the bribe, but such “sugar-coated soma pills”—to borrow a turn of phrase from Howard Segal (1994, 188)—serve more as examples of the continued existence of the bribe than as explanations of how it has changed.

    At the end of her critical history of social media, José van Dijck offers what can be read as an important example of how the bribe has changed when she notes that “opting out of connective media is hardly an option. The norm is stronger than the law” (Van Dijck 2013, 174). On a similar note, Laura Portwood-Stacer, in her study of Facebook abstention, portrays the very act of not being on that social media platform as “a privilege in itself”—an option that is not available to all (Portwood-Stacer 2012, 14). In interviews with young people, Sherry Turkle has found many “describing how smartphones and social media have infused friendship with the Fear of Missing Out” (Turkle 2015, 145). Though smartphones and social media platforms certainly make up the megamachine’s ecosystem of bribes, what Van Dijck, Portwood-Stacer, and Turkle point to is an important shift in the functioning of the bribe. Namely, that today we have moved from the megatechnic bribe towards what can be called “megatechnic blackmail.”

    Whereas the megatechnic bribe was concerned with assimilating people into the “new megamachine,” megatechnic blackmail is what occurs once the bribe has already been largely successful. This is not to claim that the bribe does not still function—for it surely does through the mountain of new devices and platforms that are constantly being rolled out—but, rather, that it does not work by itself. The bribe is what is at work when something new is being introduced, it is what convinces people that the benefits outweigh any negative aspects, and it matches the sense of “irresistibility” with a sense of “beneficence.” Blackmail, in this sense, works differently—it is what is at work once people become all too aware of the negative side of smartphones, social media, and the like. Megatechnic blackmail is what occurs once, as Van Dijck put it, “the norm” becomes “stronger than the law” as here it is not the promise of something good that draws someone in but the fear of something bad that keeps people from walking away.

    This puts the real “fear” in the “fear of missing out,” which no longer needs to promise “use this platform because it’s great” but can instead now threaten “you know there are problems with this platform, but use it or you will not know what is going on in the world around you.” The shift from bribe to blackmail can further be seen in the consolidation of control in the hands of fewer companies behind the bribes—the inability of an upstart social network (a fresh bribe) to challenge the dominant social network is largely attributable to the latter having moved into a blackmail position. It is no longer the case that a person, in a Facebook-saturated society, has a lot to gain by joining the site, but that (if they have already accepted its bribe) they have a lot to lose by leaving it. The bribe secures the adoration of the early adopters, and it convinces the next wave of users to jump on board, but blackmail is what ensures their fealty once the shiny veneer of the initial bribe begins to wear thin.

    Mumford had noted that in a society wherein the bribe was functioning smoothly, “the two unforgivable sins, or rather punishable vices, would be continence and selectivity” (Mumford 1970, 332) and blackmail is what keeps those who would practice “continence and selectivity” in check. As Portwood-Stacer noted, abstention itself may come to be a marker of performative privilege—to opt out becomes a “vice” available only to those who can afford to engage in it. To not have a smartphone, to not have a Facebook account, to not buy things on Amazon, or use Google, becomes either a signifier of one’s privilege or marks one as an outsider.

    Furthermore, choosing to renounce a particular platform (or to use it less) rarely entails swearing off the ecosystem of megatechnics entirely. As far as the megamachine is concerned, insofar as options are available and one can exercise a degree of “selectivity,” what matters is that one is still selecting within that which is offered by the megamachine. The choice between competing systems of particular megatechnics is still a choice that takes place within the framework of the megamachine. Thus, Douglas Rushkoff’s call “program or be programmed” (Rushkoff 2010) appears less as a rallying cry of resistance than as a quiet acquiescence: one can program, or one can be programmed, but what is unacceptable is to try to pursue a life outside of programs. Here the turn that seeks to rediscover the Internet’s once emancipatory promise in wikis, crowd-funding, digital currency, and the like speaks to a subtle hope that the problems of the digital day can be defeated by doubling down on the digital. From this technologically optimistic view the problem with companies like Google and Facebook is that they have warped the anarchic promise, violated the independence, of cyberspace (Barlow 1996; Turner 2006); or that capitalism has undermined the radical potential of these technologies (Fuchs 2014; Srnicek and Williams 2015). Yet, from Mumford’s perspective such hopes and optimism are unwarranted. Indeed, they are the sort of democratic fantasies that serve to cover up the fact that the computer, at least for Mumford, was ultimately still an authoritarian technology. For the megamachine it does not matter if the smartphone with a Twitter app is used by the President or by an activist: either use is wholly acceptable insofar as both serve to deepen immersion in the “computer dominated society” of the megamachine. And thus, as to the hope that megatechnics can be used to destroy the megamachine, it is worth recalling Mumford’s quip, “Let no one imagine that there is a mechanical cure for this mechanical disease” (Mumford 1954, 50).

    In this situation the only thing worse than falling behind or missing out is to actually challenge the system itself: to practice, or to argue that others should practice, “continence and selectivity” leads to one being denounced as a “technophobe” or “Luddite.” That kind of derision fits well with Mumford’s observation that the attempt to live “detached from the megatechnic complex,” to be “cockily independent of it, or recalcitrant to its demands, is regarded as nothing less than a form of sabotage” (Mumford 1970, 330). Minor criticisms can be permitted if they are of the type that can be assimilated and used to improve the overall functioning of the megamachine, but the unforgivable heresy is to challenge the megamachine itself. It is acceptable to claim that a given company should be more mindful of a given social concern, but it is unacceptable to claim that the world would actually be a better place if that company were no more. One sees further signs of the threat of this sort of blackmail at work in the opening pages of critical books about technology aimed at the popular market, wherein the authors dutifully declare that though they have some criticisms they are not anti-technology. Such moves are not the signs of people merrily cooperating with the bribe, but of people recognizing that they can contribute to a kinder, gentler bribe (to a greater or lesser extent) or risk being banished to the margins as fuddy-duddies, kooks, environmentalist weirdos, or people who really want everyone to go back to living in caves. The “myth of the machine” thrives on the belief that there is no alternative. One is permitted (in some circumstances) to say “don’t use Facebook,” but one cannot say “don’t use the Internet.” Blackmail is what helps to bolster the structure that unfailingly frames the megamachine as “ultimately beneficial.”

    The megatechnic bribe dazzles people by muddling the distinction between, to use a comparison Mumford was fond of, “the goods life” and “the good life.” But megatechnic blackmail warns those who grow skeptical of this patina of “the good life” that they can either settle for “the goods life” or look forward to an invisible life on the margins. Those who can’t be bribed are blackmailed. Thus it is no longer just that the myth of the machine rests on the idea that the megamachine is “absolutely irresistible” and “ultimately beneficial,” but that it now includes the idea that to push back is “unforgivably detrimental.”

    Conclusion

    Of the various biblical characters from whom one can draw inspiration, Jonah is something of an odd choice for a public intellectual. After all, Jonah first flees from his prophetic task, sleeps in the midst of a perilous storm, and upon delivering the prophecy retreats to a hillside to glumly wait to see if the prophesied destruction will come. There is a certain degree to which Jonah almost seems disappointed that the people of Nineveh mend their ways and are forgiven by God. Yet some of Jonah’s frustrated disappointment flows from his sense that the whole ordeal was pointless—he had always known that God would forgive the people of Nineveh and not destroy the city. Given that, why did Jonah have to leave the comfort of his home in the first place? (JPS 1999, 1333-1337). Mumford always hoped to be proven wrong. As he put it in the very talk in which he introduced himself as Jonah, “I would die happy if I knew that on my tombstone could be written these words, ‘This man was an absolute fool. None of the disastrous things that he reluctantly predicted ever came to pass!’ Yes: then I could die happy” (Mumford 1979, 528). But those words do not appear on Mumford’s tombstone.

    Assessing whether Mumford was “an absolute fool” and whether any “of the disastrous things that he reluctantly predicted ever came to pass” is a tricky mire to traverse. For the way that one responds probably has as much to do with whether or not one shares Mumford’s outlook as with anything in particular he wrote. During his lifetime Mumford had no shortage of critics who viewed him as a stodgy pessimist. But what is one to expect if one is trying to follow the example of Jonah? If you see yourself as “that terrible fellow who keeps on uttering the very words you don’t want to hear, reporting the bad news and warning you that it will get even worse unless you yourself change your mind and alter your behavior” (528), then you can hardly be surprised when many choose to dismiss you as a way of dismissing the bad news you bring.

    Yet it has been the contention of this paper that Mumford should not be ignored—and that his thought provides a good tool to think with after the digital turn. In his introduction to the 2010 edition of Mumford’s Technics and Civilization, Langdon Winner notes that it “openly challenged scholarly conventions of the early twentieth century and set the stage for decades of lively debate about the prospects for our technology-centered ways of living” (Mumford 2010, ix). Even if the concepts from The Myth of the Machine have not “set the stage” for debate in the twenty-first century, the ideas that Mumford develops there can pose useful challenges for present discussions around “our technology-centered ways of living.” True, “the megamachine” is somewhat clunky as a neologism, but as a term that encompasses the technical, political, economic, and social arrangements of a powerful system it provides a better shorthand for capturing the essence of Google or the NSA than many other terms. Mumford clearly saw the rise of the computer as the invention through which the megamachine would be able to fully secure its throne. At the same time, the idea of the “megatechnic bribe” is a thoroughly discomforting explanation for how people can grumble about Apple’s labor policies or Facebook’s uses of user data while eagerly lining up to upgrade to the latest model of iPhone or clicking “like” on a friend’s vacation photos. But in the present day the bribe has matured beyond a purely pleasant offer into a sort of threat that compels consent. Indeed, the idea of the bribe may be among Mumford’s grandest moves in the direction of telling people what they “don’t want to hear.” It is discomforting to think of your smartphone as something being used to “bribe” you, but it may be unsettling precisely because that claim resonates.

    Lewis Mumford never performed a Google search, never made a Facebook account, never Tweeted or owned a smartphone or a tablet, and his home was not a repository for the doodads of the Internet of Things. But it is doubtful that he would have been overly surprised by any of them. Though he may have appreciated them for their technical capabilities he would have likely scoffed at the utopian hopes that are hung upon them. In 1975 Mumford wrote: “Behold the ultimate religion of our seemingly rational age—the Myth of the Machine! Bigger and bigger, more and more, farther and farther, faster and faster became ends in themselves, as expressions of godlike power; and empires, nations, trusts, corporations, institutions, and power-hungry individuals were all directed to the same blank destination” (Mumford 1975, 375).

    Is this assessment really so outdated today? If so, perhaps the stumbling block is merely the term “machine,” which had more purchase in the “our” of Mumford’s age than in our own. Today, that first line would need to be rewritten to read “the Myth of the Digital” —but other than that, little else would need to be changed.

    _____

    Zachary Loeb is a graduate student in the History and Sociology of Science department at the University of Pennsylvania. His research focuses on technological disasters, computer history, and the history of critiques of technology (particularly the work of Lewis Mumford). He is a frequent contributor to The b2 Review Digital Studies section.


    _____

    Works Cited

    • Alvares, Claude. 1988. “Science, Colonialism, and Violence: A Luddite View” In Science, Hegemony and Violence: A Requiem for Modernity, edited by Ashis Nandy. Delhi: Oxford University Press.
    • Barlow, John Perry. 1996. “A Declaration of the Independence of Cyberspace” (Feb 8).
    • Blake, Casey Nelson. 1990. Beloved Community: The Cultural Criticism of Randolph Bourne, Van Wyck Brooks, Waldo Frank, and Lewis Mumford. Chapel Hill: The University of North Carolina Press.
    • Bookchin, Murray. 1995. Social Anarchism or Lifestyle Anarchism: An Unbridgeable Chasm. Oakland: AK Press.
    • Cowley, Malcolm, and Bernard Smith, eds. 1938. Books That Changed Our Minds. New York: The Kelmscott Editions.
    • Ezrahi, Yaron, Everett Mendelsohn, and Howard P. Segal, eds. 1994. Technology, Pessimism, and Postmodernism. Amherst: University of Massachusetts Press.
    • Ellul, Jacques. 1967. The Technological Society. New York: Vintage Books.
    • Ellul, Jacques. 1980. The Technological System. New York: Continuum.
    • Farrenkopf, John. 2001. Prophet of Decline: Spengler on World History and Politics. Baton Rouge: LSU Press.
    • Fox, Richard Wightman. 1990. “Tragedy, Responsibility, and the American Intellectual, 1925-1950.” In Lewis Mumford: Public Intellectual, edited by Thomas P. Hughes and Agatha C. Hughes. New York: Oxford University Press.
    • Fromm, Erich. 1968. The Revolution of Hope: Toward a Humanized Technology. New York: Harper & Row, Publishers.
    • Fuchs, Christian. 2014. Social Media: A Critical Introduction. Los Angeles: Sage.
    • Herf, Jeffrey. 1984. Reactionary Modernism: Technology, Culture, and Politics in Weimar and the Third Reich. Cambridge: Cambridge University Press.
    • Hughes, Michael, ed. 1971. The Letters of Lewis Mumford and Frederic J. Osborn: A Transatlantic Dialogue, 1938-1970. New York: Praeger Publishers.
    • Hughes, Thomas P. and Agatha C. Hughes. 1990. Lewis Mumford: Public Intellectual. New York: Oxford University Press.
    • Hughes, Thomas P. 2004. Human-Built World: How to Think About Technology and Culture. Chicago: University of Chicago Press.
    • Hui Kyong Chun, Wendy. 2006. Control and Freedom. Cambridge: The MIT Press.
    • Ihde, Don. 1993. Philosophy of Technology: an Introduction. New York: Paragon House.
    • Jacoby, Russell. 2005. Picture Imperfect: Utopian Thought for an Anti-Utopian Age. New York: Columbia University Press.
    • JPS Hebrew-English Tanakh. 1999. Philadelphia: The Jewish Publication Society.
    • Lasch, Christopher. 1991. The True and Only Heaven: Progress and Its Critics. New York: W. W. Norton and Company.
    • Luccarelli, Mark. 1996. Lewis Mumford and the Ecological Region: The Politics of Planning. New York: The Guilford Press.
    • Marx, Leo. 1988. The Pilot and the Passenger: Essays on Literature, Technology, and Culture in the United States. New York: Oxford University Press.
    • Marx, Leo. 1990. “Lewis Mumford: Prophet of Organicism.” In Lewis Mumford: Public Intellectual, edited by Thomas P. Hughes and Agatha C. Hughes. New York: Oxford University Press.
    • Marx, Leo. 1994. “The Idea of ‘Technology’ and Postmodern Pessimism.” In Does Technology Drive History? The Dilemma of Technological Determinism, edited by Merritt Roe Smith and Leo Marx. Cambridge: MIT Press.
    • Mendelsohn, Everett. 1994. “The Politics of Pessimism: Science and Technology, Circa 1968.” In Technology, Pessimism, and Postmodernism, edited by Yaron Ezrahi, Everett Mendelsohn, and Howard P. Segal. Amherst: University of Massachusetts Press.
    • Miller, Donald L. 1989. Lewis Mumford: A Life. New York: Weidenfeld and Nicolson.
    • Molesworth, Charles. 1990. “Inner and Outer: The Axiology of Lewis Mumford.” In Lewis Mumford: Public Intellectual, edited by Thomas P. Hughes and Agatha C. Hughes. New York: Oxford University Press.
    • Mitcham, Carl. 1994. Thinking Through Technology: The Path between Engineering and Philosophy. Chicago: University of Chicago Press.
    • Mumford, Lewis. 1926. “Radicalism Can’t Die.” The Jewish Daily Forward (English section, Jun 20).
    • Mumford, Lewis. 1934. Technics and Civilization. New York: Harcourt, Brace and Company.
    • Mumford, Lewis. 1938. The Culture of Cities. New York: Harcourt, Brace and Company.
    • Mumford, Lewis. 1944. The Condition of Man. New York: Harcourt, Brace and Company.
    • Mumford, Lewis. 1951. The Conduct of Life. New York: Harcourt, Brace and Company.
    • Mumford, Lewis. 1954. In the Name of Sanity. New York: Harcourt, Brace and Company.
    • Mumford, Lewis. 1959. “An Appraisal of Lewis Mumford’s Technics and Civilization (1934).” Daedalus 88:3 (Summer). 527-536.
    • Mumford, Lewis. 1962. The Story of Utopias. New York: Compass Books, Viking Press.
    • Mumford, Lewis. 1964. “Authoritarian and Democratic Technics.” Technology and Culture 5:1 (Winter). 1-8.
    • Mumford, Lewis. 1967. The Myth of the Machine, Vol. 1: Technics and Human Development. New York: Harvest/Harcourt Brace Jovanovich.
    • Mumford, Lewis. 1970. The Myth of the Machine, Vol. 2: The Pentagon of Power. New York: Harvest/Harcourt Brace Jovanovich.
    • Mumford, Lewis. 1975. Findings and Keepings: Analects for an Autobiography. New York: Harcourt, Brace and Jovanovich.
    • Mumford, Lewis. 1979. My Work and Days: A Personal Chronicle. New York: Harcourt, Brace, Jovanovich.
    • Mumford, Lewis. 1982. Sketches from Life: The Autobiography of Lewis Mumford. New York: The Dial Press.
    • Mumford, Lewis. 2010. Technics and Civilization. Chicago: The University of Chicago Press.
    • Portwood-Stacer, Laura. 2012. “Media Refusal and Conspicuous Non-consumption: The Performative and Political Dimensions of Facebook Abstention.” New Media and Society (Dec 5).
    • Postman, Neil. 1993. Technopoly: The Surrender of Culture to Technology. New York: Vintage Books.
    • Rushkoff, Douglas. 2010. Program or Be Programmed. Berkeley: Soft Skull Books.
    • Segal, Howard P. 1994a. “The Cultural Contradictions of High Tech: or the Many Ironies of Contemporary Technological Optimism.” In Technology, Pessimism, and Postmodernism, edited by Yaron Ezrahi, Everett Mendelsohn, and Howard P. Segal. Amherst: University of Massachusetts Press.
    • Segal, Howard P. 1994b. Future Imperfect: The Mixed Blessings of Technology in America. Amherst: University of Massachusetts Press.
    • Spengler, Oswald. 1932a. Form and Actuality. Vol. 1 of The Decline of the West. New York: Alfred A. Knopf.
    • Spengler, Oswald. 1932b. Perspectives of World-History. Vol. 2 of The Decline of the West. New York: Alfred A. Knopf.
    • Spengler, Oswald. 2002. Man and Technics: A Contribution to a Philosophy of Life. Honolulu: University Press of the Pacific.
    • Srnicek, Nick and Alex Williams. 2015. Inventing the Future: Postcapitalism and a World Without Work. New York: Verso Books.
    • Turkle, Sherry. 2015. Reclaiming Conversation: The Power of Talk in a Digital Age. New York: Penguin Press.
    • Turner, Fred. 2006. From Counterculture to Cyberculture: Stewart Brand, The Whole Earth Network and the Rise of Digital Utopianism. Chicago: The University of Chicago Press.
    • Van Dijck, José. 2013. The Culture of Connectivity. Oxford: Oxford University Press.
    • Watson, David. 1997. Against the Megamachine: Essays on Empire and Its Enemies. Brooklyn: Autonomedia.
    • Williams, Rosalind. 1990. “Lewis Mumford as a Historian of Technology in Technics and Civilization.” In Lewis Mumford: Public Intellectual, edited by Thomas P. Hughes and Agatha C. Hughes. New York: Oxford University Press.
    • Williams, Rosalind. 1994. “The Political and Feminist Dimensions of Technological Determinism.” In Does Technology Drive History? The Dilemma of Technological Determinism, edited by Merritt Roe Smith and Leo Marx. Cambridge: MIT Press.
    • Winner, Langdon. 1986. The Whale and the Reactor. Chicago: University of Chicago Press.
    • Winner, Langdon. 1989. Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. Cambridge: MIT Press.
    • Wojtowicz, Robert. 1996. Lewis Mumford and American Modernism: Eutopian Themes for Architecture and Urban Planning. Cambridge: Cambridge University Press.

     

  • Richard Hill — States, Governance, and Internet Fragmentation (Review of Mueller, Will the Internet Fragment?)


    a review of Milton Mueller, Will the Internet Fragment? Sovereignty, Globalization and Cyberspace (Polity, 2017)

    by Richard Hill

    ~

    Like other books by Milton Mueller, Will the Internet Fragment? is a must-read for anybody who is seriously interested in the development of Internet governance and its likely effects on other walks of life.  This is true because of, and not despite, the fact that it is a tract that does not present an unbiased view. On the contrary, it advocates a certain approach, namely a utopian form of governance which Mueller refers to as “popular sovereignty in cyberspace”.

    Mueller, Professor of Information Security and Privacy at Georgia Tech, is an internationally prominent scholar specializing in the political economy of information and communication.  The author of seven books and scores of journal articles, his work informs not only public policy but also science and technology studies, law, economics, communications, and international studies.  His books Networks and States: The Global Politics of Internet Governance (MIT Press, 2010) and Ruling the Root: Internet Governance and the Taming of Cyberspace (MIT Press, 2002) are acclaimed scholarly accounts of the global governance regime emerging around the Internet.

    Most of Will the Internet Fragment? consists of a rigorous analysis of what has been commonly referred to as “fragmentation,” showing that very different technological and legal phenomena have been conflated in ways that do not favour productive discussions.  So-called “fragmentation” is usually defined as the contrary of the desired situation in which “every device on the Internet should be able to exchange data packets with any other device that was willing to receive them” (p. 6 of the book, citing Vint Cerf).  But, as Mueller correctly points out, not all end-points of the Internet can reach all other end-points at all times, and there may be very good reasons for that (e.g. corporate firewalls, temporary network outages, etc.).  Mueller then shows how network effects (the fact that the usefulness of a network increases as it becomes larger) will tend to prevent or counter fragmentation: a subset of the network is less useful than is the whole.  He also shows how network effects can prevent the creation of alternative networks: once everybody is using a given network, why switch to an alternative that few are using?  As Mueller aptly points out (pp. 63-66), the slowness of the transition to IPv6 is due to this type of network effect.

    The key contribution of this book is that it clearly identifies the real question of interest to those who are concerned about the governance of the Internet and its impact on much of our lives.  That question (which might have been a better subtitle) is: “to what extent, if any, should Internet policies be aligned with national borders?”  (See in particular pp. 71, 73, 107, 126 and 145).  Mueller’s answer is basically “as little as possible, because supra-national governance by the Internet community is preferable”.  This answer is presumably motivated by Mueller’s view that “institutions shift power from states to society” (p. 116), which implies that “society” has little power in modern states.  But (at least ideally) states should be the expression of a society (as Mueller acknowledges on pp. 124 and 136), so it would have been helpful if Mueller had elaborated on the ways (and there are many) in which he believes states do not reflect society and in the ways in which so-called multi-stakeholder models would not be worse and would not result in a denial of democracy.

    Before commenting on Mueller’s proposal for supra-national governance, it is worth commenting on some areas where a more extensive discussion would have been warranted.  We note, however, that the book is part of a series that is deliberately intended to be short and accessible to a lay public.  So Mueller had a 30,000 word limit and tried to keep things written in a way that non-specialists and non-scholars could access.  This no doubt largely explains why he didn’t cover certain topics in more depth.

    Be that as it may, the discussion would have been improved by being placed in the long-term context of the steady decrease in national sovereignty that started in 1648, when sovereigns agreed in the Treaty of Westphalia to refrain from interfering in the religious affairs of foreign states, and that accelerated in the 20th century.  And by being placed in the short-term context of the dominance by the USA as a state (which Mueller acknowledges in passing on p. 12), and US companies, of key aspects of the Internet and its governance.  Mueller is deeply aware of the issues and has discussed them in his other books, in particular Ruling the Root and Networks and States, so it would have been nice to see the topic treated here, with references to the end of the Cold War and what appears to be the re-emergence of some sort of equivalent international tension (albeit not for the same reasons and with different effects at least for what concerns cyberspace).  It would also have been preferable to include at least some mention of the literature on the negative economic and social effects of current Internet governance arrangements.

    It is telling that, in Will the Internet Fragment?, Mueller starts his account with the 2014 NetMundial event, without mentioning that it took place in the context of the outcomes of the World Summit of the Information Society (WSIS, whose genesis, dynamics, and outcomes Mueller well analyzed in Networks and States), and without mentioning that the outcome document of the 2015 UN WSIS+10 Review reaffirmed the WSIS outcomes and merely noted that Brazil had organized NetMundial, which was, in context, an explicit refusal to note (much less to endorse) the NetMundial outcome document.

    The UN’s reaffirmation of the WSIS outcomes is significant because, as Mueller correctly notes, the real question that underpins all current discussions of Internet governance is “what is the role of states?” and the Tunis Agenda states: “Policy authority for Internet-related public policy issues is the sovereign right of States. They have rights and responsibilities for international Internet-related public policy issues.”

    Mueller correctly identifies and discusses the positive externalities created by the Internet (pp. 44-48).  It would have been better if he had noted that there are also negative externalities, in particular regarding security (see section 2.8 of my June 2017 submission to ITU’s CWG-Internet), and that the role of states includes internalizing such externalities, as well as preventing anti-competitive behavior.

    It is also telling that Mueller never explicitly mentions a principle that is no longer seriously disputed, and that was explicitly enunciated in the formal outcome of the WSIS+10 Review, namely that offline law applies equally online.  Mueller does mention some issues related to jurisdiction, but he does not place those in the context of the fundamental principle that cyberspace is subject to the same laws as the rest of the world: as Mueller himself acknowledges (p. 145), allegations of cybercrime are judged by regular courts, not cyber-courts, and if you are convicted you will pay a real fine or be sent to a real prison, not to a cyber-prison.  But national jurisdiction is not just about security (p. 74 ff.), it is also about legal certainty for commercial dealings, such as enforcement of contracts.  There are an increasing number of activities that depend on the Internet, but that also depend on the existence of known legal regimes that can be enforced in national courts.

    And what about the tension between globalization and other values such as solidarity and cultural diversity?  As Mueller correctly notes (p. 10), the Internet is globalization on steroids.  Yet cultural values differ around the world (p. 125).  How can we get the benefits of both an unfragmented Internet and local cultural diversity (as opposed to the current trend to impose US values on the rest of the world)?

    While dealing with these issues in more depth would have complicated the discussion, it also would have made it more valuable, because the call for direct rule of the Internet by and for Internet users must either be reconciled with the principle that offline law applies equally online, or be combined with a reasoned argument for the abandonment of that principle.  As Mueller so aptly puts it (p. 11): “Internet governance is hard … also because of the mismatch between its global scope and the political and legal institutions for responding to societal problems.”

    Since most laws, and almost all enforcement mechanisms are national, the influence of states on the Internet is inevitable.  Recall that the idea of enforceable rules (laws) dates back to at least 1700 BC and has formed an essential part of all civilizations in history.  Mueller correctly posits on p. 125 that a justification for territorial sovereignty is to restrict violence (only the state can legitimately exercise it), and wonders why, in that case, the entire world does not have a single government.  But he fails to note that, historically, at times much of the world was subject to a single government (think of the Roman Empire, the Mongol Empire, the Holy Roman Empire, the British Empire), and he does not explore the possibility of expanding the existing international order (treaties, UN agencies, etc.) to become a legitimate democratic world governance (which of course it is not, in part because the US does not want it to become one).  For example, a concrete step in the direction of using existing governance systems has recently been proposed by Microsoft: a Digital Geneva Convention.

    Mueller explains why national borders interfere with certain aspects of certain Internet activities (pp. 104, 106), but national borders interfere with many activities.  Yet we accept them because there doesn’t appear to be any “least worst” alternative.  Mueller does acknowledge that states have power, and rightly calls for states to limit their exercise of power to their own jurisdiction (p. 148).  But he posits that such power “carries much less weight than one would think” (p. 150), without justifying that far-reaching statement.  Indeed, Mueller admits that “it is difficult to conceive of an alternative” (p. 73), but does not delve into the details sufficiently to show convincingly how the solution that he sketches would not result in greater power by dominant private companies (and even corporatocracy or corporatism), increasing income inequality, and a denial of democracy.  For example, without the power of the state in the form of consumer protection measures, how can one ensure that private intermediaries would “moderate content based on user preferences and reports” (p. 147) as opposed to moderating content so as to maximize their profits?  Mueller assumes that there would be a sufficient level of competition, resulting in self-correcting forces and accountability (p. 129); but current trends are just the opposite: we see increasing concentration and domination in many aspects of the Internet (see section 2.11 of my June 2017 submission to ITU’s CWG-Internet) and some competition law authorities have found that some abuse of dominance has taken place.

    It seems to me that Mueller too easily concludes that “a state-centric approach to global governance cannot easily co-exist with a multistakeholder regime” (p. 117), without first exploring the nuances of multi-stakeholder regimes and the ways that they could interface with existing institutions, which include intergovernmental bodies as well as states.  As I have stated elsewhere: “The current arrangement for global governance is arguably similar to that of feudal Europe, whereby multiple arrangements of decision-making, including the Church, cities ruled by merchant-citizens, kingdoms, empires and guilds co-existed with little agreement as to which actor was actually in charge over a given territory or subject matter.  It was in this tangled system that the nation-state system gained legitimacy precisely because it offered a clear hierarchy of authority for addressing issues of the commons and provision of public goods.”

    Which brings us to another key point that Mueller does not consider in any depth: if the Internet is a global public good, then its governance must take into account the views and needs of all the world’s citizens, not just those that are privileged enough to have access at present.  But Mueller’s solution would restrict policy-making to those who are willing and able to participate in various so-called multi-stakeholder forums (apparently Mueller does not envisage a vast increase in participation and representation in these; p. 120).  Apart from the fact that that group is not a community in any real sense (a point acknowledged on p. 139), it comprises, at present, only about half of humanity, and even much of that half would not be able to participate because discussions take place primarily in English, and require significant technical knowledge and significant time commitments.

    Mueller’s path for the future appears to me to be a modern version of the International Ad Hoc Committee (IAHC), but Mueller would probably disagree, since he is of the view that the IAHC was driven by intergovernmental organizations.  In any case, the IAHC work failed to be seminal because of the unilateral intervention of the US government, well described in Ruling the Root, which resulted in the creation of ICANN, thus sparking discussions of Internet governance in WSIS and elsewhere.  While Mueller is surely correct when he states that new governance methods are needed (p. 127), it seems a bit facile to conclude that “the nation-state is the wrong unit” and that it would be better to rely largely on “global Internet governance institutions rooted in non-state actors” (p. 129), without explaining how such institutions would be democratic and representative of all of the world’s citizens.

    Mueller correctly notes (p. 150) that, historically, there have been major changes in sovereignty: emergence and falls of empires, creation of new nations, changes in national borders, etc.  But he fails to note that most of those changes were the result of significant violence and use of force.  If, as he hopes, the “Internet community” is to assert sovereignty and displace the existing sovereignty of states, how will it do so?  Through real violence?  Through cyber-violence?  Through civil disobedience (e.g. migrating to bitcoin, or implementing strong encryption no matter what governments think)?  By resisting efforts to move discussions into the World Trade Organization? Or by persuading states to relinquish power willingly?  It would have been good if Mueller had addressed, at least summarily, such questions.

    Before concluding, I note a number of more-or-less minor errors that might lead readers to imprecise understandings of important events and issues.  For example, p. 37 states that “the US and the Internet technical community created a global institution, ICANN”: in reality, the leaders of the Internet technical community obeyed the unilateral diktat of the US government (at first somewhat reluctantly and later willingly) and created a California non-profit company, ICANN.  And ICANN is not insulated from jurisdictional differences; it is fully subject to US laws and US courts.  The discussion on pp. 37-41 fails to take into account the fact that a significant portion of the DNS, the ccTLDs, is already aligned with national borders, and that there are non-national telephone numbers; the real differences between the DNS and telephone numbers are that most URLs are non-national, whereas few telephone numbers are non-national; that national telephone numbers are given only to residents of the corresponding country; and that there is an international real-time mechanism for resolving URLs that everybody uses, whereas each telephone operator has to set up its own resolving mechanism for telephone numbers.  Page 47 states that OSI was “developed by Europe-centered international organizations”, whereas actually it was developed by private companies from both the USA (including AT&T, Digital Equipment Corporation, Hewlett-Packard, etc.) and Europe working within global standards organizations (IEC, ISO, and ITU), who all happen to have secretariats in Geneva, Switzerland; whereas the Internet was initially developed and funded by an arm of the US Department of Defence and the foundation of the WWW was initially developed in a European intergovernmental organization.  
Page 100 states that “The ITU has been trying to displace or replace ICANN since its inception in 1998”; whereas a correct statement would be “While some states have called for the ITU to displace or replace ICANN since its inception in 1998, such proposals have never gained significant support and appear to have faded away recently.”  Not everybody thinks that the IANA transition was a success (p. 117), nor that it is an appropriate model for the future (pp. 132-135; 136-137), and it is worth noting that ICANN successfully withstood many challenges (p. 100) while it had a formal link to the US government; it remains to be seen how ICANN will fare now that it is independent of the US government.  ICANN and the RIRs do not have a “‘transnational’ jurisdiction created through private contracts” (p. 117); they are private entities subject to national law and the private contracts in question are also subject to national law (and enforced by national authorities, even if disputes are resolved by international arbitration).  I doubt that it is a “small step from community to nation” (p. 142), and it is not obvious why anti-capitalist movements (which tend to be internationalist) would “end up empowering territorial states and reinforcing alignment” (p. 147), when it is capitalist movements that rely on the power of territorial states to enforce national laws, for example regarding intellectual property rights.

    Despite these minor quibbles, this book, and its references (albeit not as extensive as one would have hoped), will be a valuable starting point for future discussions of internet alignment and/or “fragmentation.” Surely there will be much future discussion, and many more analyses and calls for action, regarding what may well be one of the most important issues that humanity now faces: the transition from the industrial era to the information era and the disruptions arising from that transition.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2 Review Digital Studies magazine.

    Back to the essay

  • Anthony Galluzzo — Utopia as Method, Social Science Fiction, and the Flight From Reality (Review of Frase, Four Futures)


    a review of Peter Frase, Four Futures: Life After Capitalism (Verso Jacobin Series, 2016)

    by Anthony Galluzzo

    ~

    Charlie Brooker’s acclaimed British techno-dystopian television series, Black Mirror, returned last year in a more American-friendly form. The third season, now broadcast on Netflix, opened with “Nosedive,” a satirical depiction of a recognizable near future when user-generated social media scores—on the model of Yelp reviews, Facebook likes, and Twitter retweets—determine life chances, including access to basic services, such as housing, credit, and jobs. The show follows striver Lacie Pound—played by Bryce Dallas Howard—who, in seeking to boost her solid 4.2 life score, ends up inadvertently wiping out all of her points, in the nosedive named by the episode’s title. Brooker offers his viewers a nightmare variation on a now familiar online reality, as Lacie rates every human interaction and is rated in turn, to disastrous result. And this nightmare is not so far from the case, as online reputational hierarchies increasingly determine access to precarious employment opportunities. We can see this process in today’s so-called sharing economy, in which user approval determines how many rides will go to the Uber driver, or if the room you are renting on Airbnb, in order to pay your own exorbitant rent, gets rented.

    Brooker grappled with similar themes during the show’s first season; for example, “Fifteen Million Merits” shows us a future world of human beings forced to spend their time on exercise bikes, presumably in order to generate power plus the “merits” that function as currency, even as they are forced to watch non-stop television, advertisements included. It is television—specifically a talent show—that offers an apparent escape to the episode’s protagonists. Brooker revisits these concerns—which combine anxieties regarding new media and ecological collapse in the context of a viciously unequal society—in the final episode of the new season, entitled “Hated in the Nation,” which features robotic bees, built for pollination in a world after colony collapse, that are hacked and turned to murderous use. Here is an apt metaphor for the virtual swarming that characterizes so much online interaction.

    Black Mirror corresponds to what literary critic Tom Moylan calls a “critical dystopia.” [1] Rather than a simple exercise in pessimism or anti-utopianism, Moylan argues that critical dystopias, like their utopian counterparts, also offer emancipatory political possibilities in exposing the limits of our social and political status quo, such as the naïve techno-optimism that is certainly one object of Brooker’s satirical anatomies. Brooker in this way does what Jacobin Magazine editor and social critic Peter Frase claims to do in his Four Futures: Life After Capitalism, a speculative exercise in “social science fiction” that uses utopian and dystopian science fiction as means to explore what might come after global capitalism. Ironically, Frase includes both online reputational hierarchies and robotic bees in his two utopian scenarios: one of the more dramatic, if perhaps inadvertent, ways that Frase collapses dystopian into utopian futures.

    Frase echoes the opening lines of Marx and Engels’ Communist Manifesto as he describes the twin “specters of ecological catastrophe and automation” that haunt any possible post-capitalist future. While total automation threatens to make human workers obsolete, the global planetary crisis threatens life on earth, as we have known it for the past 12,000 years or so. Frase contends that we are facing a “crisis of scarcity and a crisis of abundance at the same time,” making our moment one “full of promise and danger.” [2]

    The attentive reader can already see in this introductory framework the too-often unargued assumptions and easy dichotomies that characterize the book as a whole. For example, why is total automation plausible in the next 25 years, according to Frase, who largely supports this claim by drawing on the breathless pronouncements of a technophilic business press that has made similar promises for nearly a hundred years? And why does automation equal abundance—assuming the more egalitarian social order that Frase alternately calls “communism” or “socialism”—especially when we consider the ecological crisis Frase invokes as one of his two specters? This crisis is very much bound to an energy-intensive technosphere that is already pushing against several of the planetary boundaries that make for a habitable planet; total automation would expand this same technosphere by several orders of magnitude, requiring that much more energy, materials, and environmental sinks to absorb tomorrow’s life-sized iPhones or their corpses. Frase deliberately avoids these empirical questions—and the various debates among economists, environmental scientists and computer programmers about the feasibility of AI, the extent to which automation is actually displacing workers, and the ecological limits to technological growth, at least as technology is currently constituted—by offering his work as the “social science fiction” mentioned above, perhaps in the vein of Black Mirror. He distinguishes this method from futurism or prediction, as he writes, “science fiction is to futurism as social theory is to conspiracy theory.” [3]

    In one of his few direct citations, Frase invokes Marxist literary critic Fredric Jameson, who argues that conspiracy theory and its fictions are ideologically distorted attempts to map an elusive and opaque global capitalism: “Conspiracy, one is tempted to say, is the poor person’s cognitive mapping in the postmodern age; it is the degraded figure of the total logic of late capital, a desperate attempt to represent the latter’s system, whose failure is marked by its slippage into sheer theme and content.” [4] For Jameson, a more comprehensive cognitive map of our planetary capitalist civilization necessitates new forms of representation to better capture and perhaps undo our seemingly eternal and immovable status quo. In the words of McKenzie Wark, Jameson proposes nothing less than a “theoretical-aesthetic practice of correlating the field of culture with the field of political economy.” [5] And it is possibly with this “theoretical-aesthetic practice” in mind that Frase turns to science fiction as his preferred tool of social analysis.

    The book accordingly proceeds by way of a grid organized around the coordinates “abundance/scarcity” and “egalitarianism/hierarchy”—in another echo of Jameson, namely his structuralist penchant for Greimas squares. Hence we get abundance with egalitarianism, or “communism,” followed by its dystopian counterpart, rentism, or hierarchical plenty, in the first two futures; similarly, the final futures move from an equitable scarcity, or “socialism,” to a hierarchical and apocalyptic “exterminism.” Each of these chapters begins with a science fiction, ranging from an ostensibly communist Star Trek to the exterminationist visions presented in Orson Scott Card’s Ender’s Game, upon which Frase builds his various future scenarios. These scenarios are more often than not commentaries on present-day phenomena, such as 3D printers or the sharing economy, or advocacy for various measures, like a Universal Basic Income, which Frase presents as the key to achieving his desired communist future.

    With each of his futures anchored in a literary (or cinematic, or televisual) science fiction narrative, Frase’s speculations rely on imaginative literature, even as he avoids any explicit engagement with literary criticism and theory, such as the aforementioned work of Jameson. Jameson famously argues (see Jameson 1982, and the more elaborated later versions in texts such as Jameson 2005) that the utopian text, beginning with Thomas More’s Utopia, simultaneously offers a mystified version of dominant social relations and an imaginative space for rehearsing radically different forms of sociality. But this dialectic of ideology and utopia is absent from Frase’s analysis, where his select space operas are all good or all bad: either the Jetsons or Elysium.

    And, in a marked contrast with Jameson’s symptomatic readings, some science fiction is for Frase more equal than others when it comes to radical sociological speculation, as evinced by his contrasting views of George Lucas’s Star Wars and Gene Roddenberry’s Star Trek.  According to Frase, in “Star Wars, you don’t really care about the particularities of the galactic political economy,” while in Star Trek, “these details actually matter. Even though Star Trek and Star Wars might superficially look like similar tales of space travel and swashbuckling, they are fundamentally different types of fiction. The former exists only for its characters and its mythic narrative, while the latter wants to root its characters in a richly and logically structured social world.” [6]

    Frase here understates his investment in Star Trek, whose “structured social world” is later revealed as his ideal-type for a high tech fully automated luxury communism, while Star Wars is relegated to the role of the space fantasy foil. But surely the original Star Wars is at least an anticolonial allegory, intentionally inspired by the Vietnam War, in which a ragtag rebel alliance faces off against a technologically superior evil empire. Lucas turned to the space opera after he lost his bid to direct Apocalypse Now—which was originally based on Lucas’s own idea. According to one account of the franchise’s genesis, “the Vietnam War, which was an asymmetric conflict with a huge power unable to prevail against guerrilla fighters, instead became an influence on Star Wars. As Lucas later said, ‘A lot of my interest in Apocalypse Now carried over into Star Wars.’” [7]

    Texts—literary, cinematic, and otherwise—often combine progressive and reactionary, utopian and ideological elements. Yet it is precisely the mixed character of speculative narrative that Frase ignores throughout his analysis, reducing each of his literary examples to unequivocally good or bad, utopian or dystopian, blueprints for “life after capitalism.” Why anchor radical social analysis in various science fictions while refusing basic interpretive argument? As with so much else in Four Futures, Frase uses assumptions—asserting that Star Trek has one specific political valence or that total automation guided by advanced AI is an inevitability within 25 years—in the service of his preferred policy outcomes (and the nightmare scenarios that function as the only alternatives to those outcomes), while avoiding engagement with debates related to technology, ecology, labor, and the utopian imagination.

    Frase in this way evacuates the politically progressive and critical utopian dimensions from George Lucas’s franchise, elevating the escapist and reactionary dimensions that represent the ideological, as opposed to the utopian, pole of this fantasy. Frase similarly ignores the ideological elements of Roddenberry’s Star Trek: “The communistic quality of the Star Trek universe is often obscured because the films and TV shows are centered on the military hierarchy of Starfleet, which explores the galaxy and comes into conflict with alien races. But even this seems largely a voluntarily chosen hierarchy.” [8]

    Frase’s focus, regarding Star Trek, is almost entirely on the replicators that can make something, anything, from nothing, so that Captain Picard, from the eighties-era series reboot, orders a “cup of Earl Grey, hot,” from one of these magical machines, and immediately receives Earl Grey, hot. Frase equates our present-day 3D printers with these same replicators over the course of all his four futures, despite the fact that unlike replicators, 3D printers require inputs: they do not make matter, but shape it.

    3D printing encompasses a variety of processes in which would-be makers create an image with a computer and CAD (computer-aided design) software, which in turn provides a blueprint for the three-dimensional object to be “printed.” This requires either the addition of material—usually plastic—or the injection of that material into a mould. The most basic type of 3D printing involves heating “(plastic, glue-based) material that is then extruded through a nozzle. The nozzle is attached to an apparatus similar to a normal 2D ink-jet printer, just that it moves up and down, as well. The material is put on layer over layer. The technology is not substantially different from ink-jet printing, it only requires slightly more powerful computing electronics and a material with the right melting and extrusion qualities.” [9] This is still the most affordable and pervasive way to make objects with 3D printers—most often used to make small models and components. It is also the version of 3D printing that lends itself to celebratory narratives of post-industrial techno-artisanal home manufacture pushed by industry cheerleaders and enthusiasts alike. Yet the more elaborate versions of 3D printing—“printing” everything from complex machinery to food to human organs—rely on the more complex and expensive industrial versions of the technology that require lasers (e.g., stereolithography and selective laser sintering). Frase espouses a particular left techno-utopian line that sees the end of mass production in 3D printing—especially with the free circulation of the programs for various products outside of our intellectual property regime; this is how he distinguishes his communist utopia from the dystopian rentism that most resembles our current moment, with material abundance taken for granted. And it is this fantasy of material abundance and post-work/post-worker production that presumably appeals to Frase, who describes himself as an advocate of “enlightened Luddism.”

    This is an inadvertently ironic characterization, considering the extent to which these emancipatory claims conceal and distort the labor-discipline imperative that is central to the shape and development of this technology. As Johan Söderberg argues, “we need to put enthusiastic claims for 3D printers into perspective. One claim is that laid-off American workers can find a new source of income by selling printed goods over the Internet, which will be an improvement, as degraded factory jobs are replaced with more creative employment opportunities. But factory jobs were not always monotonous. They were deliberately made so, in no small part through the introduction of the same technology that is expected to restore craftsmanship. ‘Makers’ should be seen as the historical result of the negation of the workers’ movement.” [10]

    Söderberg draws on the work of David Noble, who outlines how the numerical control technology central to the growth of post-war factory automation was developed specifically to de-skill and dis-empower workers during the Cold War period. Unlike Frase, both of these authors foreground those social relations, which include capital’s need to more thoroughly exploit and dominate labor, embedded in the architecture of complex megatechnical systems, from factory automation to 3D printers. In collapsing 3D printers into Star Trek-style replicators, Frase avoids these questions as well as the more immediately salient issue of resource constraints that should occupy any prognostication that takes the environmental crisis seriously.

    The replicator is the key to Frase’s dream of endless abundance on the model of post-war US-style consumer affluence and the end of all human labor. But, rather than a simple blueprint for utopia, Star Trek’s juxtaposition of techno-abundance with military hierarchy and a tacitly expansionist galactic empire is significant: despite the show’s depiction of a Starfleet “prime directive” that forbids direct intervention into the affairs of the extraterrestrial civilizations encountered by the federation’s starships, the Enterprise’s crew, like its ostensibly benevolent US original, almost always intervenes. The original Star Trek is arguably a liberal iteration of Kennedy-era US exceptionalism, and reflects a moment in which relatively widespread first world abundance was underwritten by the deliberate underdevelopment, appropriation, and exploitation of various “alien races’” resources, land, and labor abroad. Abundance in fact comes from somewhere and some one.

    As historian H. Bruce Franklin argues, the original series reflects US Cold War liberalism, which combined Roddenberry’s progressive stances regarding racial inclusion within the parameters of the United States and its Starfleet doppelganger, with a tacitly anti-communist expansionist viewpoint, so that the show’s Klingon villains often serve as proxies for the Soviet menace. Franklin accordingly charts the show’s depictions of the Vietnam War, moving from a pro-war and pro-American stance to a mildly anti-war position in the wake of the Tet Offensive over the course of several episodes: “The first of these two episodes, ‘The City on the Edge of Forever‘ and ‘A Private Little War,’ had suggested that the Vietnam War was merely an unpleasant necessity on the way to the future dramatized by Star Trek. But the last two, ‘The Omega Glory‘ and ‘Let That Be Your Last Battlefield,’ broadcast in the period between March 1968 and January 1969, are so thoroughly infused with the desperation of the period that they openly call for a radical change of historic course, including an end to the Vietnam War and to the war at home.” [11]

    Perhaps Frase’s inattention to Jameson’s dialectic of ideology and utopia reflects a too-literal approach to these fantastical narratives, even as he proffers them as valid tools for radical political and social analysis. We could see in this inattention a bit too much of the fan-boy’s enthusiasm, which is also evinced by the rather narrow and backward-looking focus on post-war space operas to the exclusion of the self-consciously radical science fiction narratives of Ursula Le Guin, Samuel Delany, and Octavia Butler, among others. These writers use the tropes of speculative fiction to imagine profoundly different social relations that are the end-goal of all emancipatory movements. In place of emancipated social relations, Frase too often relies on technology, and his readings must in turn be read with these limitations in mind.

    Unlike the best speculative fiction, utopian or dystopian, Frase’s “social science fiction” too often avoids the question of social relations—including the social relations embedded in the complex megatechnical systems Frase takes for granted as neutral forces of production. He accordingly announces at the outset of his exercise: “I will make the strongest assumption possible: all need for human labor in the production process can be eliminated, and it is possible to live a life of pure leisure while machines do all the work.” [12] The science fiction trope effectively absolves Frase from engagement with the technological, ecological, or social feasibility of these predictions, even as he announces his ideological affinities with a certain version of post- and anti-work politics that breaks with orthodox Marxism and its socialist variants.

    Frase’s Jetsonian vision of the future resonates with various futurist currents that we can now see across the political spectrum, from the Silicon Valley Singularitarianism of Ray Kurzweil or Elon Musk, on the right, to various neo-Promethean currents on the left, including so-called “left accelerationism.” Frase defends his assumption as a desire “to avoid long-standing debates about post-capitalist organization of the production process.” While such a strict delimitation is permissible for speculative fiction—an imaginative exercise regarding what is logically possible, including time travel or immortality—Frase specifically offers science fiction as a mode of social analysis, which presumably entails grappling with rather than avoiding current debates on labor, automation, and the production process.

    Ruth Levitas, in her 2013 book Utopia as Method: The Imaginary Reconstitution of Society, offers a more rigorous definition of social science fiction via her eponymous “utopia as method.” This method combines sociological analysis and imaginative speculation, which Levitas defends as “holistic. Unlike political philosophy and political theory, which have been more open than sociology to normative approaches, this holism is expressed at the level of concrete social institutions and processes.” [13] But that attentiveness to concrete social institutions and practices combined with counterfactual speculation regarding another kind of human social world are exactly what is missing in Four Futures. Frase uses grand speculative assumptions—such as the inevitable rise of human-like AI or the complete disappearance of human labor, all within 25 years or so—in order to avoid significant debates that are ironically much more present in purely fictional works, such as the aforementioned Black Mirror or the novels of Kim Stanley Robinson, than in his own overtly non-fictional speculations. From the standpoint of radical literary criticism and radical social theory, Four Futures is wanting. It fails as analysis. And, if one primary purpose of utopian speculation, in its positive and negative forms, is to open an imaginative space in which wholly other forms of human social relations can be entertained, Frase’s speculative exercise also exhibits a revealing paucity of imagination.

    This is most evident in Frase’s most explicitly utopian future, which he calls “communism,” without any mention of class struggle, the collective ownership of the means of production, or any of the other elements we usually associate with “communism”; instead, 3D printers-cum-replicators will produce whatever you need whenever you need it at home, an individualizing techno-solution to the problem of labor, production, and its organization that resembles alchemy in its indifference to material reality and the scarce material inputs required by 3D printers. Frase proffers a magical vision of technology so as to avoid grappling with the question of social relations; even more than this, in the coda to this chapter, Frase reveals the extent to which current patterns of social organization and stratification remain under Frase’s “communism.” Frase begins this coda with a question: “in a communist society, what do we do all day?” To which he responds: “The kind of communism I’ve described is sometimes mistakenly construed, by both its critics and its adherents, as a society in which hierarchy and conflict are wholly absent. But rather than see the abolition of the capital-wage relation as a single shot solution to all possible social problems, it is perhaps better to think of it in the terms used by political scientist Corey Robin, as a way to ‘convert hysterical misery into ordinary unhappiness.’” [14]

    Frase goes on to argue—rightly—that the abolition of class society or wage labor will not put an end to a variety of other oppressions, such as those based in gender and racial stratification; he in this way departs from the class reductionist tendencies sometimes on view in the magazine he edits. His invocation of Corey Robin is nonetheless odd considering the Promethean tenor of Frase’s preferred futures. Robin contends that while the end of exploitation, and capitalist social relations, would remove the major obstacle to human flourishing, human beings will remain finite and fragile creatures in a finite and fragile world. Robin in this way overlaps with Fredric Jameson’s remarkable essay on Soviet writer Andrei Platonov’s Chevengur, in which Jameson writes: “Utopia is merely the political and social solution of collective life: it does not do away with the tensions and contradictions inherent in both interpersonal relations and in bodily existence itself (among them, those of sexuality), but rather exacerbates those and allows them free rein, by removing the artificial miseries of money and self-preservation [since] it is not the function of Utopia to bring the dead back to life nor abolish death in the first place.” [15] Both Jameson and Robin recall Frankfurt School thinker Herbert Marcuse’s distinction between necessary and surplus repression: while the latter encompasses all of the unnecessary miseries attendant upon a class stratified form of social organization that runs on exploitation, the former represents the necessary adjustments we make to socio-material reality and its limits.

    It is telling that while Star Trek-style replicators fall within the purview of the possible for Frase, hierarchy, like death, will always be with us, since he at least initially argues that status hierarchies will persist after the “organizing force of the capital relation has been removed” (59). Frase oscillates between describing these status hierarchies as an unavoidable, if unpleasant, necessity and a desirable counter to the uniformity of an egalitarian society. Frase illustrates this point in recalling Cory Doctorow’s Down and Out in the Magic Kingdom, a dystopian novel that depicts a world where all people’s needs are met at the same time that everyone competes for reputational “points”—called Whuffie—on the model of Facebook “likes” and Twitter retweets. Frase’s communism here resembles the world of Black Mirror described above. Frase shifts, however, from the rhetoric of necessity to qualified praise in an extended discussion of Dogecoin, an alternative currency used to tip or “transfer a small number of to another Internet user in appreciation of their witty and helpful contributions” (60). Yet Dogecoin, among all cryptocurrencies, is mostly a joke, and like many cryptocurrencies is one whose “decentralized” nature scammers have used to their own advantage, most famously in 2015. In the words of one former enthusiast: “Unfortunately, the whole ordeal really deflated my enthusiasm for cryptocurrencies. I experimented, I got burned, and I’m moving on to less gimmicky enterprises.” [16]

    But how is this dystopian scenario either necessary or desirable?  Frase contends that “the communist society I’ve sketched here, though imperfect, is at least one in which conflict is no longer based in the opposition between wage workers and capitalists or on struggles…over scarce resources” (67). His account of how capitalism might be overthrown—through a guaranteed universal income—is insufficient, while resource scarcity and its relationship to techno-abundance remains unaddressed in a book that purports to take the environmental crisis seriously. What is of more immediate interest in the case of this coda to his most explicitly utopian future is Frase’s non-recognition of how internet status hierarchies and alternative currencies are modeled on and work in tandem with capitalist logics of entrepreneurial selfhood. We might consider Pierre Bourdieu’s theory of social and cultural capital in this regard, or how these digital platforms and their ever-shifting reputational hierarchies are the foundation of what Jodi Dean calls “communicative capitalism.” [17]

    Yet Frase concludes his chapter by telling his readers that it would be a “misnomer” to call his communist future an “egalitarian configuration.” Perhaps Frase offers his fully automated Facebook utopia as a counterpoint to the Cold War-era critique of utopianism in general and communism in particular: that it leads to grey uniformity and universal mediocrity. This response—a variation on Frase’s earlier discussion of Star Trek’s “voluntary hierarchy”—accepts the premise of the Cold War anti-utopian criticisms, i.e., that the human differences that make life interesting, and generate new possibilities, require hierarchy of some kind. In other words, this exercise in utopian speculation cannot move outside the horizon of our own present-day ideological common sense.

    We can again see this tendency at the very start of the book. Is total automation an unambiguous utopia or a reflection of Frase’s own unexamined ideological proclivities, on view throughout the various futures, for high tech solutions to complex socio-ecological problems? For various flavors of deus ex machina—from 3D printers to replicators to robotic bees—in place of social actors changing the material realities that constrain them through collective action? Conversely, are the “crisis of scarcity” and the visions of ecological apocalypse Frase evokes intermittently throughout his book purely dystopian or ideological? Surely, since Thomas Malthus’s 1798 Essay on Population, apologists for various ruling orders have used the threat of scarcity and material limits to justify inequity, exploitation, and class division: poverty is “natural.” Yet, can’t we also discern in contemporary visions of apocalypse a radical desire to break with a stagnant capitalist status quo? And in the case of the environmental state of emergency, don’t we have a rallying point for constructing a very different eco-socialist order?

    Frase is a founding editor of Jacobin magazine and a long-time member of the Democratic Socialists of America. He nonetheless distinguishes himself from the reformist and electoral currents within those organizations, in addition to much of what passes for orthodox Marxism. Rather than full employment, for example, Frase calls for the abolition of work and the working class in a way that echoes more radical anti-work and post-workerist modes of communist theory. Thus, in a recent editorial published by Jacobin, entitled “What It Means to Be on the Left,” Frase differentiates himself from many of his DSA comrades in declaring that “The socialist project, for me, is about something more than just immediate demands for more jobs, or higher wages, or universal social programs, or shorter hours. It’s about those things. But it’s also about transcending, and abolishing, much of what we think defines our identities and our way of life.” Frase goes on to sketch an emphatically utopian communist horizon that includes the abolition of class, race, and gender as such. These are laudable positions, especially when we consider a new new left milieu some of whose most visible representatives dismiss race and gender concerns as “identity politics,” while redefining radical class politics as a better deal for some amorphous US working class within an apparently perennial capitalist status quo.

    Frase’s utopianism in this way represents an important counterpoint within this emergent left. Yet his book-length speculative exercise—policy proposals cloaked as possible scenarios—reveals his own enduring investments in the simple “forces vs. relations of production” dichotomy that underwrote so much of twentieth-century state socialism, with its disastrous ecological record and human cost. And this simple faith in the emancipatory potential of capitalist technology—given the right political circumstances, despite the complete absence of any account of what creating those circumstances might entail—frequently resembles a social democratic version of the Californian ideology or the kind of Silicon Valley conventional wisdom pushed by Elon Musk: a more efficient, egalitarian, and techno-utopian version of US capitalism. Frase mines various left communist currents, from post-operaismo to communization, only to evacuate these currents of their radical charge in marrying them to technocratic and technophilic reformism, whereby UBI plus “replicators” will spontaneously lead to full communism. Four Futures is in this way an important, because symptomatic, expression of what Jason Smith (2017) calls “social democratic accelerationism,” animated by a strange faith in magical machines in addition to a disturbing animus toward ecology, non-human life, and the natural world in general.

    _____

    Anthony Galluzzo earned his PhD in English Literature at UCLA. He specializes in radical transatlantic English language literary cultures of the late eighteenth- and nineteenth centuries. He has taught at the United States Military Academy at West Point, Colby College, and NYU.

    Back to the essay

    _____

    Notes

    [1] See Tom Moylan, Scraps of the Untainted Sky: Science Fiction, Utopia, Dystopia (Boulder: Westview Press, 2000).

    [2] Peter Frase, Four Futures: Life After Capitalism (London: Verso Books, 2016), 3.

    [3] Ibid., 27.

    [4] Fredric Jameson, “Cognitive Mapping,” in C. Nelson and L. Grossberg, eds., Marxism and the Interpretation of Culture (Urbana: University of Illinois Press, 1990), 6.

    [5] McKenzie Wark, “Cognitive Mapping,” Public Seminar (May 2015).

    [6] Frase, 24.

    [7] This space fantasy also exhibits the escapist, mythopoetic, and even reactionary elements Frase notes—for example, its hereditary caste of Jedi fighters and their ancient religion. As Benjamin Hufbauer observes, “in many ways, the political meanings in Star Wars were and are progressive, but in other ways the film can be described as middle-of-the-road, or even conservative.” Hufbauer, “The Politics Behind the Original Star Wars,” Los Angeles Review of Books (December 21, 2015).

    [8] Frase, 49.

    [9] Angry Workers World, “Soldering On: Report on Working in a 3D-Printer Manufacturing Plant in London,” libcom.org (March 24, 2017).

    [10] Johan Söderberg, “A Critique of 3D Printing as a Critical Technology,” P2P Foundation (March 16, 2013).

    [11] H. Bruce Franklin, “Star Trek in the Vietnam Era,” Science Fiction Studies #62 = Volume 21, Part 1 (March 1994).

    [12] Frase, 6.

    [13] Ruth Levitas, Utopia as Method: The Imaginary Reconstitution of Society (London: Palgrave Macmillan, 2013), xiv-xv.

    [14] Frase, 58.

    [15] Jameson, “Utopia, Modernism, and Death,” in Seeds of Time (New York: Columbia University Press, 1996), 110.

    [16] Kaleigh Rogers, “The Guy Who Ruined Dogecoin,” VICE Motherboard (March 6, 2015).

    [17] See Jodi Dean, Democracy and Other Neoliberal Fantasies: Communicative Capitalism and Left Politics (Durham: Duke University Press, 2009).

    _____

    Works Cited

    • Frase, Peter. 2016. Four Futures: Life After Capitalism. New York: Verso.
    • Jameson, Fredric. 1982. “Progress vs. Utopia; Or Can We Imagine The Future?” Science Fiction Studies 9:2 (July): 147-158.
    • Jameson, Fredric. 1996. “Utopia, Modernism, and Death,” in Seeds of Time. New York: Columbia University Press.
    • Jameson, Fredric. 2005. Archaeologies of the Future: The Desire Called Utopia and Other Science Fictions. London: Verso.
    • Levitas, Ruth. 2013. Utopia as Method: The Imaginary Reconstitution of Society. London: Palgrave Macmillan.
    • Moylan, Tom. 2000. Scraps of the Untainted Sky: Science Fiction, Utopia, Dystopia. Boulder: Westview Press.
    • Smith, Jason E. 2017. “Nowhere To Go: Automation Then And Now.” The Brooklyn Rail (March 1).


  • Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release


    By Audrey Watters

    ~

    This talk was delivered at Virginia Commonwealth University today as part of a seminar co-sponsored by the Departments of English and Sociology and the Media, Art, and Text PhD Program. The slides are also available here.

    Thank you very much for inviting me here to speak today. I’m particularly pleased to be speaking to those from Sociology and those from the English and those from the Media, Art, and Text departments, and I hope my talk can walk the line between and among disciplines and methods – or piss everyone off in equal measure. Either way.

    This is the last public talk I’ll deliver in 2016, and I confess I am relieved (I am exhausted!) as well as honored to be here. But when I finish this talk, my work for the year isn’t done. No rest for the wicked – ever, but particularly in the freelance economy.

    As I have done for the past six years, I will spend the rest of November and December publishing my review of what I deem the “Top Ed-Tech Trends” of the year. It’s an intense research project that usually tops out at about 75,000 words, written over the course of four to six weeks. I pick ten trends and themes in order to look closely at the recent past, the near-term history of education technology. Because of the amount of information that is published about ed-tech – the amount of information, its irrelevance, its incoherence, its lack of context – it can be quite challenging to keep up with what is really happening in ed-tech. And just as importantly, what is not happening.

    So that’s what I try to do. And I’ll boast right here – no shame in that – no one else does as in-depth or thorough a job as I do, certainly no one who is entirely independent from venture capital, corporate or institutional backing, or philanthropic funding. (Of course, if you look for those education technology writers who are independent from venture capital, corporate or institutional backing, or philanthropic funding, there is pretty much only me.)

    The stories that I write about the “Top Ed-Tech Trends” are the antithesis of most articles you’ll see about education technology that invoke “top” and “trends.” For me, still framing my work that way – “top trends” – is a purposeful rhetorical move to shed light, to subvert, to offer a sly commentary of sorts on the shallowness of what passes as journalism, criticism, analysis. I’m not interested in making quickly thrown-together lists and bullet points. I’m not interested in publishing clickbait. I am interested nevertheless in the stories – shallow or sweeping – that we tell and spread about technology and education technology, about the future of education technology, about our technological future.

    Let me be clear, I am not a futurist – even though I’m often described as “ed-tech’s Cassandra.” The tagline of my website is “the history of the future of education,” and I’m much more interested in chronicling the predictions that others make, have made about the future of education than I am in writing predictions of my own.

    One of my favorites: “Books will soon be obsolete in schools,” Thomas Edison said in 1913. Any day now. Any day now.

    Here are a couple of more recent predictions:

    “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.” – that’s Sebastian Thrun, best known perhaps for his work at Google on the self-driving car and as a co-founder of the MOOC (massive open online course) startup Udacity. The quotation is from 2012.

    And from 2013, by Harvard Business School professor, author of the book The Innovator’s Dilemma, and popularizer of the phrase “disruptive innovation,” Clayton Christensen: “In fifteen years from now, half of US universities may be in bankruptcy. In the end I’m excited to see that happen. So pray for Harvard Business School if you wouldn’t mind.”

    Pray for Harvard Business School. No. I don’t think so.

    Both of these predictions are fantasy. Nightmarish, yes. But fantasy. Fantasy about a future of education. It’s a powerful story, but not a prediction made based on data or modeling or quantitative research into the growing (or shrinking) higher education sector. Indeed, according to the latest statistics from the Department of Education – now granted, this is from the 2012–2013 academic year – there are 4726 degree-granting postsecondary institutions in the United States. A 46% increase since 1980. There are, according to another source (non-governmental and less reliable, I think), over 25,000 universities in the world. This number is increasing year-over-year as well. So to predict that the vast vast majority of these schools (save Harvard, of course) will go away in the next decade or so or that they’ll be bankrupt or replaced by Silicon Valley’s version of online training is simply wishful thinking – dangerous, wishful thinking from two prominent figures who will benefit greatly if this particular fantasy comes true (and not just because they’ll get to claim that they predicted this future).

    Here’s my “take home” point: if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

    Fantasy. Fortune-telling. Or as capitalism prefers to call it “market research.”

    “Market research” involves fantastic stories of future markets. These predictions are often accompanied with a press release touting the size that this or that market will soon grow to – how many billions of dollars schools will spend on computers by 2020, how many billions of dollars of virtual reality gear schools will buy by 2025, how many billions of dollars schools will spend on robot tutors by 2030, how many billions of dollars companies will spend on online training by 2035, how big the coding bootcamp market will be by 2040, and so on. The markets, according to the press releases, are always growing. Fantasy.

    In 2011, the analyst firm Gartner predicted that annual tablet shipments would exceed 300 million units by 2015. Half of those, the firm said, would be iPads. IDC estimates that the total number of shipments in 2015 was actually around 207 million units. Apple sold just 50 million iPads. That’s not even the best worst Gartner prediction. In October of 2006, Gartner said that Apple’s “best bet for long-term success is to quit the hardware business and license the Mac to Dell.” Less than three months later, Apple introduced the iPhone. The very next day, Apple shares hit $97.80, an all-time high for the company. By 2012 – yes, thanks to its hardware business – Apple’s stock had risen to the point that the company was worth a record-breaking $624 billion.

    But somehow, folks – including many, many in education and education technology – still pay attention to Gartner. They still pay Gartner a lot of money for consulting and forecasting services.

    People find comfort in these predictions, in these fantasies. Why?

    Gartner is perhaps best known for its “Hype Cycle,” a proprietary graphic presentation that claims to show how emerging technologies will be adopted.

    According to Gartner, technologies go through five stages: first, there is a “technology trigger.” As the new technology emerges, a lot of attention is paid to it in the press. Eventually it reaches the second stage: the “peak of inflated expectations.” So many promises have been made about this technological breakthrough. Then, the third stage: the “trough of disillusionment.” Interest wanes. Experiments fail. Promises are broken. As the technology matures, the hype picks up again, more slowly – this is the “slope of enlightenment.” Eventually the new technology becomes mainstream – the “plateau of productivity.”

    It’s not that hard to identify significant problems with the Hype Cycle, not least of which is that it’s not a cycle. It’s a curve. It’s not a particularly scientific model. It demands that technologies always move forward along it.

    Gartner says its methodology is proprietary – which is code for “hidden from scrutiny.” Gartner says, rather vaguely, that it relies on scenarios and surveys and pattern recognition to place technologies on the line. But most of the time when Gartner uses the word “methodology,” it is trying to signify “science,” and what it really means is “expensive reports you should buy to help you make better business decisions.”

    Can it really help you make better business decisions? It’s just a curve with some technologies plotted along it. The Hype Cycle doesn’t help explain why technologies move from one stage to another. It doesn’t account for technological precursors – new technologies rarely appear out of nowhere – or political or social changes that might prompt or preclude adoption. And in the end it is simply too optimistic, unreasonably so, I’d argue. No matter how dumb or useless a new technology is, according to the Hype Cycle at least, it will eventually become widely adopted. Where would you plot the Segway, for example? (In 2008, ever hopeful, Gartner insisted that “This thing certainly isn’t dead and maybe it will yet blossom.” Maybe it will, Gartner. Maybe it will.)

    And maybe this gets to the heart of why I’m not a futurist. I don’t share this belief in an increasingly technological future; I don’t believe that more technology means the world gets “more better.” I don’t believe that more technology means that education gets “more better.”

    Every year since 2004, the New Media Consortium, a non-profit organization that advocates for new media and new technologies in education, has issued its own forecasting report, the Horizon Report, naming a handful of technologies that, as the name suggests, it contends are “on the horizon.”

    Unlike Gartner, the New Media Consortium is fairly transparent about how this process works. The organization invites various “experts” to participate in the advisory board that, throughout the course of each year, works on assembling its list of emerging technologies. The process relies on the Delphi method, whittling down a long list of trends and technologies by a process of ranking and voting until six key trends, six emerging technologies remain.

    Disclosure/disclaimer: I am a folklorist by training. The last time I took a class on “methods” was, like, 1998. And admittedly I never learned about the Delphi method – what the New Media Consortium uses for this research project – until I became a scholar of education technology looking into the Horizon Report. As a folklorist, of course, I did catch the reference to the Oracle of Delphi.

    Like so much of computer technology, the roots of the Delphi method are in the military, developed during the Cold War to forecast technological developments that the military might use and that the military might have to respond to. The military wanted better predictive capabilities. But – and here’s the catch – it wanted to identify technology trends without being caught up in theory. It wanted to identify technology trends without developing models. How do you do that? You gather experts. You get those experts to consensus.

    So here is the consensus from the past twelve years of the Horizon Report for higher education. These are the technologies it has identified that are between one and five years from mainstream adoption:

    It’s pretty easy, as with the Gartner Hype Cycle, to look at these predictions and note that they are almost all wrong in some way or another.

    Some are wrong because, say, the timeline is a bit off. The Horizon Report said in 2010 that “open content” was less than a year away from widespread adoption. I think we’re still inching towards that goal – admittedly “open textbooks” have seen a big push at the federal and at some state levels in the last year or so.

    Some of these predictions are just plain wrong. Virtual worlds in 2007, for example.

    And some are wrong because, to borrow a phrase from the theoretical physicist Wolfgang Pauli, they’re “not even wrong.” Take “collaborative learning,” for example, which this year’s K–12 report posits as a mid-term trend. Like, how would you argue against “collaborative learning” as occurring – now or some day – in classrooms? As a prediction about the future, it is not even wrong.

    But wrong or right – that’s not really the problem. Or rather, it’s not the only problem even if it is the easiest critique to make. I’m not terribly concerned about the accuracy of the predictions about the future of education technology that the Horizon Report has made over the last decade. But I do wonder how these stories influence decision-making across campuses.

    What might these predictions – this history of the future – tell us about the wishful thinking surrounding education technology and about the direction that the people the New Media Consortium views as “experts” want the future to take? What can we learn about the future by looking at the history of our imagining about education’s future? What role does powerful ed-tech storytelling (also known as marketing) play in shaping that future? Because remember: to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.

    It’s both convenient and troubling, then, that these forward-looking reports act as though they have no history of their own; they purposefully minimize or erase their own past. Each year – and I think this is what irks me most – the NMC fails to look back at what it had predicted just the year before. It never revisits older predictions. It never mentions that they even exist. Gartner too removes technologies from the Hype Cycle each year with no explanation for what happened, no explanation as to why trends suddenly appear and disappear and reappear. These reports only look forward, with no history to ground their direction in.

    I understand why these sorts of reports exist, I do. I recognize that they are rhetorically useful to certain people in certain positions making certain claims about “what to do” in the future. You can write in a proposal that, “According to Gartner… blah blah blah.” Or “The Horizon Report indicates that this is one of the most important trends in coming years, and that is why we need to commit significant resources – money and staff – to this initiative.” But then, let’s be honest, these reports aren’t about forecasting a future. They’re about justifying expenditures.

    “The best way to predict the future is to invent it,” computer scientist Alan Kay once famously said. I’d wager that the easiest way is just to make stuff up and issue a press release. I mean, really. You don’t even need the pretense of a methodology. Nobody is going to remember what you predicted. Nobody is going to remember if your prediction was right or wrong. Nobody – certainly not the technology press, which is often painfully unaware of any history, near-term or long ago – is going to take you to task. This is particularly true if you make your prediction vague – like “within our lifetime” – or set your target date just far enough in the future – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Let’s consider: is there something about the field of computer science in particular – and its ideological underpinnings – that makes it more prone to encourage, embrace, espouse these sorts of predictions? Is there something about Americans’ faith in science and technology, about our belief in technological progress as a signal of socio-economic or political progress, that makes us more susceptible to take these predictions at face value? Is there something about our fears and uncertainties – and not just now, days before this Presidential Election where we are obsessed with polls, refreshing Nate Silver’s website obsessively – that makes us prone to seek comfort, reassurance, certainty from those who can claim that they know what the future will hold?

    “Software is eating the world,” investor Marc Andreessen pronounced in a Wall Street Journal op-ed in 2011. “Over the next 10 years,” he wrote, “I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.” Buy stock in technology companies was really the underlying message of Andreessen’s op-ed; this isn’t another tech bubble, he wanted to reassure investors. But many in Silicon Valley have interpreted this pronouncement – “software is eating the world” – as an affirmation and an inevitability. I hear it repeated all the time – “software is eating the world” – as though, once again, repeating things makes them true or makes them profound.

    If we believe that, indeed, “software is eating the world,” that we are living in a moment of extraordinary technological change, that we must – according to Gartner or the Horizon Report – be ever-vigilant about emerging technologies, that these technologies are contributing to uncertainty, to disruption, then it seems likely that we will demand a change in turn to our educational institutions (to lots of institutions, but let’s just focus on education). This is why this sort of forecasting is so important for us to scrutinize – to do so quantitatively and qualitatively, to look at methods and at theory, to ask who’s telling the story and who’s spreading the story, to listen for counter-narratives.

    This technological change, according to some of the most popular stories, is happening faster than ever before. It is creating an unprecedented explosion in the production of information. New information technologies, so we’re told, must therefore change how we learn – change what we need to know, how we know, how we create and share knowledge. Because of the pace of change and the scale of change and the locus of change (that is, “Silicon Valley” not “The Ivory Tower”) – again, so we’re told – our institutions, our public institutions can no longer keep up. These institutions will soon be outmoded, irrelevant. Again – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    These forecasting reports, these predictions about the future make themselves necessary through this powerful refrain, insisting that technological change is creating so much uncertainty that decision-makers need to be ever vigilant, ever attentive to new products.

    As Neil Postman and others have cautioned us, technologies tend to become mythic – unassailable, God-given, natural, irrefutable, absolute. So it is predicted. So it is written. Techno-scripture, to which we hand over a certain level of control – to the technologies themselves, sure, but just as importantly to the industries and the ideologies behind them. Take, for example, the founding editor of the technology trade magazine Wired, Kevin Kelly. His 2010 book was called What Technology Wants, as though technology is a living being with desires and drives; the title of his 2016 book, The Inevitable. We humans, in this framework, have no choice. The future – a certain flavor of technological future – is pre-ordained. Inevitable.

    I’ll repeat: I am not a futurist. I don’t make predictions. But I can look at the past and at the present in order to dissect stories about the future.

    So is the pace of technological change accelerating? Is society adopting technologies faster than it’s ever done before? Perhaps it feels like it. It certainly makes for a good headline, a good stump speech, a good keynote, a good marketing claim, a good myth. But the claim starts to fall apart under scrutiny.

    This graph comes from an article in the online publication Vox that includes a couple of those darling made-to-go-viral videos of young children using “old” technologies like rotary phones and portable cassette players – highly clickable, highly sharable stuff. The visual argument in the graph: the number of years it takes for one quarter of the US population to adopt a new technology has been shrinking with each new innovation.

    But the data is flawed. Some of the dates given for these inventions are questionable at best, if not outright inaccurate. If nothing else, it’s not so easy to pinpoint the exact moment, the exact year when a new technology came into being. There often are competing claims as to who invented a technology and when, for example, and there are early prototypes that may or may not “count.” James Clerk Maxwell did publish A Treatise on Electricity and Magnetism in 1873. Alexander Graham Bell made his famous telephone call to his assistant in 1876. Guglielmo Marconi did file his patent for radio in 1897. John Logie Baird demonstrated a working television system in 1926. The MITS Altair 8800, an early personal computer that came as a kit you had to assemble, was released in 1975. But Martin Cooper, a Motorola exec, made the first mobile telephone call in 1973, not 1983. And the Internet? The first ARPANET link was established between UCLA and the Stanford Research Institute in 1969. The Internet was not invented in 1991.

    So we can reorganize the bar graph. But it’s still got problems.

    The Internet did become more privatized, more commercialized around that date – 1991 – and thanks to companies like AOL, a version of it became more accessible to more people. But if you’re looking at when technologies became accessible to people, you can’t use 1873 as your date for electricity, you can’t use 1876 as your year for the telephone, and you can’t use 1926 as your year for the television. It took years for the infrastructure of electricity and telephony to be built, for access to become widespread; and subsequent technologies, let’s remember, have simply piggy-backed on these existing networks. Our Internet service providers today are likely telephone and TV companies; our houses are already wired for new WiFi-enabled products and predictions.

    Economic historians who are interested in these sorts of comparisons of technologies and their effects typically set the threshold at 50% – that is, how long does it take after a technology is commercialized (not simply “invented”) for half the population to adopt it. This way, you’re not only looking at the economic behaviors of the wealthy, the early-adopters, the city-dwellers, and so on (but to be clear, you are still looking at a particular demographic – the privileged half.)

    And that changes the graph again:

    How many years do you think it’ll be before half of US households have a smart watch? A drone? A 3D printer? Virtual reality goggles? A self-driving car? Will they? Will it be fewer years than 9? I mean, it would have to be if, indeed, “technology” is speeding up and we are adopting new technologies faster than ever before.

    Some of us might adopt technology products quickly, to be sure. Some of us might eagerly buy every new Apple gadget that’s released. But we can’t claim that the pace of technological change is speeding up just because we personally go out and buy a new iPhone every time Apple tells us the old model is obsolete. Removing the headphone jack from the latest iPhone does not mean “technology changing faster than ever,” nor does showing how headphones have changed since the 1970s. None of this is really a reflection of the pace of change; it’s a reflection of our disposable income and an ideology of obsolescence.

    Some economic historians like Robert J. Gordon actually contend that we’re not in a period of great technological innovation at all; instead, we find ourselves in a period of technological stagnation. The changes brought about by the development of information technologies in the last 40 years or so pale in comparison, Gordon argues (and this is from his recent book The Rise and Fall of American Growth: The US Standard of Living Since the Civil War), to those “great inventions” that powered massive economic growth and tremendous social change in the period from 1870 to 1970 – namely electricity, sanitation, chemicals and pharmaceuticals, the internal combustion engine, and mass communication. But that doesn’t jibe with “software is eating the world,” does it?

    Let’s return briefly to those Horizon Report predictions again. They certainly reflect this belief that technology must be speeding up. Every year, there’s something new. There has to be. That’s the purpose of the report. The horizon is always “out there,” off in the distance.

    But if you squint, you can see each year’s report also reflects a decided lack of technological change. Every year, something is repeated – perhaps rephrased. And look at the predictions about mobile computing:

    • 2006 – the phones in their pockets
    • 2007 – the phones in their pockets
    • 2008 – oh crap, we don’t have enough bandwidth for the phones in their pockets
    • 2009 – the phones in their pockets
    • 2010 – the phones in their pockets
    • 2011 – the phones in their pockets
    • 2012 – the phones too big for their pockets
    • 2013 – the apps on the phones too big for their pockets
    • 2015 – the phones in their pockets
    • 2016 – the phones in their pockets

    This hardly makes the case for technological speeding up, for technology changing faster than it’s ever changed before. But that’s the story that people tell nevertheless. Why?

    I pay attention to this story, as someone who studies education and education technology, because I think these sorts of predictions, these assessments about the present and the future, frequently serve to define, disrupt, destabilize our institutions. This is particularly pertinent to our schools which are already caught between a boundedness to the past – replicating scholarship, cultural capital, for example – and the demands they bend to the future – preparing students for civic, economic, social relations yet to be determined.

    But I also pay attention to these sorts of stories because there’s that part of me that is horrified at the stuff – predictions – that people pass off as true or as inevitable.

    “65% of today’s students will be employed in jobs that don’t exist yet.” I hear this statistic cited all the time. And it’s important, rhetorically, that it’s a statistic – that gives the appearance of being scientific. Why 65%? Why not 72% or 53%? How could we even know such a thing? Some people cite this as a figure from the Department of Labor. It is not. I can’t find its origin – but it must be true: a futurist said it in a keynote, and the video was posted to the Internet.

    The statistic is particularly amusing when quoted alongside one of the many predictions we’ve been inundated with lately about the coming automation of work. In 2014, The Economist asserted that “nearly half of American jobs could be automated in a decade or two.” “Before the end of this century,” Wired Magazine’s Kevin Kelly announced earlier this year, “70 percent of today’s occupations will be replaced by automation.”

    Therefore the task for schools – and I hope you can start to see where these different predictions start to converge – is to prepare students for a highly technological future, a future that has been almost entirely severed from the systems and processes and practices and institutions of the past. And if schools cannot conform to this particular future, then “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Now, I don’t believe that there’s anything inevitable about the future. I don’t believe that Moore’s Law – that the number of transistors on an integrated circuit doubles every two years and therefore computers are always exponentially smaller and faster – is actually a law. I don’t believe that robots will take, let alone need take, all our jobs. I don’t believe that YouTube has rendered school irrevocably out-of-date. I don’t believe that technologies are changing so quickly that we should hand over our institutions to entrepreneurs, privatize our public sphere for techno-plutocrats.

    I don’t believe that we should cheer Elon Musk’s plans to abandon this planet and colonize Mars – he’s predicted he’ll do so by 2026. I believe we stay and we fight. I believe we need to recognize this as ego-driven escapist evangelism.

    I believe we need to recognize that predicting the future is a form of evangelism as well. Sure, it gets couched in terms of science, but it is underwritten by global capitalism. But it’s a story – a story that then takes on these mythic proportions, insisting that it is unassailable, unverifiable, but true.

    The best way to invent the future is to issue a press release. The best way to resist this future is to recognize that, once you poke at the methodology and the ideology that underpins it, a press release is all that it is.

    A special thanks to Tressie McMillan Cottom and David Golumbia for organizing this talk. And to Mike Caulfield for always helping me hash out these ideas.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.


  • Zachary Loeb – What Technology Do We Really Need? – A Critique of the 2016 Personal Democracy Forum


    by Zachary Loeb

    ~

    Technological optimism is a dish best served from a stage. Particularly if it’s a bright stage in front of a receptive and comfortably seated audience, especially if the person standing before the assembled group is delivering carefully rehearsed comments paired with compelling visuals, and most importantly if the stage is home to a revolving set of speakers who take turns outdoing each other in inspirational aplomb. At such an event, even occasional moments of mild pessimism – or a rogue speaker who uses their fifteen minutes to frown more than smile – serve to only heighten the overall buoyant tenor of the gathering. From TED talks to the launching of the latest gizmo by a major company, the person on a stage singing the praises of technology has become a familiar cultural motif. And it is a trope that was alive and drawing from that well at the 2016 Personal Democracy Forum, the theme of which was “The Tech We Need.”

    Over the course of two days some three-dozen speakers and a similar number of panelists gathered before a rapt and appreciative audience to opine on the ways in which technology is changing democracy. The commentary largely aligned with the sanguine spirit animating the founding manifesto of the Personal Democracy Forum (PDF) – which frames the Internet as a potent force set to dramatically remake and revitalize democratic society. As the manifesto boldly decrees, “the realization of ‘Personal Democracy,’ where everyone is a full participant, is coming” and it is coming thanks to the Internet. The two days of PDF 2016 consisted of a steady flow of intelligent, highly renowned, well-meaning speakers expounding on the conference’s theme to an audience largely made up of bright caring individuals committed to answering that call. To attend an event like PDF and not feel moved, uplifted or inspired by the speakers would be a testament to an empathic failing. How can one not be moved? But when one’s eyes are glistening and when one’s heart is pounding it is worth being wary of the ideology in which one is being baptized.

    To critique an event like the Personal Democracy Forum – particularly after having actually attended it – is something of a challenge. After all, the event is truly filled with genuine people delivering (mostly) inspiring talks. There is something contagious about optimism, especially when it presents itself as measured optimism. And besides, who wants to be the jerk grousing and grumbling after an activist has just earned a standing ovation? Who wants to cross their arms and scoff that the criticism being offered is precisely the type that serves to shore up the system being criticized? Pessimists don’t often find themselves invited to the after party. Thus, insofar as the following comments – and those that have already been made – may seem prickly and pessimistic it is not meant as an attack upon any particular speaker or attendee. Many of those speakers truly were inspiring (and that is meant sincerely), many speakers really did deliver important comments (that is also meant sincerely), and the goal here is not to question the intentions of PDF’s founders or organizers. Yet prominent events like PDF are integral to shaping the societal discussions surrounding technology – and therefore it is essential to be willing to go beyond the inspirational moments and ask: what is really being said here?

    For events like PDF do serve to advance an ideology, whether they like it or not. And it is worth considering what that ideology means, even if it forces one to wipe the smile from one’s lips. And when it comes to PDF much of its ideology can be discovered simply by dissecting the theme for the 2016 conference: “The Tech We Need.”

    “The Tech”

    What do you (yes, you) think of when you hear the word technology? After all, it is a term that encompasses a great deal, which is one of the reasons why Leo Marx (1997) was compelled to describe technology as a “hazardous concept.” Eyeglasses are technology, but so too is Google Glass. A hammer is technology, and so too is a smart phone. In other words, when somebody says “technology is X” or “technology does Q” or “technology will result in R” it is worth pondering whether technology really is, does or results in those things, or if what is being discussed is really a particular type of technology in a particular context. Granted, technology remains a useful term – it is certainly a convenient shorthand (one which very many people [including me] are guilty of occasionally deploying) – but in throwing the term technology about so casually it is easy to obfuscate as much as one clarifies. At PDF it seemed as though a sentence was not complete unless it included a noun, a verb and the word technology – or “tech.” Yet what was meant by “tech” at PDF was almost always the Internet or a device linked to the Internet – and qualifying this with “almost” is perhaps overly generous.

    Thus the Internet (as such), web browsers, smart phones, VR, social networks, server farms, encryption, other social networks, apps, and websites all wound up being pleasantly melted together into “technology.” When “technology” encompasses so much, a funny thing begins to happen – people speak effusively about “technology” and only name specific elements when they want to single something out for criticism. When technology is so all-encompassing, who can possibly criticize technology? And what would it mean to criticize technology when it isn’t clear what is actually meant by the term? Yes, yes, Facebook may be worthy of mockery and smart phones can be used for surveillance, but insofar as the discussion is not about the Internet but “technology” on what grounds can one say: “this stuff is rubbish”? For even if it is clear that the term “technology” is being used in a way that focuses on the Internet, if one starts to seriously go after technology then one will inevitably be confronted with the question: “but aren’t hammers also technology?” In short, when a group talks about “the tech” but by “the tech” only means the Internet and the variety of devices tethered to it, what happens is that the Internet appears as being synonymous with technology. It isn’t just a branch or an example of technology, it is technology! Or to put this in sharper relief: at a conference about “the tech we need” held in the US in 2016 how can one avoid talking about the technology that is needed in the form of water pipes that don’t poison people? The answer: by making it so that the term “technology” does not apply to such things.

    The problem is that when “technology” is used to only mean one set of things it muddles the boundaries of what those things are, and what exists outside of them. And while it does this, it allows people to confidently place trust in a big category, “technology,” whereas they would probably have been more circumspect if they were just being asked to place trust in smart phones. After all, “the Internet will save us” doesn’t have quite the same seductive sway as “technology will save us” – even if the belief is usually put more eloquently than that. When somebody says “technology will save us” people can think of things like solar panels and vaccines – even if the only technology actually being discussed is the Internet. Here, though, it is also vital to approach the question of “the tech” with some historically grounded modesty in mind. For the belief that technology is changing the world and fundamentally altering democracy is nothing new. The history of technology (as an academic field) is filled with texts describing how a new tool was perceived as changing everything – from the compass to the telegraph to the phonograph to the locomotive to the [insert whatever piece of technology you (the reader) can think of]. And such inventions were often accompanied by an often earnest belief that they would change everything for the better! Claims that the Internet will save us invoke déjà vu for those familiar with the history of technology. Carolyn Marvin’s masterful study When Old Technologies Were New (1988) examines the way in which early electrical communications methods were seen at the time of their introduction, and near the book’s end she writes:

    Predictions that strife would cease in a world of plenty created by electrical technology were clichés breathed by the influential with conviction. For impatient experts, centuries of war and struggle testified to the failure of political efforts to solve human problems. The cycle of resentment that fueled political history could perhaps be halted only in a world of electrical abundance, where greed could not impede distributive justice. (206)

    Switch out the words “electrical technology” for “Internet technology” and the above sentences could apply to the present (and the PDF forum) without further alterations. After all, PDF was certainly a gathering of “the influential” and of “impatient experts.”

    And whenever “tech” and democracy are invoked in the same sentence it is worth pondering whether the tech is itself democratic, or whether it is simply being claimed that the tech can be used for democratic purposes. Lewis Mumford wrote at length about the difference between what he termed “democratic” and “authoritarian” technics – in his estimation “democratic” systems were small scale and manageable by individuals, whereas “authoritarian” technics represented massive systems of interlocking elements where no individual could truly assert control. While Mumford did not live to write about the Internet, his work makes it very clear that he did not consider computer technologies to belong to the “democratic” lineage. Thus, to follow from Mumford, the Internet appears as a wonderful example of an “authoritarian” technic (it is massive, environmentally destructive, turns users into cogs, runs on surveillance, cannot be controlled locally, etc…) – what PDF argues for is that this authoritarian technology can be used democratically. There is an interesting argument there, and it is one with some merit. Yet such a discussion cannot even occur in the confusing morass that one finds oneself in when “the tech” just means the Internet.

    Indeed, by meaning “the Internet” but saying “the tech,” groups like PDF (consciously or not) pull a bait and switch whereby a genuine consideration of “the tech we need” simply becomes a consideration of “the Internet we need.”

    “We”

    Attendees to the PDF conference received a conference booklet upon registration; it featured introductory remarks, a code of conduct, advertisements from sponsors, and a schedule. It also featured a fantastically jarring joke created through the wonders of, perhaps accidental, juxtaposition; however, to appreciate the joke one needed to open the booklet so as to be able to see the front and back cover simultaneously. Here is what that looked like:

    Personal Democracy Forum (2016)

    Get it?

    Hilarious.

    The cover says “The Tech We Need” emblazoned in blue over the faces of the conference speakers, and the back is an advertisement for Microsoft stating: “the future is what we make it.” One almost hopes that the layout was intentional. For, who the heck is the “we” being discussed? Is it the same “we”? Are you included in that “we”? And this is a question that can be asked of each of those covers independently of the other: when PDF says “we” who is included and who is excluded? When Microsoft says “we” who is included and who is excluded? Of course, this gets muddled even more when you consider that Microsoft was the “presenting sponsor” for PDF and that many of the speakers at PDF have funding ties to Microsoft. The reason this is so darkly humorous is that there is certainly an argument to be made that “the tech we need” has no place for mega-corporations like Microsoft, while at the same time the booklet assures that “the future is what we [Microsoft] make it.” In short: the future is what corporations like Microsoft will make it…which might be very different from the kind of tech we need.

    In considering the “we” of PDF it is worth restating that this is a gathering of well-meaning individuals who largely seem to want to approach the idea of “we” with as much inclusivity as possible. Yet defining a “we” is always fraught, speaking for a “we” is always dangerous, and insofar as one can think of PDF with any kind of “we” (or “us”) in mind the only version of the group that really emerges is one that leans heavily towards describing the group actually present at the event. And while one can certainly speak about the level (or lack) of diversity at the PDF event – the “we” who came together at PDF is not particularly representative of the world. This was also brought into interesting relief in some other amusing ways: throughout the event one heard numerous variations of the comment “we all have smart phones” – but this did not even really capture the “we” of PDF. While walking down the stairs to a session one day I clearly saw a man (wearing a conference attendee badge) fiddling with a flip-phone – I suppose he wasn’t included in the “we” of “we all have smart phones.” But I digress.

    One encountered further issues with the “we” when it came to the political content of the forum. While the booklet states, and the hosts repeated over and over, that the event was “non-partisan,” such a descriptor is pretty laughable. Those taking to the stage were a procession of people who had cut their teeth working for MoveOn, and the activists represented continually self-identified as hailing from the progressive end of the spectrum. The token conservative speaker who stepped onto the stage even made a self-deprecating joke in which she recognized that she was one of only a handful (if that) of Republicans present. So, again, who is missing from this “we”? One can be a committed leftist and genuinely believe that a figure like Donald Trump is a xenophobic demagogue – and still recognize that some of his supporters might have offered a very interesting perspective to the PDF conversation. After all, the Internet (“the tech”) has certainly been used by movements on the right as well – and used quite effectively at that. But this part of a national “we” was conspicuously absent from the forum even if they are not nearly so absent from Twitter, Facebook, or the population of people owning smart phones. Again, it is in no way, shape, or form an endorsement of anything that Trump has said to point out that when a forum is held to discuss the Internet and democracy, it is worth having the people you disagree with present.

    Another question of the “we” that is worth wrestling with revolves around the way in which events like PDF involve those who offer critical viewpoints. If, as is being argued here, PDF’s basic ideology is that the Internet (“the tech”) is improving people’s lives and will continue to do so (leading towards “personal democracy”) – it is important to note that PDF welcomed several speakers who offered accounts of some of the shortcomings of the Internet. Figures including Sherry Turkle, Kentaro Toyama, Safiya Noble, Kate Crawford, danah boyd, and Douglas Rushkoff all took the stage to deliver some critical points of view – and yet in incorporating such voices into the “we” what occurs is that these critiques function less as genuine retorts and more as safety valves that just blow off a bit of steam. Having Sherry Turkle (not to pick on her) vocally doubt the empathetic potential of the Internet just allows the next speaker (and countless conference attendees) to say “well, I certainly don’t agree with Sherry Turkle.” Indeed, one of the best ways to inoculate yourself against the charge of unthinking optimism is to periodically turn the microphone over to a critic. But perhaps the most telling thing about such critics is the way in which they wind up qualifying their comments – thus Turkle says “I’m not anti-technology,” Toyama disparages Facebook only to immediately add “I love Facebook,” and fears regarding the threat posed by AI get laughed off as the paranoia of today’s “apex predators” (rich white men) being concerned that they will lose their spot at the top of the food chain.
The environmental costs of the cloud are raised, the biased nature of algorithms is exposed – but these points are couched against a backdrop that says to the assembled technologists “do better” not “the Internet is a corporately controlled surveillance mall, and it’s overrated.” The heresies that are permitted are those that point out the rough edges that need to be rounded so that the pill can be swallowed. To return to the previous paragraph, this is not to say that PDF needs to invite John Zerzan or Chellis Glendinning to speak…but one thing that would certainly expose the weaknesses of the PDF “we” is to solicit viewpoints that genuinely come from outside of that “we.” Granted, PDF is more TED talk than FRED talk.

    And of course, and most importantly, one must think of the “we” that goes totally unheard. Yes, comments were made about the environmental cost of the cloud and passing phrases recognized mining – but PDF’s “we” seems to mainly refer to a “we” defined as those who use the Internet and Internet connected devices. Miners, those assembling high-tech devices, e-waste recyclers, and the other victims of those processes are only a hazy phantom presence. They are mentioned in passing, but not ever included fully in the “we.” PDF’s “the tech we need” is for a “we” that loves the Internet and just wants it to be even better and perhaps a bit nicer, while Microsoft’s “we” in “the future is what we make it” is a “we” that is committed to staying profitable. But amidst such statements there is an even larger group saying: “we are not being included.” That unheard “we” is the same “we” from the classic IWW song “we have fed you all for a thousand years” (Green et al 2016). And as the second line of that song rings out: “and you hail us still unfed.”

    “Need”

    When one looks out upon the world it is almost impossible not to be struck by how much is needed. People need homes, people need not just to be tolerated but to be accepted, people need food, people need peace, people need stability, people need the ability to love without being subject to oppression, people need to be free from bigotry and xenophobia, people need…this list could continue with a litany of despair until we all don sackcloth. But do people need VR headsets? Do people need Facebook or Twitter? Do those in the possession of still-functioning high-tech devices need to trade them in every eighteen months? Of course, it is important to note that technology does have an important role in meeting people’s needs – after all “shelter” refers to all sorts of technology. Yet, when PDF talks about “the tech we need” the “need” is shaded by what is meant by “the tech” and as was previously discussed that really means “the Internet.” Therefore it is fair to ask, do people really “need” an iPhone with a slightly larger screen? Do people really need Uber? Do people really need to be able to download five million songs in thirty seconds? While human history is a tale of horror, it requires a funny kind of simplistic hubris to think that World War II could have been prevented if only everybody had been connected on Facebook (to be fair, nobody at PDF was making this argument). Are today’s “needs” (and they are great) really a result of a lack of technology? It seems that we already have much of the tech that is required to meet today’s needs, and we don’t even require new ways to distribute it. Or, to put it clearly at the risk of being grotesque: people in your city are not currently going hungry because they lack the proper app.

    The question of “need” flows from both the notion of “the tech” and “we” – and as was previously mentioned it would be easy to put forth a compelling argument that “the tech we need” involves water pipes that don’t poison people with lead, but such an argument is not made when “the tech” means the Internet and when the “we” has already reached the top of Maslow’s hierarchy of needs. If one takes a more expansive view of “the tech” and “we,” then the range of what is needed changes accordingly. This issue – the way “tech,” “we,” and “need” intersect – is hardly a new concern. It is what prompted Ivan Illich (1973) to write, in Tools for Conviviality, that:

    People need new tools to work with rather than tools that ‘work’ for them. They need technology to make the most of the energy and imagination each has, rather than more well-programmed energy slaves. (10)

    Granted, it is certainly fair to retort “but who is the ‘we’ referred to by Illich” or “why can’t the Internet be the type of tool that Illich is writing about” – but here Illich’s response would be in line with the earlier reference to Mumford. Namely: accusations of technological determinism aside, maybe it’s fair to say that some technologies are oversold, and maybe the occasional emphasis on the way that the Internet helps activists serves as a patina that distracts from what is ultimately an environmentally destructive surveillance system. Is the person tethered to their smart phone being served by that device – or are they serving it? Or, to allow Illich to reply with his own words:

    As the power of machines increases, the role of persons more and more decreases to that of mere consumers. (11)

    Mindfulness apps, cameras on phones that can be used to film oppression, new ways of downloading music, programs for raising money online, platforms for connecting people on a political campaign – the user is empowered as a citizen, but this empowerment tends to involve needing the proper apps. And therefore that citizen needs the proper device to run that app, and a good wi-fi connection, and… the list goes on. Under the ideology captured in the PDF’s “the tech we need,” to participate in democracy becomes bound up with “to consume the latest in Internet innovation.” Every need can be met, provided that it is the type of need the Internet can meet. Thus the old canard “to the person with a hammer every problem looks like a nail” finds its modern equivalent in “to the person with a smart phone and a good wi-fi connection, every problem looks like one that can be solved by using the Internet.” But as for needs? Freedom from xenophobia and oppression are real needs – undoubtedly – but the Internet has done a great deal to disseminate xenophobia and prop up oppressive regimes. Continuing to double down on the Internet seems like doing the same thing “we” have been doing and expecting different results because finally there’s an “app for that!”

    It is, again, quite clear that those assembled at PDF came together with well-meaning attitudes, but as Simone Weil (2010) put it:

    Intentions, by themselves, are not of any great importance, save when their aim is directly evil, for to do evil the necessary means are always within easy reach. But good intentions only count when accompanied by the corresponding means for putting them into effect. (180)

    The ideology present at PDF emphasizes that the Internet is precisely “the means” for the realization of its attendees’ good intentions. And those who took to the stage spoke rousingly of using Facebook, Twitter, smart phones, and new apps for all manner of positive effects – but hanging in the background (sometimes more clearly than at other times) is the fact that these systems also track their users’ every move and can be used just as easily by those with very different ideas as to what “positive effects” look like. The issue of “need” is therefore ultimately a matter not simply of need but of “ends” – yet in framing things in terms of “the tech we need” what is missed is the more difficult question of what “ends” we seek. Instead “the tech we need” subtly shifts the discussion towards one of “means.” But, as Jacques Ellul recognized, the emphasis on means – especially technological ones – can just serve to confuse the discussion of ends. As he wrote:

    It must always be stressed that our civilization is one of means…the means determine the ends, by assigning us ends that can be attained and eliminating those considered unrealistic because our means do not correspond to them. At the same time, the means corrupt the ends. We live at the opposite end of the formula that ‘the ends justify the means.’ We should understand that our enormous present means shape the ends we pursue. (Ellul 2004, 238)

    The Internet and the raft of devices and platforms associated with it are a set of “enormous present means” – and in celebrating these “means” the ends begin to vanish. It ceases to be a situation where the Internet is the means to a particular end, and instead the Internet becomes the means by which one continues to use the Internet so as to correct the current problems with the Internet so that the Internet can finally achieve the… it is a snake eating its own tail.

    And its own tale.

    Conclusion: The New York Ideology

    In 1995, Richard Barbrook and Andy Cameron penned an influential article that described what they called “The Californian Ideology,” which they characterized as

    promiscuously combin[ing] the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies. This amalgamation of opposites has been achieved through a profound faith in the emancipatory potential of the new information technologies. In the digital utopia, everybody will be both hip and rich. (Barbrook and Cameron 2001, 364)

    As the placing of a state’s name in the title of the ideology suggests, Barbrook and Cameron were setting out to describe the viewpoint underpinning the firms that were (at that time) nascent in Silicon Valley. They sought to describe the mixture of hip futurism and libertarian politics that worked wonderfully in the boardroom, even if there was now somebody in the boardroom wearing a Hawaiian print shirt – or perhaps jeans and a hoodie. As companies like Google and Facebook have grown, the “Californian Ideology” has been disseminated widely, and though such companies periodically issued proclamations about not being evil and claimed that connecting the world was their goal, they maintained their utopian confidence in the “independence of cyberspace” while directing a distasteful gaze towards the “dinosaurs” of representative democracy that would dare to question their zeal. And though it is a more recent player in the game, one is hard-pressed to find a better example than Uber of the fact that this ideology is alive and well.

    The Personal Democracy Forum is not advancing the Californian Ideology. And though the event may have featured a speaker who suggested that the assembled “we” think of the “founding fathers” as start-up founders – the forum continually returned to the questions of democracy. While the Personal Democracy Forum shares the “faith in the emancipatory potential of the new information technologies” with Silicon Valley startups it seems less “free-wheeling” and more skeptical of “entrepreneurial zeal.” In other words, whereas Barbrook and Cameron spoke of “The Californian Ideology,” what PDF makes clear is that there is also a “New York Ideology,” wherein the ideological hallmark is an embrace of the positive potential of new information technologies tempered by the belief that such potential can best be reached by taming the excesses of unregulated capitalism. Where the Californian Ideology says “libertarian” the New York Ideology says “liberation.” Where the Californian Ideology celebrates capital the New York Ideology celebrates the power found in a high-tech enhanced capitol. The New York Ideology balances the excessive optimism of the Californian Ideology by acknowledging the existence of criticism, and proceeds to neutralize this criticism by making it part and parcel of the celebration of the Internet’s potential. The New York Ideology seeks to correct the hubris of the Californian Ideology by pointing out that it is precisely this hubris that turns many away from the faith in the “emancipatory potential.” If the Californian Ideology is broadcast from the stage at the newest product unveiling or celebratory conference, then the New York Ideology is disseminated from conferences like PDF and the occasional skeptical TED talk. The New York Ideology may be preferable to the Californian Ideology in a thousand ways – but ultimately it is the ideology that manifests itself in the “we” one encounters in the slogan “the tech we need.”

    Or, to put it simply, whereas the Californian Ideology is “wealth meaning,” the New York Ideology is “well-meaning.”

    Of course, it is odd and unfair to speak of either ideology as “Californian” or “New York.” California is filled with Californians who do not share in that ideology, and New York is filled with New Yorkers who do not share in that ideology either. Yet to dub what one encounters at PDF to be “The New York Ideology” is to indicate the way in which current discussions around the Internet are not solely being framed by “The Californian Ideology” but also by a parallel position wherein faith in Internet enabled solutions puts aside its libertarian sneer to adopt a democratic smile. One could just as easily call the New York Ideology the “Tech On Stage Ideology” or the “Civic Tech Ideology” – perhaps it would be better to refer to the Californian Ideology as the SV Ideology (silicon valley) and the New York Ideology as the CV ideology (civic tech). But if the Californian Ideology refers to the tech campus in Silicon Valley, then the New York Ideology refers to the foundation based in New York – one that may very well be getting much of its funding from the corporations that call Silicon Valley home. While Uber sticks with the Californian Ideology, companies like Facebook have begun transitioning to the New York Ideology so that they can have their panoptic technology and their playgrounds too. Meanwhile, new tech companies emerging in New York (like Kickstarter and Etsy) make positive proclamations about ethics and democracy while making it seem that ethics and democracy are just more consumption choices that one picks from the list of downloadable apps.

    The Personal Democracy Forum is a fascinating event. It is filled with intelligent individuals who speak of democracy with unimpeachable sincerity, and activists who really have been able to use the Internet to advance their causes. But despite all of this, the ideological emphasis on “the tech we need” remains based upon a quizzical notion of “need,” a problematic concept of “we,” and a reductive definition of “tech.” For statements like “the tech we need” are not value neutral – and even if the surface ethics are moving and inspirational, sometimes a problematic ideology is most easily disseminated when it takes care to dispense with ideologues. And though the New York Ideology is much more subtle than the Californian Ideology – and makes space for some critical voices – it remains a vehicle for disseminating an optimistic faith that a technologically enhanced Moses shall lead us into the high-tech promised land.

    The 2016 Personal Democracy Forum put forth an inspirational and moving vision of “the tech we need.”

    But when it comes to promises of technological salvation, isn’t it about time that “we” stopped getting our hopes up?

    Coda

    I confess, I am hardly free of my own ideological biases. And I recognize that everything written here may simply be dismissed by those who find it hypocritical that I composed such remarks on a computer and then posted them online. But I would say that the more we find ourselves using technology the more careful we must be that we do not allow ourselves to be used by that technology.

    And thus, I shall simply conclude by once more citing a dead, but prescient, pessimist:

    I have no illusions that my arguments will convince anyone. (Ellul 1994, 248)

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently working towards a PhD in the History and Sociology of Science department at the University of Pennsylvania. His research areas include media refusal and resistance to technology, ideologies that develop in response to technological change, and the ways in which technology factors into ethical philosophy – particularly in regard to the ways in which Jewish philosophers have written about ethics and technology. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, where an earlier version of this post first appeared, and is a frequent contributor to The b2 Review Digital Studies section.

    _____

    Works Cited

    • Barbrook, Richard and Andy Cameron. 2001. “The Californian Ideology.” In Peter Ludlow, ed., Crypto Anarchy, Cyberstates and Pirate Utopias. Cambridge: MIT Press. 363-387.
    • Ellul, Jacques. 2004. The Political Illusion. Eugene, OR: Wipf and Stock.
    • Ellul, Jacques. 1994. A Critique of the New Commonplaces. Eugene, OR: Wipf and Stock.
    • Green, Archie, David Roediger, Franklin Rosemont, and Salvatore Salerno. 2016. The Big Red Songbook: 250+ IWW Songs! Oakland, CA: PM Press.
    • Illich, Ivan. 1973. Tools for Conviviality. New York: Harper and Row.
    • Marvin, Carolyn. 1988. When Old Technologies Were New: Thinking About Electric Communication in the Late Nineteenth Century. New York: Oxford University Press.
    • Marx, Leo. 1997. “‘Technology’: The Emergence of a Hazardous Concept.” Social Research 64:3 (Fall). 965-988.
    • Mumford, Lewis. 1964. “Authoritarian and Democratic Technics.” in Technology and Culture, 5:1 (Winter). 1-8.
    • Weil, Simone. 2010. The Need for Roots. London: Routledge.
  • Ending the World as We Know It: Alexander R. Galloway in Conversation with Andrew Culp


    by Alexander R. Galloway and Andrew Culp
    ~

    Alexander R. Galloway: You have a new book called Dark Deleuze (University of Minnesota Press, 2016). I particularly like the expression “canon of joy” that guides your investigation. Can you explain what canon of joy means and why it makes sense to use it when talking about Deleuze?

    Andrew Culp, Dark Deleuze (University of Minnesota Press, 2016)

    Andrew Culp: My opening is cribbed from a letter Gilles Deleuze wrote to philosopher and literary critic Arnaud Villani in the early 1980s. Deleuze suggests that any worthwhile book must have three things: a polemic against an error, a recovery of something forgotten, and an innovation. Proceeding along those three lines, I first argue against those who worship Deleuze as the patron saint of affirmation, second I rehabilitate the negative that already saturates his work, and third I propose something he himself was not capable of proposing, a “hatred for this world.” So in an odd twist of Marx on history, I begin with those who hold up Deleuze as an eternal optimist, yet not to stand on their shoulders but to topple the church of affirmation.

    The canon portion of “canon of joy” is not unimportant. Perhaps more than any other recent thinker, Deleuze queered philosophy’s line of succession. A large portion of his books were commentaries on outcast thinkers that he brought back from exile. Deleuze was unwilling to discard Nietzsche as a fascist, Bergson as a spiritualist, or Spinoza as a rationalist. Apparently this led to lots of teasing by fellow agrégation students at the Sorbonne in the late ’40s. Further showing his strange journey through the history of philosophy, his only published monograph for nearly a decade was an anti-transcendental reading of Hume at a time in France when phenomenology reigned. Such an itinerant path made it easy to take Deleuze at his word as a self-professed practitioner of “minor philosophy.” Yet look at Deleuze’s outcasts now! His initiation into the pantheon even bought admission for relatively forgotten figures such as sociologist Gabriel Tarde. Deleuze’s popularity thus raises a thorny question for us today: how do we continue the minor Deleuzian line when Deleuze has become a “major thinker”? For me, the first step is to separate Deleuze (and Guattari) from his commentators.

    I see two popular joyous interpretations of Deleuze in the canon: unreconstructed Deleuzians committed to liberating flows, and realists committed to belief in this world. The first position repeats the language of molecular revolution, becoming, schizos, transversality, and the like. Some even use the terms without transforming them! The resulting monotony seals Deleuze and Guattari’s fate as a wooden tongue used by people still living in the ’80s. Such calcification of their concepts is an especially grave injustice because Deleuze quite consciously shifted terminology from book to book to avoid this very outcome. Don’t get me wrong, I am deeply indebted to the early work on Deleuze! I take my insistence on the Marxo-Freudian core of Deleuze and Guattari from one of their earliest Anglophone commentators, Eugene Holland, who I sought out to direct my dissertation. But for me, the Tiqqun line “the revolution was molecular, and so was the counter-revolution” perfectly depicts the problem of advocating molecular politics. Why? Today’s techniques of control are now molecular. The result is that control societies have emptied the molecular thinker’s only bag of tricks (Bifo is a good test case here), which leaves us with a revolution that only goes one direction: backward.

    I am equally dissatisfied by realist Deleuzians who delve deep into the early strata of A Thousand Plateaus and away from the “infinite speed of thought” that motivates What is Philosophy? I’m thinking of the early incorporations of dynamical systems theory, the ’90s astonishment over everything serendipitously looking like a rhizome, the mid-00s emergence of Speculative Realism, and the ongoing “ontological” turn. Anyone who has read Manuel DeLanda will know this exact dilemma of materiality versus thought. He uses examples that slow down Deleuze and Guattari’s concepts to something easily graspable. In his first book, he narrates history as a “robot historian,” and in A Thousand Years of Nonlinear History, he literally traces the last thousand years of economics, biology, and language back to clearly identifiable technological inventions. Such accounts are dangerously compelling due to their lucidity, but they come at a steep cost: android realism dispenses with Deleuze and Guattari’s desiring subject, which is necessary for a theory of revolution by way of the psychoanalytic insistence on the human ability to overcome biological instincts (e.g. Freud’s Instincts and their Vicissitudes and Beyond the Pleasure Principle). Realist interpretations of Deleuze conceive of the subject as fully of this world. And with it, thought all but evaporates under the weight of this world. Deleuze’s Hume book is an early version of this criticism, but the realists have not taken heed. Whether emergent, entangled, or actant, strong realists ignore Deleuze and Guattari’s point in What is Philosophy? that thought always comes from the outside at a moment when we are confronted by something so intolerable that the only thing remaining is to think.

    Galloway: The left has always been ambivalent about media and technology, sometimes decrying its corrosive influence (Frankfurt School), sometimes embracing its revolutionary potential (hippy cyberculture). Still, you ditch technical “acceleration” in favor of “escape.” Can you expand your position on media and technology, by way of Deleuze’s notion of the machinic?

    Culp: Foucault says that an episteme can be grasped as we are leaving it. Maybe we can finally catalogue all of the contemporary positions on technology? The romantic (computer will never capture my soul), the paranoiac (there is an unknown force pulling the strings), the fascist-pessimist (computers will control everything)…

    Deleuze and Guattari are certainly not allergic to technology. My favorite quote actually comes from the Foucault book in which Deleuze says that “technology is social before it is technical” (6). The lesson we can draw from this is that every social formation draws out different capacities from any given technology. An easy example is from the nomads Deleuze loved so much. Anarcho-primitivists speculate that humans learn oppression with the domestication of animals and settled agriculture during the Neolithic Revolution. Diverging from the narrative, Deleuze celebrates the horse people of the Eurasian steppe described by Arnold Toynbee. Threatened by forces that would require them to change their habitat, Toynbee says, they instead chose to change their habits. The subsequent domestication of the horse did not sow the seeds of the state, which was actually done by those who migrated from the steppes after the last Ice Age to begin wet rice cultivation in alluvial valleys (for more, see James C. Scott’s The Art of Not Being Governed). On the contrary, the new relationship between men and horses allowed nomadism to achieve a higher speed, which was necessary to evade the raiding-and-trading used by padi-states to secure the massive foreign labor needed for rice farming. This is why the nomad is “he who does not move” and not a migrant (A Thousand Plateaus, 381).

    Accelerationism attempts to overcome the capitalist opposition of human and machine through the demand for full automation. As such, it peddles a technological Proudhonism that believes one can select what is good about technology and just delete what is bad. The Marxist retort is that development proceeds by its bad side. So instead of flashy things like self-driving cars, the real dot-communist question is: how will Amazon automate the tedious, low-paying jobs that computers are no good at? What happens to the data entry clerks, abusive-content managers, or help desk technicians? Until it figures out who will empty the recycle bin, accelerationism is only a socialism of the creative class.

    The machinic is more than just machines–it approaches technology as a question of organization. The term is first used by Guattari in a 1968 paper titled “Machine and Structure” that he presented to Lacan’s Freudian School of Paris, a paper that would jumpstart his collaboration with Deleuze. He argues for favoring the machine over the structure. Structures transform parts of a whole by exchanging or substituting particularities so that every part shares in a general form (in other words, the production of isomorphism). An easy political example is the Leninist Party, which mediates particularized private interests to form them into the general will of a class. Machines instead treat the relationship between things as a problem of communication. The result is the “control and communication” of Norbert Wiener’s cybernetics, which connects distinct things in a circuit instead of implanting a general logic. The word “machine” never really caught on but the concept has made inroads in the social sciences, where actor-network theory, game theory, behaviorism, systems theory, and other cybernetic approaches have gained acceptance.

    Structure or machine, each engenders a different type of subjectivity, and each realizes a different model of communication. The two are found in A Thousand Plateaus, where Deleuze and Guattari note two different types of state subject formation: social subjection and machinic enslavement (456-460). While it only takes up a few short pages, the distinction is essential to Bernard Stiegler’s work and has been expertly elaborated by Maurizio Lazzarato in the book Signs and Machines. We are all familiar with molar social subjection synonymous with “agency”–it is the power that results from individuals bridging the gap between themselves and broader structures of representation, social roles, and institutional demands. This subjectivity is well outlined by Lacanians and other theorists of the linguistic turn (Virno, Rancière, Butler, Agamben). Missing from their accounts is machinic enslavement, which treats people as simply cogs in the machine. Such subjectivity is largely overlooked because it bypasses existential questions of recognition or self-identity. This is because machinic enslavement operates at the level of the infra-social or pre-individual through the molecular operators of unindividuated affects, sensations, desires not assigned to a subject. Offering a concrete example, Deleuze and Guattari reference Mumford’s megamachines of surplus societies that create huge landworks by treating humans as mere constituent parts. Capitalism revived the megamachine in the sixteenth century, and more recently, we have entered the “third age” of enslavement marked by the development of cybernetic and informational machines. In place of the pyramids are technical machines that use humans at places in technical circuits where computers are incapable or too costly, e.g. Amazon’s Mechanical Turk.

    I should also clarify that not all machines are bad. Rather, Dark Deleuze only trusts one kind of machine, the war machine. And war machines follow a single trajectory–a line of flight out of this world. A major task of the war machine conveniently aligns with my politics of techno-anarchism: to blow apart the networks of communication created by the state.

    Galloway: I can’t resist a silly pun, cannon of joy. Part of your project is about resisting a certain masculinist tendency. Is that a fair assessment? How do feminism and queer theory influence your project?

    Culp: Feminism is hardwired into the tagline for Dark Deleuze through a critique of emotional labor and the exhibition of bodies–“A revolutionary Deleuze for today’s digital world of compulsory happiness, decentralized control, and overexposure.” The major thread I pull through the book is a materialist feminist one: something intolerable about this world is that it demands we participate in its accumulation and reproduction. So how about a different play on words: Sara Ahmed’s feminist killjoy, who refuses the sexual contract that requires women to appear outwardly grateful and agreeable? Or better yet, Joy Division? The name would associate the project with post-punk, its conceptual attack on the mainstream, and the band’s nod to the sexual labor depicted in the novella House of Dolls.

    My critique of accumulation is also a media argument about connection. The most popular critics of ‘net culture are worried that we are losing ourselves. So on the one hand, we have Sherry Turkle, who is worried that humans are becoming isolated in a state of being “alone-together”; and on the other, there is Bernard Stiegler, who thinks that the network supplants important parts of what it means to be human. I find this kind of critique socially conservative. It also victim-blames those who use social media the most. Recall the countless articles attacking women who take selfies as part of a self-care regimen or teens who creatively evade parental authority. I’m more interested in the critique of early ’90s ‘net culture and its enthusiasm for the network. In general, I argue that network-centric approaches are now the dominant form of power. As such, I am much more interested in how the rhizome prefigures the digitally-coordinated networks of exploitation that have made Apple, Amazon, and Google into the world’s most powerful corporations. While not a feminist issue on its face, it’s easy to see feminism’s relevance when we consider the gendered division of labor that usually makes women the employees of choice for low-paying jobs in electronics manufacturing, call centers, and other digital industries.

    Lastly, feminism and queer theory explicitly meet in my critique of reproduction. A key argument of Deleuze and Guattari in Anti-Oedipus is the auto-production of the real, which is to say, we already live in a “world without us.” My argument is that we need to learn how to hate some of the things it produces. Of course, this is a reworked critique of capitalist alienation and exploitation, which is a system that gives to us (goods and the wage) only because it already stole them behind our back (restriction from the means of subsistence and surplus value). Such ambivalence is the everyday reality of the maquiladora worker who needs her job but may secretly hope that all the factories burn to the ground. Such degrading feelings are the result of the compromises we make to reproduce ourselves. In the book, I give voice to them by fusing together David Halperin and Valerie Traub’s notion of gay shame acting as a solvent to whatever binds us to identity and Deleuze’s shame at not being able to prevent the intolerable. But feeling shame is not enough. To complete the argument, we need to draw out the queer feminist critique of reproduction latent in Marx and Freud. Détourning an old phrase: direct action begins at the point of reproduction. My first impulse is to rely on the punk rock attitude of Lee Edelman and Paul Preciado’s indictment of reproduction. But you are right that they have their masculinist moments, so what we need is something more post-punk–a little less aggressive and a lot more experimental. Hopefully Dark Deleuze is that.

    Galloway: Edelman’s “fuck Annie” is one of the best lines in recent theory. “Fuck the social order and the Child in whose name we’re collectively terrorized; fuck Annie; fuck the waif from Les Mis; fuck the poor, innocent kid on the Net; fuck Laws both with capital ls and small; fuck the whole network of Symbolic relations and the future that serves as its prop” (No Future, 29). Your book claims, in essence, that the Fuck Annies are more interesting than the Aleatory Materialists. But how can we escape the long arm of Lucretius?

    Culp: My feeling is that the politics of aleatory materialism remains ambiguous. Beyond the literal meaning of “joy,” there are important feminist takes on the materialist Spinoza of the encounter that deserve our attention. Isabelle Stengers’s work is among the most comprehensive, though the two most famous are probably Donna Haraway’s cyborg feminism and Karen Barad’s agential realism. Curiously, while New Materialism has been quite a boon for the art and design world, its socio-political stakes have never been more uncertain. One would hope that appeals to matter would lend philosophical credence to topical events such as #blacklivesmatter. Yet for many, New Materialism has simply led to a new formalism focused on material forms or realist accounts of physical systems meant to eclipse the “epistemological excesses” of post-structuralism. This divergence was not lost on commentators in the most recent issue of October, which functioned as a sort of referendum on New Materialism. On the one hand, the issue included a generous accounting of the many avenues artists have taken in exploring various “new materialist” directions. Of those, I most appreciated Mel Chen’s reminder that materialism cannot serve as a “get out of jail free card” on the history of racism, sexism, ableism, and speciesism. On the other, it included the first sustained attack on New Materialism by fellow travelers. Certainly the New Materialist stance of seeing the world from the perspective of “real objects” can be valuable, but only if it does not exclude old materialism’s politics of labor. I draw from Deleuzian New Materialist feminists in my critique of accumulation and reproduction, but only after short-circuiting their world-building. This is a move I learned from Sue Ruddick, whose Theory, Culture & Society article on the affect of the philosopher’s scream is an absolute tour de force.
And then there is Graham Burnett’s remark that recent materialisms are like “Etsy kissed by philosophy.” The phrase perfectly crystallizes the controversy, but it might be too hot to touch for at least a decade…

    Galloway: Let’s focus more on the theme of affirmation and negation, since the tide seems to be changing. In recent years, a number of theorists have turned away from affirmation toward a different set of vectors such as negation, eclipse, extinction, or pessimism. Have we reached peak affirmation?

    Culp: We should first nail down what affirmation means in this context. There is the metaphysical version of affirmation, such as Foucault’s proud title as a “happy positivist.” In this declaration in Archaeology of Knowledge and “The Order of Discourse,” he is not claiming to be a logical positivist. Rather, Foucault is distinguishing his approach from Sartrean totality, transcendentalism, and genetic origins (his secondary target being the reading-between-the-lines method of Althusserian symptomatic reading). He goes on to formalize this disagreement in his famous statement on the genealogical method, “Nietzsche, Genealogy, History.” Despite being an admirer of Sartre, Deleuze shares this affirmative metaphysics with Foucault, which commentators usually describe as an alternative to the Hegelian system of identity, contradiction, determinate negation, and sublation. Nothing about this “happily positivist” system forces us to be optimists. In fact, it only raises the stakes for locating how all the non-metaphysical senses of the negative persist.

    Affirmation could be taken to imply a simple “more is better” logic as seen in Assemblage Theory and Latourian Compositionalism. Behind this logic is a principle of accumulation that lacks a theory of exploitation and fails to consider the power of disconnection. The Spinozist definition of joy does little to dispel this myth, but it is not like either project has revolutionary political aspirations. I think we would be better served to follow the currents of radical political developments over the last twenty years, which have been following an increasingly negative path. One part of the story is a history of failure. The February 15, 2003 global demonstration against the Iraq War was the largest protest in history but had no effect on the course of the war. More recently, the election of democratic socialist governments in Europe has done little to stave off austerity, even as economists publicly describe it as a bankrupt model destined to deepen the crisis. I actually find hope in the current circuit of struggle and think that its lack of alter-globalization world-building aspirations might be a plus. My cues come from the anarchist black bloc and those of the post-Occupy generation who would rather not pose any demands. This is why I return to the late Deleuze of the “control societies” essay and his advice to scramble the codes, to seek out spaces where nothing needs to be said, and to establish vacuoles of non-communication. Those actions feed the subterranean source of Dark Deleuze‘s darkness and the well from which comes hatred, cruelty, interruption, un-becoming, escape, cataclysm, and the destruction of worlds.

    Galloway: Does hatred for the world do a similar work for you that judgment or moralism does in other writers? How do we avoid the more violent and corrosive forms of hate?

    Culp: Writer Antonin Artaud’s attempt “to have done with the judgment of God” plays a crucial role in Dark Deleuze. Not just any specific authority but whatever gods are left. The easiest way to summarize this is “the three deaths.” Deleuze already makes note of these deaths in the preface to Difference and Repetition, but it only became clear to me after I read Gregg Flaxman’s Gilles Deleuze and the Fabulation of Philosophy. We all know of Nietzsche’s Death of God. With it, Nietzsche notes that God no longer serves as the central organizing principle for us moderns. Important to Dark Deleuze is Pierre Klossowski’s Nietzsche, who is part of a conspiracy against all of humanity. Why? Because even as God is dead, humanity has replaced him with itself. Next comes the Death of Man, which we can lay at the feet of Foucault. More than any other text, The Order of Things demonstrates how the birth of modern man was an invention doomed to fail. So if that death is already written in sand about to be washed away, then what comes next? Here I turn to the world, worlding, and world-building. It seems obvious when looking at the problems that plague our world: global climate change, integrated world capitalism, and other planet-scale catastrophes. We could try to deal with each problem one by one. But why not pose an even more radical proposition? What if we gave up on trying to save this world? We are already awash in sci-fi that tries to do this, though most of it is incredibly socially conservative. Perhaps now is the time for thinkers like us to catch up. Fragments of Deleuze already lay out the terms of the project. He ends the preface to Difference and Repetition by assigning philosophy the task of writing apocalyptic science fiction. Deleuze’s book opens with lightning across the black sky and ends with the world swelling into a single ocean of excess. Dark Deleuze collects those moments and names them the Death of This World.

    Galloway: Speaking of climate change, I’m reminded how ecological thinkers can be very religious, if not in word then in deed. Ecologists like to critique “nature” and tout their anti-essentialist credentials, while at the same time promulgating tellurian “change” as necessary, even beneficial. Have they simply replaced one irresistible force with another? But your “hatred of the world” follows a different logic…

    Culp: Irresistible indeed! Yet it is very dangerous to let the earth have the final say. Not only does psychoanalysis teach us that it is necessary to buck the judgment of nature; the is/ought distinction at the philosophical core of most ethical thought also refuses to let natural fact define the good. I introduce hatred to develop a critical distance from what is, and, as such, hatred is also a reclamation of the future in that it is a refusal to allow what-is to prevail over what-could-be. Such an orientation to the future is already in Deleuze and Guattari. What else is de-territorialization? I just give it a name. They have another name for what I call hatred: utopia.

    Speaking of utopia, Deleuze and Guattari’s definition of utopia in What is Philosophy? as simultaneously now-here and no-where is often used by commentators to justify odd compromise positions with the present state of affairs. The immediate reference is Samuel Butler’s 1872 book Erewhon, a backward spelling of nowhere, which Deleuze also references across his other work. I would imagine most people would assume it is a utopian novel in the vein of Edward Bellamy’s Looking Backward. And Erewhon does borrow from the conventions of utopian literature, but only to skewer them with satire. A closer examination reveals that the book is really a jab at religion, Victorian values, and the British colonization of New Zealand! So if there is anything that the now-here of Erewhon has to contribute to utopia, it is that the present deserves our ruthless criticism. So instead of being a simultaneous now-here and no-where, hatred follows from Deleuze and Guattari’s suggestion in A Thousand Plateaus to “overthrow ontology” (25). Therefore, utopia is only found in Erewhon by taking leave of the now-here to get to no-where.

    Galloway: In Dark Deleuze you talk about avoiding “the liberal trap of tolerance, compassion, and respect.” And you conclude by saying that the “greatest crime of joyousness is tolerance.” Can you explain what you mean, particularly for those who might value tolerance as a virtue?

    Culp: Among the many followers of Deleuze today, there are a number of liberal Deleuzians. Perhaps the biggest stronghold is in political science, where there is a committed group of self-professed radical liberals. Another strain bridges Deleuze with the liberalism of John Rawls. I was a bit shocked to discover both of these approaches, but I suppose it was inevitable given liberalism’s ability to assimilate nearly any form of thought.

    Herbert Marcuse recognized “repressive tolerance” as the incredible power of liberalism to justify the violence of positions clothed as neutral. The examples Marcuse cites are governments that claim to respect democratic liberties because they allow political protest, even as they ignore protesters by labeling them a special interest group. For those of us who have seen university administrations calmly collect student demands, set up dead-end committees, and slap pictures of protestors on promotional materials as a badge of diversity, it should be no surprise that Marcuse dedicated the essay to his students. An important elaboration on repressive tolerance is Wendy Brown’s Regulating Aversion. She argues that imperialist US foreign policy drapes itself in tolerance discourse. This helps diagnose why liberal feminist groups lined up behind the US invasion of Afghanistan (the Taliban is patriarchal) and explains how a mere utterance of ISIS inspires even the most progressive liberals to support outrageous war budgets.

    Because of their commitment to democracy, Brown and Marcuse can only qualify liberalism’s universal procedures for an ethical subject. Each criticizes certain uses of tolerance but does not want to dispense with it completely. Deleuze’s hatred of democracy makes it much easier for me. Instead, I embrace the perspective of a communist partisan because communists fight from a different structural position than that of the capitalist.

    Galloway: Speaking of structure and position, you have a section in the book on asymmetry. Most authors avoid asymmetry, instead favoring concepts like exchange or reciprocity. I’m thinking of texts on “the encounter” or “the gift,” not to mention dialectics itself as a system of exchange. Still you want to embrace irreversibility, incommensurability, and formal inoperability–why?

    Culp: There are a lot of reasons to prefer asymmetry, but for me, it comes down to a question of political strategy.

    First, a little background. Deleuze and Guattari’s critique of exchange is important to Anti-Oedipus, where it is staged through a challenge to Claude Lévi-Strauss. This is why they shift from the traditional Marxist analysis of the mode of production to an anthropological study of anti-production, for which they use the work of Pierre Clastres and Georges Bataille to outline non-economic forms of power that prevented the emergence of capitalism. Contemporary anthropologists have renewed this line of inquiry, for instance, Eduardo Viveiros de Castro, who argues in Cannibal Metaphysics that cosmologies differ radically enough between peoples that they essentially live in different worlds. The cannibal, he shows, is not the subject of a mode of production but a mode of predation.

    Those are not the stakes that interest me the most. Consider instead the consequence of ethical systems built on the gift and political systems of incommensurability. The ethical approach is exemplified by Derrida, whose responsibility to the other draws from the liberal theological tradition of accepting the stranger. While there is distance between self and other, it is a difference that is bridged through the democratic project of radical inclusion, even if such incorporation can only be aporetically described as a necessary-impossibility. In contrast, the politics of asymmetry uses incommensurability to widen the chasm opened by difference. It offers a strategy for generating antagonism without the formal equivalence of dialectics and provides an image of revolution based on fundamental transformation. The former can be seen in the inherent difference between the perspective of labor and the perspective of capital, whereas the latter is a way out of what Guy Debord calls “a perpetual present.”

    Galloway: You are exploring a “dark” Deleuze, and I’m reminded how the concepts of darkness and blackness have expanded and interwoven in recent years in everything from afro-pessimism to black metal theory (which we know is frighteningly white). How do you differentiate between darkness and blackness? Or perhaps that’s not the point?

    Culp: The writing on Deleuze and race is uneven. A lot of it can be blamed on the imprecise definition of becoming. The most vulgar version of becoming is embodied by neoliberal subjects who undergo an always-incomplete process of coming more into being (finding themselves, identifying their capacities, commanding their abilities). The molecular version is a bit better in that it theorizes subjectivity as developing outside of or in tension with identity. Yet the prominent uses of becoming and race rarely escaped the postmodern orbit of hybridity, difference, and inclusive disjunction–the White Man’s face as master signifier, miscegenation as anti-racist practice, “I am all the names of history.” You are right to mention afro-pessimism, as it cuts a new way through the problem. As I’ve written elsewhere, Frantz Fanon describes being caught between “infinity and nothingness” in his famous chapter on the fact of blackness in Black Skin, White Masks. The position of infinity is best championed by Fred Moten, whose black fugitive is the effect of an excessive vitality that has survived five hundred years of captivity. He catches fleeting moments of it in performances of jazz, art, and poetry. This position fits well with the familiar figures of Deleuzo-Guattarian politics: the itinerant nomad, the foreigner speaking in a minor tongue, the virtuoso trapped in-between lands. In short: the bastard combination of two or more distinct worlds. In contrast, afro-pessimism is not the opposite of the black radical tradition but its outside. According to afro-pessimism, the definition of blackness is nothing but the social death of captivity. Remember the scene of subjection mentioned by Fanon? During that nauseating moment he is assailed by a whole series of cultural associations attached to him by strangers on the street. “I was battered down by tom-toms, cannibalism, intellectual deficiency, fetishism, racial defects, slave-ships, and above all else, above all: ‘Sho’ good eatin’” (112). The lesson that afro-pessimism draws from this scene is that cultural representations of blackness only reflect back the interior of white civil society. The conclusion is that combining social death with a culture of resistance, such as the one embodied by Fanon’s mentor Aimé Césaire, is a trap that leads only back to whiteness. Afro-pessimism thus follows the alternate route of darkness. It casts a line to the outside through an un-becoming that dissolves the identity we are given as a token for the shame of being a survivor.

    Galloway: In a recent interview the filmmaker Haile Gerima spoke about whiteness as “realization.” By this he meant both realization as such–self-realization, the realization of the self, the ability to realize the self–but also the more nefarious version as “realization through the other.” What’s astounding is that one can replace “through” with almost any other preposition–for, against, with, without, etc.–and the dynamic still holds. Whiteness is the thing that turns everything else, including black bodies, into fodder for its own realization. Is this why you turn away from realization toward something like profanation? And is darkness just another kind of whiteness?

    Culp: Perhaps blackness is to the profane as darkness is to the outside. What is black metal if not a project of political-aesthetic profanation? But as other commentators have pointed out, the politics of black metal is ultimately telluric (e.g. Benjamin Noys’s “‘Remain True to the Earth!’: Remarks on the Politics of Black Metal”). The left wing of black metal is anarchist anti-civ and the right is fascist-nativist. Both trace authority back to the earth that they treat as an ultimate judge usurped by false idols.

    The process follows what Badiou calls “the passion for the real,” his diagnosis of the Twentieth Century’s obsession with true identity, false copies, and inauthentic fakes. His critique equally applies to Deleuzian realists. This is why I think it is essential to return to Deleuze’s work on cinema and the powers of the false. One key example is Orson Welles’s F for Fake. Yet my favorite is the noir novel, which he praises in “The Philosophy of Crime Novels.” The noir protagonist never follows in the footsteps of Sherlock Holmes or other classical detectives, whose search for the real happens by sniffing out the truth through a scientific attunement of the senses. Rather, the dirty streets lead the detective down enough dead ends that he proceeds by way of a series of errors. What noir reveals is that crime and the police have “nothing to do with a metaphysical or scientific search for truth” (82). The truth is rarely decisive in noir because breakthroughs only come by way of “the great trinity of falsehood”: informant-corruption-torture. The ultimate gift of noir is a new vision of the world whereby honest people are just dupes of the police because society is fueled by falsehood all the way down.

    To specify the descent to darkness, I use darkness to signify the outside. The outside has many names: the contingent, the void, the unexpected, the accidental, the crack-up, the catastrophe. The dominant affects associated with it are anticipation, foreboding, and terror. To give a few examples, H. P. Lovecraft’s scariest monsters are those so alien that characters cannot describe them with any clarity, Maurice Blanchot’s disaster is the Holocaust as well as any other event so terrible that it interrupts thinking, and Don DeLillo’s “airborne toxic event” is an incident so foreign that it can only be described in the most banal terms. Of Deleuze and Guattari’s many different bodies without organs, one of the conservative varieties comes from a Freudian model of the psyche as a shell meant to protect the ego from outside perturbations. We all have these protective barriers made up of habits that help us navigate an uncertain world–that is the purpose of Guattari’s ritornello, that little ditty we whistle to remind us of the familiar even when we travel to strange lands. There are two parts that work together, the refrain and the strange land. The refrains have only grown yet the journeys seem to have ended.

    I’ll end with an example close to my own heart. Deleuze and Guattari are being used to support new anarchist “pre-figurative politics,” which is defined as seeking to build a new society within the constraints of the now. The consequence is that the political horizon of the future gets collapsed into the present. This is frustrating for someone like me, who holds out hope for a revolutionary future that would end the million tiny humiliations that make up everyday life. I like J. K. Gibson-Graham’s feminist critique of political economy, but community currencies, labor time banks, and worker’s coops are not my image of communism. This is why I have drawn on the gothic for inspiration. A revolution that emerges from the darkness holds the apocalyptic potential of ending the world as we know it.

    Works Cited

    • Ahmed, Sara. The Promise of Happiness. Durham, NC: Duke University Press, 2010.
    • Artaud, Antonin. To Have Done With The Judgment of God. 1947. Live play, Boston: Exploding Envelope, c1985. https://www.youtube.com/watch?v=VHtrY1UtwNs.
    • Badiou, Alain. The Century. 2005. Cambridge, UK: Polity Press, 2007.
    • Barad, Karen. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham, NC: Duke University Press, 2007.
    • Bataille, Georges. “The Notion of Expenditure.” 1933. In Visions of Excess: Selected Writings, 1927-1939, translated by Allan Stoekl, Carl R. Lovitt, and Donald M. Leslie Jr., 167-81. Minneapolis: University of Minnesota Press, 1985.
    • Bellamy, Edward. Looking Backward: From 2000 to 1887. Boston: Ticknor & Co., 1888.
    • Blanchot, Maurice. The Writing of the Disaster. 1980. Translated by Ann Smock. Lincoln, NE: University of Nebraska Press, 1995.
    • Brown, Wendy. Regulating Aversion: Tolerance in the Age of Identity and Empire. Princeton, N.J.: Princeton University Press, 2006.
    • Burnett, Graham. “A Questionnaire on Materialisms.” October 155 (2016): 19-20.
    • Butler, Samuel. Erewhon: or, Over the Range. 1872. London: A.C. Fifield, 1910. http://www.gutenberg.org/files/1906/1906-h/1906-h.htm.
    • Chen, Mel Y. “A Questionnaire on Materialisms.” October 155 (2016): 21-22.
    • Clastres, Pierre. Society against the State. 1974. Translated by Robert Hurley and Abe Stein. New York: Zone Books, 1987.
    • Culp, Andrew. Dark Deleuze. Minneapolis: University of Minnesota Press, 2016.
    • ———. “Blackness.” New York: Hostis, 2015.
    • Debord, Guy. The Society of the Spectacle. 1967. Translated by Fredy Perlman et al. Detroit: Red and Black, 1977.
    • DeLanda, Manuel. A Thousand Years of Nonlinear History. New York: Zone Books, 2000.
    • ———. War in the Age of Intelligent Machines. New York: Zone Books, 1991.
    • DeLillo, Don. White Noise. New York: Viking Press, 1985.
    • Deleuze, Gilles. Cinema 2: The Time-Image. 1985. Translated by Hugh Tomlinson and Robert Galeta. Minneapolis: University of Minnesota Press, 1989.
    • ———. “The Philosophy of Crime Novels.” 1966. Translated by Michael Taormina. In Desert Islands and Other Texts, 1953-1974, 80-85. New York: Semiotext(e), 2004.
    • ———. Difference and Repetition. 1968. Translated by Paul Patton. New York: Columbia University Press, 1994.
    • ———. Empiricism and Subjectivity: An Essay on Hume’s Theory of Human Nature. 1953. Translated by Constantin V. Boundas. New York: Columbia University Press, 1995.
    • ———. Foucault. 1986. Translated by Seán Hand. Minneapolis: University of Minnesota Press, 1988.
    • Deleuze, Gilles, and Félix Guattari. Anti-Oedipus. 1972. Translated by Robert Hurley, Mark Seem, and Helen R. Lane. Minneapolis: University of Minnesota Press, 1977.
    • ———. A Thousand Plateaus. 1980. Translated by Brian Massumi. Minneapolis: University of Minnesota Press, 1987.
    • ———. What Is Philosophy? 1991. Translated by Hugh Tomlinson and Graham Burchell. New York: Columbia University Press, 1994.
    • Derrida, Jacques. The Gift of Death and Literature in Secret. Translated by David Wills. 2nd ed. Chicago: University of Chicago Press, 2007.
    • Edelman, Lee. No Future: Queer Theory and the Death Drive. Durham, N.C.: Duke University Press, 2004.
    • Fanon, Frantz. Black Skin White Masks. 1952. Translated by Charles Lam Markmann. New York: Grove Press, 1968.
    • Flaxman, Gregory. Gilles Deleuze and the Fabulation of Philosophy. Minneapolis: University of Minnesota Press, 2011.
    • Foucault, Michel. The Archaeology of Knowledge and the Discourse on Language. 1971. Translated by A.M. Sheridan Smith. New York: Pantheon Books, 1972.
    • ———. “Nietzsche, Genealogy, History.” 1971. In Language, Counter-Memory, Practice: Selected Essays and Interviews, translated by Donald F. Bouchard and Sherry Simon, 113-38. Ithaca, N.Y.: Cornell University Press, 1977.
    • ———. The Order of Things. 1966. New York: Pantheon Books, 1970.
    • Freud, Sigmund. Beyond the Pleasure Principle. 1920. Translated by James Strachey. London: Hogarth Press, 1955.
    • ———. “Instincts and their Vicissitudes.” 1915. Translated by James Strachey. In Standard Edition of the Complete Psychological Works of Sigmund Freud 14, 111-140. London: Hogarth Press, 1957.
    • Gerima, Haile. “Love Visual: A Conversation with Haile Gerima.” Interview by Sarah Lewis and Dagmawi Woubshet. Aperture, Feb 23, 2016. http://aperture.org/blog/love-visual-haile-gerima/.
    • Gibson-Graham, J.K. The End of Capitalism (As We Knew It): A Feminist Critique of Political Economy. Hoboken: Blackwell, 1996.
    • ———. A Postcapitalist Politics. Minneapolis: University of Minnesota Press, 2006.
    • Guattari, Félix. “Machine and Structure.” 1968. Translated by Rosemary Sheed. In Molecular Revolution: Psychiatry and Politics, 111-119. Harmondsworth, Middlesex: Penguin, 1984.
    • Halperin, David, and Valerie Traub. “Beyond Gay Pride.” In Gay Shame, 3-40. Chicago: University of Chicago Press, 2009.
    • Haraway, Donna. Simians, Cyborgs, and Women: The Reinvention of Nature. New York: Routledge, 1991.
    • Klossowski, Pierre. “Circulus Vitiosus.” Translated by Joseph Kuzma. The Agonist: A Nietzsche Circle Journal 2, no. 1 (2009): 31-47.
    • ———. Nietzsche and the Vicious Circle. 1969. Translated by Daniel W. Smith. Chicago: University of Chicago Press, 1997.
    • Lazzarato, Maurizio. Signs and Machines. 2010. Translated by Joshua David Jordan. Los Angeles: Semiotext(e), 2014.
    • Marcuse, Herbert. “Repressive Tolerance.” In A Critique of Pure Tolerance, 81-117. Boston: Beacon Press, 1965.
    • Mauss, Marcel. The Gift: The Form and Reason for Exchange in Archaic Societies. 1950. Translated by W. D. Halls. New York: Routledge, 1990.
    • Moten, Fred. In The Break: The Aesthetics of the Black Radical Tradition. Minneapolis: University of Minnesota Press, 2003.
    • Mumford, Lewis. Technics and Human Development. San Diego: Harcourt Brace Jovanovich, 1967.
    • Noys, Benjamin. “‘Remain True to the Earth!’: Remarks on the Politics of Black Metal.” In Hideous Gnosis: Black Metal Theory Symposium 1 (2010): 105-128.
    • Preciado, Paul. Testo-Junkie: Sex, Drugs, and Biopolitics in the Pharmacopornographic Era. 2008. Translated by Bruce Benderson. New York: The Feminist Press, 2013.
    • Ruddick, Susan. “The Politics of Affect: Spinoza in the Work of Negri and Deleuze.” Theory, Culture & Society 27, no. 4 (2010): 21-45.
    • Scott, James C. The Art of Not Being Governed: An Anarchist History of Upland Southeast Asia. New Haven: Yale University Press, 2009.
    • Sexton, Jared. “Afro-Pessimism: The Unclear Word.” In Rhizomes 29 (2016). http://www.rhizomes.net/issue29/sexton.html.
    • ———. “Ante-Anti-Blackness: Afterthoughts.” In Lateral 1 (2012). http://lateral.culturalstudiesassociation.org/issue1/content/sexton.html.
    • ———. “The Social Life of Social Death: On Afro-Pessimism and Black Optimism.” In Intensions 5 (2011). http://www.yorku.ca/intent/issue5/articles/jaredsexton.php.
    • Stiegler, Bernard. For a New Critique of Political Economy. Cambridge: Polity Press, 2010.
    • ———. Technics and Time 1: The Fault of Epimetheus. 1994. Translated by George Collins and Richard Beardsworth. Redwood City, CA: Stanford University Press, 1998.
    • Tiqqun. “How Is It to Be Done?” 2001. In Introduction to Civil War. 2001. Translated by Alexander R. Galloway and Jason E. Smith. Los Angeles, Calif.: Semiotext(e), 2010.
    • Toynbee, Arnold. A Study of History. Abridgement of Volumes I-VI by D.C. Somervell. London: Oxford University Press, 1946.
    • Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books, 2012.
    • Villani, Arnaud. La guêpe et l’orchidée. Essai sur Gilles Deleuze. Paris: Éditions de Belin, 1999.
    • Viveiros de Castro, Eduardo. Cannibal Metaphysics: For a Post-structural Anthropology. 2009. Translated by Peter Skafish. Minneapolis, Minn.: Univocal, 2014.
    • Welles, Orson, dir. F for Fake. 1974. New York: Criterion Collection, 2005.
    • Wiener, Norbert. Cybernetics: Or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press, 1948; second revised edition.
    • Williams, Alex, and Nick Srnicek. “#ACCELERATE MANIFESTO for an Accelerationist Politics.” Critical Legal Thinking. 2013. http://criticallegalthinking.com/2013/05/14/accelerate-manifesto-for-an-accelerationist-politics/.

    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is the author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2004), Gaming: Essays in Algorithmic Culture (University of Minnesota, 2006), The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here in 2014. He is a frequent contributor to The b2 Review “Digital Studies.”

    Andrew Culp is a Visiting Assistant Professor of Rhetoric Studies at Whitman College. He specializes in cultural-communicative theories of power, the politics of emerging media, and gendered responses to urbanization. His work has appeared in Radical Philosophy, Angelaki, Affinities, and other venues. He previously reviewed Galloway’s Laruelle: Against the Digital for The b2 Review “Digital Studies.”

    Back to the essay

  • How We Think About Technology (Without Thinking About Politics)

    How We Think About Technology (Without Thinking About Politics)

    a review of N. Katherine Hayles, How We Think: Digital Media and Contemporary Technogenesis (Chicago, 2012)
    by R. Joshua Scannell

    ~

    In How We Think, N. Katherine Hayles addresses a number of increasingly urgent problems facing both the humanities in general and scholars of digital culture in particular. In keeping with the research interests she has explored at least since 2002’s Writing Machines (MIT Press), Hayles examines the intersection of digital technologies and humanities practice to argue that contemporary transformations in the orientation of the University (and elsewhere) are attributable to shifts that ubiquitous digital culture has engendered in embodied cognition. She calls this process of mutual evolution between the computer and the human technogenesis (a term most widely associated with the work of Bernard Stiegler, although Hayles’s theories often aim in a different direction from Stiegler’s). Hayles argues that technogenesis is the basis for the reorientation of the academy, including students, away from established humanistic practices like close reading. Put another way, not only have we become posthuman (as Hayles discusses in her landmark 1999 University of Chicago Press book, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics), but our brains have begun to evolve to think with computers specifically and digital media generally. Rather than a rearguard eulogy for the humanities that was, Hayles advocates for an opening of the humanities to digital dromology; she sees the Digital Humanities as a particularly fertile ground from which to reimagine the humanities generally.

    Hayles is an exceptional scholar, and while her theory of technogenesis is not particularly novel, she articulates it with a clarity and elegance that are welcome and useful in a field that is often cluttered with good ideas, unintelligibly argued. Her close engagement with work across a range of disciplines – from Hegelian philosophy of mind (Catherine Malabou) to theories of semiosis and new media (Lev Manovich) to experimental literary production – grounds an argument about the necessity of transmedial engagement in an effective praxis. Moreover, she ably shifts generic gears over the course of a relatively short manuscript, moving from quasi-ethnographic engagement with University administrators, to media archaeology à la Friedrich Kittler, to contemporary literary theory, with grace. Her critique of the humanities that is, therefore, doubles as a praxis: she is actually producing the discipline-flouting work that she calls on her colleagues to pursue.

    The debate about the death and/or future of the humanities is weather-worn, but Hayles’s theory of technogenesis as a platform for engaging in it is a welcome change. For Hayles, the technogenetic argument centers on temporality, and on the multiple temporalities embedded in computer processing and human experience. She envisions this relation as cybernetic, in which computer and human are integrated as a system through the feedback loops of their coemergent temporalities. So, computers speed up human responses, which lag behind innovations, which prompt beta test cycles at quicker rates, which demand that humans behave affectively, nonconsciously. The recursive relationship between human duration and machine temporality effectively mutates both. Humanities professors might complain that their students cannot read “closely” like they used to, but for Hayles this is a failure of those disciplines to imagine methods in step with technological change. Instead of digital media making us “dumber” by reducing our attention spans, as Nicholas Carr argues, Hayles claims that the movement toward what she calls “hyper reading” is an ontological and biological fact of embodied cognition in the age of digital media. If “how we think” were posed as a question, the answer would be: bodily, quickly, cursorily, affectively, non-consciously.

    Hayles argues that this doesn’t imply an eliminative teleology of human capacity, but rather an opportunity to think through novel, expansive interventions into this cyborg loop. We may be thinking (and feeling, and experiencing) differently than we used to, but this remains a fact of human existence. Digital media has shifted the ontics of our technogenetic reality, but it has not fundamentally altered its ontology. Morphological biology, in fact, entails ontological stability. To be human, and to think like one, is to be with machines, and to think with them. The kids, in other words, are all right.

    This sort of quasi-Derridean or Stieglerian Hegelianism is obviously not uncommon in media theory. As Hayles deploys it, this disposition provides a powerful framework for thinking through the relationship of humans and machines without ontological reductivism on either end. Moreover, she engages this theory in a resolutely material fashion, evading the enervating tendency of many theorists in the humanities to reduce actually existing material processes to metaphor and semiosis. Her engagement with Malabou’s work on brain plasticity is particularly useful here. Malabou has argued that the choice facing the intellectual in the age of contemporary capitalism is between plasticity and self-fashioning. Plasticity is a quintessential demand of contemporary capitalism, whereas self-fashioning opens up radical possibilities for intervention. The distinction between these two potentialities, however, is unclear – and therefore demands an ideological commitment to the latter. Hayles is right to point out that this dialectic insufficiently accounts for the myriad ways in which we are engaged with media, and are in fact produced, bodily, by it.

    But while Hayles’s critique is compelling, the responses she posits may be less so. Against what she sees as Malabou’s snide rejection of the potential of media, she argues

    It is precisely because contemporary technogenesis posits a strong connection between ongoing dynamic adaptation of technics and humans that multiple points of intervention open up. These include making new media…adapting present media to subversive ends…using digital media to reenvision academic practices, environments and strategies…and crafting reflexive representations of media self fashionings…that call attention to their own status as media, in the process raising our awareness of both the possibilities and dangers of such self-fashioning. (83)

    With the exception of the ambiguous labor done by the word “subversive,” this reads like a catalog of demands made by administrators seeking to offload ever-greater numbers of students into MOOCs. This is unfortunately indicative of what is, throughout the book, a basic failure to engage with the political economics of “digital media and contemporary technogenesis.” Not every book must explicitly be political, and there is little more ponderous than the obligatory, token consideration of “the political” that so many media scholars feel compelled to make. And yet, this is a text that claims to explain “how” “we” “think” under post-industrial, cognitive capitalism, and so the lack of this engagement cannot help but show.

    Universities across the country are collapsing due to lack of funding, students are practically reduced to debt bondage to cope with the costs of a desperately near-compulsory higher education that fails to deliver economic promises, “disruptive” deployment of digital media has conjured teratic corporate behemoths that all presume to “make the world a better place” on the backs of extraordinarily exploited workforces. There is no way for an account of the relationship between the human and the digital in this capitalist context not to be political. Given the general failure of the book to take these issues seriously, it is unsurprising that two of Hayles’ central suggestions for addressing the crisis in the humanities are 1) to use voluntary, hobbyist labor to do the intensive research that will serve as the data pool for digital humanities scholars and 2) to increasingly develop University partnerships with major digital conglomerates like Google.

    This reads like a cost-cutting administrator’s fever dream because, in the chapter in which Hayles promulgates novel (one might say “disruptive”) ideas for how best to move the humanities forward, she speaks only to administrators. There is no consideration of labor in this call for the reformation of the humanities. Given the enormous amount of writing that has been done on affective capitalism (Clough 2008), digital labor (Scholz 2012), emotional labor (Van Cleaf 2015), and so many other iterations of exploitation under digital capitalism, it boggles the mind a bit to see an embrace of the Mechanical Turk as a model for the future university.

    While it may be true that humanities education is in crisis – that it lacks funding, that its methods don’t connect with students, that it increasingly must justify its existence on economic grounds – it is unclear that any of these aspects of the crisis are attributable to a lack of engagement with the potentials of digital media, or the recognition that humans are evolving with our computers. All of these crises are just as plausibly attributable to what, among many others, Chandra Mohanty identified ten years ago as the emergence of the corporate university, and the concomitant transformation of the mission of the university from one of fostering democratic discourse to one of maximizing capital (Mohanty 2003). In other words, we might as easily attribute the crisis to the tightening command that contemporary capitalist institutions have over the logic of the university.

    Humanities departments are underfunded precisely because they cannot – almost by definition – justify their existence on monetary grounds. When students are not only acculturated, but are compelled by financial realities and debt, to understand the university as a credentialing institution capable of guaranteeing certain baseline waged occupations – then it is no surprise that they are uninterested in “close reading” of texts. Or, rather, it might be true that students’ “hyperreading” is a consequence of their cognitive evolution with machines. But it is also just as plausibly a consequence of the fact that students often are working full time jobs while taking on full time (or more) course loads. They do not have the time or inclination to read long, difficult texts closely. They do not have the time or inclination because of the consolidating paradigm around what labor, and particularly their labor, is worth. Why pay for a researcher when you can get a hobbyist to do it for free? Why pay for a humanities line when Google and Wikipedia can deliver everything an institution might need to know?

    In a political economy in which Amazon’s reduction of human employees to algorithmically-managed meat wagons is increasingly diagrammatic and “innovative” in industries from service to criminal justice to education, the proposals Hayles is making to ensure the future of the university seem more fifth columnary than emancipatory.

    This stance also evacuates much-needed context from what are otherwise thoroughly interesting, well-crafted arguments. This is particularly true of How We Think’s engagement with Lev Manovich’s claims regarding narrative and database. Speaking reductively, in The Language of New Media (MIT Press, 2001), Manovich argued that there are two major communicative forms: narrative and database. Narrative, in his telling, is more or less linear, and dependent on human agency to be sensible. Novels and films, despite many modernist efforts to subvert this, tend toward narrative. The database, as opposed to the narrative, arranges information according to patterns, and does not depend on a diachronic point-to-point communicative flow to be intelligible. Rather, the database exists in multiple temporalities, with the accumulation of data for rhizomatic recall of seemingly unrelated information producing improbable patterns of knowledge production. Historically, he argues, narrative has dominated. But with the increasing digitization of cultural output, the database will more and more replace narrative.

    Manovich’s dichotomy of media has been both influential and roundly criticized (not least by Manovich himself in Software Takes Command, Bloomsbury 2013). Hayles convincingly takes it to task for being reductive and for instituting a teleology of cultural forms that isn’t borne out by cultural practice. Narrative, obviously, hasn’t gone anywhere. Hayles extends this critique by considering the distinctive ways space and time are mobilized by database and narrative formations. Databases, she argues, depend on interoperability between different software platforms that need to access the stored information. In the case of geographical information services and global positioning services, this interoperability depends on some sort of universal standard against which all information can be measured. Thus, Cartesian space and time are inevitably inserted into database logics, depriving them of the capacity for liveliness. That is to say that the need to standardize the units that measure space and time in machine-readable databases imposes a conceptual grid on the world that is creatively limiting. Narrative, on the other hand, does not depend on interoperability, and therefore does not have an absolute referent against which it must make itself intelligible. Given this, it is capable of complex and variegated temporalities not available to databases. Databases, she concludes, can only operate within spatial parameters, while narrative can represent time in different, more creative ways.

    As an expansion and corrective to Manovich, this argument is compelling. Displacing his teleology and infusing it with a critique of the spatio-temporal work of database technologies and their organization of cultural knowledge is crucial. Hayles bases her claim on a detailed and fascinating comparison between the coding requirements of relational databanks and object-oriented databanks. But, somewhat surprisingly, she takes these different programming language models and metonymizes them as social realities. Temporality in the construction of objects transmutes into temporality as a philosophical category. It’s unclear how this leap holds without an attendant sociopolitical critique. But it is impossible to talk about the cultural logic of computation without talking about the social context in which this computation emerges. In other words, it is absolutely true that the “spatializing” techniques of coders (like clustering) render data points as spatial within the context of the data bank. But it is not an immediately logical leap to then claim that therefore databases as a cultural form are spatial and not temporal.

    Further, in the context of contemporary data science, Hayles’s claims about interoperability are at least somewhat puzzling. Interoperability and standardized referents might be a theoretical necessity for databases to be useful, but the ever-inflating markets around “big data,” data analytics, insights, overcoming data siloing, edge computing, etc., demonstrate quite categorically that interoperability-in-general is not only non-existent, but is productively non-existent. That is to say, there are enormous industries that have developed precisely around efforts to synthesize information generated and stored across non-interoperable datasets. Moreover, data analytics companies provide insights almost entirely based on their capacity to track improbable data patterns and resonances across unlikely temporalities.

    Far from a Cartesian world of absolute space and time, contemporary data science is a quite posthuman enterprise in committing machine learning to stretch, bend and strobe space and time in order to generate the possibility of bankable information. This is both theoretically true in the sense of setting algorithms to work sorting, sifting and analyzing truly incomprehensible amounts of data and materially true in the sense of the massive amount of capital and labor that is invested in building, powering, cooling, staffing and securing data centers. Moreover, the amount of data “in the cloud” has become so massive that analytics companies have quite literally reterritorialized information – particularly firms specializing in high-frequency trading, which practice “co-location,” locating data centers geographically closer to the sites from which they will be accessed in order to maximize processing speed.

    Data science functions much like financial derivatives do (Martin 2015). Value in the present is hedged against the probable future spatiotemporal organization of software and material infrastructures capable of rendering a possibly profitable bundling of information in the immediate future. That may not be narrative, but it is certainly temporal. It is a temporality spurred by the queer fluxes of capital.

    All of which circles back to the title of the book. Hayles sets out to explain How We Think. A scholar with such an impeccable track record for pathbreaking analyses of the relationship of the human to technology is setting a high bar for herself with such a goal. In an era in which (in no small part due to her work) it is increasingly unclear who we are, what thinking is or how it happens, it may be an impossible bar to meet. Hayles does an admirable job of trying to inject new paradigms into a narrow academic debate about the future of the humanities. Ultimately, however, there is more resting on the question than the book can account for, not least the livelihoods and futures of her current and future colleagues.
    _____

    R Joshua Scannell is a PhD candidate in sociology at the CUNY Graduate Center. His current research looks at the political economic relations between predictive policing programs and urban informatics systems in New York City. He is the author of Cities: Unauthorized Resistance and Uncertain Sovereignty in the Urban World (Paradigm/Routledge, 2012).

    Back to the essay
    _____

    Patricia T. Clough. 2008. “The Affective Turn.” Theory, Culture & Society 25(1): 1-22

    N. Katherine Hayles. 2002. Writing Machines. Cambridge: MIT Press

    N. Katherine Hayles. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press

    Catherine Malabou. 2008. What Should We Do with Our Brain? New York: Fordham University Press

    Lev Manovich. 2001. The Language of New Media. Cambridge: MIT Press.

    Lev Manovich. 2013. Software Takes Command. London: Bloomsbury

    Randy Martin. 2015. Knowledge LTD: Toward a Social Logic of the Derivative. Philadelphia: Temple University Press

    Chandra Mohanty. 2003. Feminism Without Borders: Decolonizing Theory, Practicing Solidarity. Durham: Duke University Press.

    Trebor Scholz, ed. 2012. Digital Labor: The Internet as Playground and Factory. New York: Routledge

    Bernard Stiegler. 1998. Technics and Time, 1: The Fault of Epimetheus. Palo Alto: Stanford University Press

    Kara Van Cleaf. 2015. “Of Woman Born to Mommy Blogged: The Journey from the Personal as Political to the Personal as Commodity.” Women’s Studies Quarterly 43(3/4): 247-265

    Back to the essay

  • The Social Construction of Acceleration

    The Social Construction of Acceleration

    a review of Judy Wajcman, Pressed for Time: The Acceleration of Life in Digital Capitalism (Chicago, 2014)
    by Zachary Loeb

    ~

    Patience seems anachronistic in an age of high speed downloads, same day deliveries, and on-demand assistants who can be summoned by tapping a button. Though some waiting may still occur, the amount of time spent in anticipation seems to be constantly diminishing, and every day a new bevy of upgrades and devices promise that tomorrow things will be even faster. Such speed is comforting for those who feel that they do not have a moment to waste. Patience becomes a luxury for which we do not have time, even as the technologies that claimed they would free us wind up weighing us down.

    Yet it is far too simplistic to heap the blame for this situation on technology, as such. True, contemporary technologies may be prominent characters in the drama in which we are embroiled, but as Judy Wajcman argues in her book Pressed for Time, we should not approach technology as though it exists separately from the social, economic, and political factors that shape contemporary society. Indeed, to understand technology today it is necessary to recognize that “temporal demands are not inherent to technology. They are built into our devices by all-too-human schemes and desires” (3). In Wajcman’s view, technology is not the true culprit, nor is it an out-of-control menace. It is instead a convenient distraction from the real forces that make it seem as though there is never enough time.

    Wajcman sets a course that refuses to uncritically celebrate technology, whilst simultaneously disavowing the damning of modern machines. She prefers to draw upon “a social shaping approach to technology” (4) which emphasizes that the shape technology takes in a society is influenced by many factors. If current technologies leave us feeling exhausted, overwhelmed, and unsatisfied it is to our society we must look for causes and solutions – not to the machine.

    The vast array of Internet-connected devices gives rise to a sense that everything is happening faster, that things are accelerating, and that change itself is more rapid than in previous epochs. This is the kind of seemingly uncontroversial belief that Wajcman seeks to counter. While there is a present predilection for speed, the ideas of speed and acceleration remain murky, which may not be purely accidental when one considers “the extent to which the agenda for discussing the future of technology is set by the promoters of new technological products” (14). Rapid technological and societal shifts may herald the emergence of an “acceleration society” wherein speed increases even as individuals experience a decrease of available time. Though some would describe today’s world (at least in affluent nations) as the epitome of the “acceleration society,” it would be a mistake to believe this to be a wholly new invention.

    Nevertheless, the instantaneous potential of information technologies may seem to signal a break with the past – as the sort of “timeless time” which “emerged in financial markets…is spreading to every realm” (19). Some may revel in this speed even as others put out somber calls for a slow-down, but either approach risks being reductionist. Wajcman pushes back against the technological determinism lurking in the thoughts of those who revel and those who rebel, noting “that all technologies are inherently social in that they are designed, produced, used and governed by people” (27).

    Both today and yesterday “we live our lives surrounded by things, but we tend to think about only some of them as being technologies” (29). The impacts of given technologies depend upon the ways in which they are actually used, and Wajcman emphasizes that people often have a great deal of freedom in altering “the meanings and deployment of technologies” (33).

    Over time certain technologies recede into the background, but the history of technology is a litany of devices that made profound impacts in determining experiences of time and speed. After all, the clock is itself a piece of technology, and thus we assess our very lack of time by looking to a device designed to measure its passage. The measurement of time was a technique used to standardize – and often exploit – labor, and the ability to carefully keep track of time gave rise to an ideology in which time came to be interchangeable with money. As a result speed came to be associated with profit even as slowness became associated with sloth. The speed of change became tied up in notions of improvement and progress, and thus “the speed of change becomes a self-evident good” (44). The speed promised by inventions is therefore seen as part of the march of progress, though a certain irony emerges as widespread speed leads to new forms of slowness – the mass diffusion of cars leading to traffic jams – and what was fast yesterday is often deemed slow today. As Wajcman shows, the experience of time compression, tied to “our valorization of a busy lifestyle, as well as our profound ambivalence toward it” (58), has roots that go far back.

    Time takes on an odd quality – to have it is a luxury, even as constant busyness becomes a sign of status. A certain dissonance emerges wherein individuals feel that they have less time even as studies show that people are not necessarily working more hours. For Wajcman much of the explanation has as much to do with “real increases in the combined work commitments of family members” as with changes in the working time of individuals, with such “time poverty” being experienced particularly acutely “among working mothers, who juggle work, family, and leisure” (66). To understand time pressure it is essential to consider the degree to which people are free to use their time as they see fit.

    Societal pressures on the time of men and women differ, and though the hours spent doing paid labor may not have shifted dramatically, the hours parents (particularly mothers) spend performing unpaid labor remain high. Furthermore, “despite dramatic improvements in domestic technology, the amount of time spent on household tasks has not actually shown any corresponding dramatic decline” (68). Though household responsibilities can be shared equitably between partners, much of the onus still falls on women. As a busy, event-filled life becomes a marker of status for adults, so too may they attempt to bestow such busyness on the whole family, but busy parents needing to chaperone and supervise busy children only creates a further crunch on time. As Wajcman notes, “perhaps we should be giving as much attention to the intensification of parenting as to the intensification of work” (82).

    Yet the story of domestic labor – unpaid and unrecognized – is a particularly strong example of a space wherein the promises of time-saving technological fixes have fallen short. Instead, “devices allegedly designed to save labor time fail to do so, and in some cases actually increase the time needed for the task” (111). The variety of technologies marketed for the household are often advertised as time savers, yet altering household work is not the same as eliminating it – even as certain tasks continually demand a significant investment of real time.

    Many of the technologies that have become mainstays of modern households – such as the microwave – were not originally marketed as such, and thus the household represents an important example of the way in which technologies “are both socially constructed and society shaping” (122). Of further significance is the way in which changing labor relations have also led to shifts in the sphere of domestic work, wherein those who can afford it are able to buy themselves time through purchasing food from restaurants or by employing others for tasks such as child care and cleaning. Though the image of “the home of the future,” courtesy of the Internet of Things, may promise an automated abode, Wajcman highlights that those making and selling such technologies replicate society’s dominant blind spot for the true tasks of domestic labor. Indeed, the Internet of Things tends to “celebrate technology and its transformative power at the expense of home as a lived practice” (130). Thus, domestic technologies present an important example of the way in which those designing and marketing technologies instill their own biases into the devices they build.

    Beyond the household, information communications technologies (ICTs) allow people to carry their office in their pocket as e-mails and messages ping them long after the official work day has ended. However, the idea “of the technologically tethered worker with no control over their own time…fails to convey the complex entanglement of contemporary work practices, working time, and the materiality of technical artifacts” (88). Thus, the problem is not that an individual can receive e-mail when they are off the clock, the problem is the employer’s expectation that this worker should be responding to work related e-mails while off the clock – the issue is not technological, it is societal. Furthermore, Wajcman argues, communications technologies permit workers to better judge whether or not something is particularly time sensitive. Though technology has often been used by employers to control employees, approaching communications technologies from an STS position “casts doubt on the determinist view that ICTs, per se, are driving the intensification of work” (107). Indeed some workers may turn to such devices to help manage this intensification.

    Technologies offer many more potentialities than those that are presented in advertisements. Though the ubiquity of communications devices may “mean that more and more of our social relationships are machine-mediated” (138), the focus should be as much on the word “social” as on the word “machine.” Much has been written about the way that individuals use modern technologies and the ways in which they can give rise to families wherein parents and children alike are permanently staring at a screen, but Wajcman argues that these technologies should “be regarded as another node in the flows of affect that create and bind intimacy” (150). It is not that these devices are truly stealing people’s time, but that they are changing the ways in which people spend the time they have – allowing harried individuals to create new forms of being together which “needs to be understood as adding a dimension to temporal experience” (158), one that blurs boundaries between work and leisure.

    The notion that the pace of life has been accelerated by technological change is a belief that often goes unchallenged; however, Wajcman emphasizes that “major shifts in the nature of work, the composition of families, ideas about parenting, and patterns of consumption have all contributed to our sense that the world is moving faster than hitherto” (164). The experience of acceleration can be intoxicating, and the belief in a culture of improvement wrought by technological change may be a rare glimmer of positivity amidst gloomy news reports. However, “rapid technological change can actually be conservative, maintaining or solidifying existing social arrangements” (180). At moments when so much emphasis is placed upon the speed of technologically sired change the first step may not be to slow down but to insist that people consider the ways in which these machines have been socially constructed, how they have shaped society – and if we fear that we are speeding towards a catastrophe then it becomes necessary to consider how they can be socially constructed to avoid such a collision.

    * * *

    It is common, amongst current books assessing the societal impacts of technology, for authors to present themselves as critical while simultaneously wanting to hold to an unshakable faith in technology. This often leaves such texts in an odd position: they want to advance a radical critique but their argument remains loyal to a conservative ideology. With Pressed for Time, Judy Wajcman has demonstrated how to successfully achieve the balance between technological optimism and pessimism. It is a great feat, and Pressed for Time executes this task skillfully. When Wajcman writes, towards the end of the book, that she wants “to embrace the emancipatory potential of technoscience to create new meanings and new worlds while at the same time being its chief critic” (164) she is not writing of a goal but is affirming what she has achieved with Pressed for Time (a similar success can be attributed to Wajcman’s earlier books TechnoFeminism (Polity, 2004) and the essential Feminism Confronts Technology (Penn State, 1991)).

    By holding to the framework of the social shaping of technology, Pressed for Time provides an investigation of time and speed that is grounded in a nuanced understanding of technology. It would have been easy for Wajcman to focus strictly on contemporary ICTs, but what her argument makes clear is that to do so would have been to ignore the factors that make contemporary technology understandable. A great success of Pressed for Time is the way in which Wajcman shows that the current sensation of being pressed for time is not a modern invention. Instead, the emphasis on speed as a hallmark of progress and improvement is a belief that has been at work for decades. Wajcman avoids the stumbling block of technological determinism and carefully points out that falling for such beliefs leads to critiques being directed incorrectly. Written in a thoroughly engaging style, Pressed for Time is an academic book that can serve as an excellent introduction to the terminology and style of STS scholarship.

    Throughout Pressed for Time, Wajcman repeatedly notes the ways in which the meanings of technologies transcend what a device may have been narrowly intended to do. For Wajcman people’s agency is paramount, as people have the ability to construct meaning for technology even as such devices wind up shaping society. Yet an area in which one could push back against Wajcman’s views would be to ask if communications technologies have shaped society to such an extent that it is becoming increasingly difficult to construct new meanings for them. Perhaps the “slow movement,” which Wajcman describes as unrealistic for “we cannot in fact choose between fast and slow, technology and nature” (176), is best perceived as a manifestation of the sense that much of technology’s “emancipatory potential” has gone awry – that some technologies offer little in the way of liberating potential. After all, the constantly connected individual may always feel rushed – but they may also feel as though they are under constant surveillance, that their every online move is carefully tracked, and that, with the rise of wearable technology and the Internet of Things, all of their actions will soon be tracked as well. Wajcman makes an excellent and important point by noting that humans have always lived surrounded by technologies – but the technologies that surrounded an individual in 1952 were not sending every bit of minutiae to large corporations (and governments). Hanging in the background of the discussion of speed are also the questions of planned obsolescence and the mountains of toxic technological trash that wind up flowing from affluent nations to developing ones. The technological speed experienced in one country is the “slow violence” experienced in another. To make these critiques is in no way to seriously diminish Wajcman’s argument, especially as many of these concerns simply speak to the economic and political forces that have shaped today’s technology.

    Pressed for Time is a Rosetta stone for decoding life in high speed, high tech societies. Wajcman deftly demonstrates that the problems facing technologically-addled individuals today are not as new as they appear, and that the solutions on offer are similarly not as wildly inventive as they may seem. Through analyzing studies and history, Wajcman shows the impacts of technologies, while making clear why it is still imperative to approach technology with a consideration of class and gender in mind. With Pressed for Time, Wajcman champions the position that the social shaping of technology framework still provides a robust way of understanding technology. As Wajcman makes clear, the way technologies “are interpreted and used depends on the tapestry of social relations woven by age, gender, race, class, and other axes of inequality” (183).

    It is an extremely timely argument.
    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck and is a frequent contributor to The b2 Review Digital Studies section.


  • Towards a Bright Mountain: Laudato Si' as Critique of Technology

    Towards a Bright Mountain: Laudato Si' as Critique of Technology

    by Zachary Loeb

    ~

    “We hate the people who make us form the connections we do not want to form.” – Simone Weil

    1. Repairing Our Common Home

    When confronted with the unsettling reality of the world it is easy to feel overwhelmed and insignificant. This feeling of powerlessness may give rise to a temptation to retreat – or to simply shrug – and though people may suspect that they bear some responsibility for the state of affairs in which they are embroiled, the scale of the problems makes individuals doubtful that they can make a difference. In this context, the refrain “well, it could always be worse” becomes a sort of inured coping strategy, though this dark prophecy has a tendency to prove itself true week after week and year after year. Just saying that things could be worse than they presently are does nothing to prevent things from deteriorating further. It can be rather liberating to decide that one is powerless, to conclude that one’s actions do not truly matter, to imagine that one will be long dead by the time the bill comes due – for taking such positions enables one to avoid doing something difficult: changing.

    A change is coming. Indeed, the change is already here. The question is whether people are willing to consciously change to meet this challenge or if they will only change when they truly have no other option.

    The matter of change is at the core of Pope Francis’s recent encyclical Laudato Si’ (“Praise be to You”). Much of the discussion around Laudato Si’ has characterized the document as being narrowly focused on climate change and the environment. Though Laudato Si’ has much to say about the environment, and the threat climate change poses, it is rather reductive to cast Laudato Si’ as “the Pope’s encyclical about the environment.” Granted, that many are describing the encyclical in such terms is understandable, as framing it in that manner makes it appear quaint – and may lead many to conclude that they do not need to spend the time reading through the encyclical’s 245 sections (roughly 200 pages). True, Pope Francis is interested in climate change, but in the encyclical he proves far more interested in the shifts in the social, economic, and political climate that have allowed climate change to advance. The importance of Laudato Si’ is precisely that it is less about climate change than it is about the need for humanity to change, as Pope Francis writes:

    “we cannot adequately combat environmental degradation unless we attend to causes related to human and social degradation.” (Francis, no. 48)

    And though the encyclical is filled with numerous pithy aphorisms it is a text that is worth engaging in its entirety.

    Lest there be any doubt, Laudato Si’ is a difficult text to read. Not because it is written in archaic prose, or because it assumes the reader is learned in theology, but because it is discomforting. Laudato Si’ does not tell the reader that they are responsible for the world, instead it reminds them that they have always been responsible for the world, and then points to some of the reasons why this obligation may have been forgotten. The encyclical calls on those with their heads in the clouds (or head in “the cloud”) to see they are trampling the poor and the planet underfoot. Pope Francis has the audacity to suggest, despite what the magazine covers and advertisements tell us, that there is no easy solution, and that if we are honest with ourselves we are not fulfilled by consumerism. What Laudato Si’ represents is an unabashed ethical assault on high-tech/high-consumption life in affluent nations. Yet it is not an angry diatribe. Insofar as the encyclical represents a hammer it is not as a blunt instrument with which one bludgeons foes into submission, but is instead a useful tool one might take up to pull out the rusted old nails in order to build again, as Pope Francis writes:

    “Humanity still has the ability to work together in building our common home.” (Francis, no. 13)

    Laudato Si’ is a work of intense, even radical, social criticism in the fine raiment of a papal encyclical. The text contains an impassioned critique of technology, an ethically rooted castigation of capitalism, a defense of the environment that emphasizes that humans are part of that same environment, and a demand that people accept responsibility. There is much in Laudato Si’ that those well versed in activism, organizing, environmentalism, critical theory, the critique of technology, radical political economy (and so forth) will find familiar – and it is a document that those bearing an interest in the aforementioned areas would do well to consider. While the encyclical (it was written by the Pope, after all) contains numerous references to Jesus, God, the Church, and the saints – it is clear that Pope Francis intends the document for a wide (not exclusively Catholic, or even Christian) readership. Indeed, those versed in other religious traditions will likely find much in the encyclical that echoes their own beliefs – and the same can likely be said of those interested in ethics with or without the presence of God. While many sections of Laudato Si’ speak to the religious obligation of believers, Pope Francis makes a point of being inclusive to those of different faiths (and no faith) – an inclusion which speaks to his recognition that the problems facing humanity can only be solved by all of humanity. After all:

    “we need only take a frank look at the facts to see that our common home is falling into serious disrepair.” (Francis, no. 61)

    The term “common home” refers to the planet and all those – regardless of their faith – who dwell there.

    Nevertheless, there are several sections in Laudato Si’ that will serve to remind the reader that Pope Francis is the male head of a patriarchal organization. Pope Francis stands firm in his commitment to the poor, and makes numerous comments about the rights of indigenous communities – but he does not have particularly much to say about women. While women certainly number amongst the poor and indigenous, Laudato Si’ does not devote attention to the ways in which the theologies and ideologies of dominance that have wreaked havoc on the planet have also oppressed women. It is perhaps unsurprising that the only woman Laudato Si’ focuses on at any length is Mary, and that throughout the encyclical Pope Francis continually feminizes nature whilst referring to God with terms such as “Father.” The importance of equality is a theme which is revisited numerous times in Laudato Si’ and though Pope Francis addresses his readers as “sisters and brothers” it is worth wondering whether or not this entails true equality between all people – regardless of gender. It is vital to recognize this shortcoming of Laudato Si’ – as it is a flaw that undermines much of the ethical heft of the argument.

    In the encyclical Pope Francis laments the lack of concern being shown to those – who are largely poor – already struggling against the rising tide of climate change, noting:

    “Our lack of response to these tragedies involving our brothers and sisters points to the loss of that sense of responsibility to our fellow men and women upon which all civil society is founded.” (Francis, no. 25)

    Yet it is worth pushing on this “sense of responsibility to our fellow men and women” – and doing so involves a recognition that too often throughout history (and still today) “civil society” has been founded on an emphasis on “fellow men” and not necessarily upon women. In considering responsibilities towards other people Simone Weil wrote:

    “The object of any obligation, in the realm of human affairs, is always the human being as such. There exists an obligation towards every human being for the sole reason that he or she is a human being, without any other condition requiring to be fulfilled, and even without any recognition of such obligation on the part of the individual concerned.” (Weil, 5 – The Need for Roots)

    To recognize that the obligation is due to “the human being as such” – which seems to be something Pope Francis is claiming – necessitates acknowledging that “the human being” is still often defined as male. And this is a bias that can easily be replicated, even in encyclicals that tout the importance of equality.

    There are aspects of Laudato Si’ that will give readers cause to furrow their brows; however, it would be unfortunate if the shortcomings of the encyclical led people to dismiss it completely. After all, Laudato Si’ is not a document that one reads, it is a text with which one wrestles. And, as befits a piece written by a former nightclub bouncer, Laudato Si’ proves to be a challenging and scrappy combatant. Granted, the easiest way to emerge victorious from a bout is to refuse to engage in it in the first place – which is the tactic that many seem to be taking towards Laudato Si’. Yet it should be noted that those whose responses are variations of “the Pope should stick to religion” are largely revealing that they have not seriously engaged with the encyclical. Laudato Si’ does not claim to be a scientific document, but instead recognizes – in understated terms – that:

    “A very solid scientific consensus indicates that we are presently witnessing a disturbing warming of the climate system.” (Francis, no. 23)

    And that,

    “Climate change is a global problem with grave implications: environmental, social, economic, political and for the distribution of goods. It represents one of the principal challenges facing humanity in our day. Its worst impact will probably be felt by developing countries in the coming decades.” (Francis, no. 25)

    However, when those who make a habit of paying no heed to scientists themselves make derisive comments that the Pope is not a scientist, they are primarily delivering a television-news-bite-ready quip which ignores that the climate Pope Francis is mainly concerned with is today’s social, economic, and political climate.

    As has been previously noted, Laudato Si’ is as much a work of stinging social criticism as it is a theological document. It is a text which benefits from the particular analysis of people – be they workers, theologians, activists, scholars, and the list could go on – with knowledge in the particular fields the encyclical touches upon. And yet, one of the most striking aspects of the encyclical – that which poses a particular challenge to the status quo – is the way in which the document engages with technology.

    For it may well be that Laudato Si’ will change the tone of current discussions around technology and its role in our lives.

    At least one might hope that it will do so.

    Image source: Photo of Pope Francis, Christoph Wagener via Wikipedia, with further modifications by the author of this piece.

    2. Meet the New Gods, Not the Same as the Old God

    Perhaps being a person of faith makes it easier to recognize the faith of others. Or, put another way, perhaps belief in God makes one attuned to the appearance of new gods. While some studies have shown that in recent years the number of individuals who do not adhere to a particular religious doctrine has risen, Laudato Si’ suggests – though not specifically in these terms – that people may have simply turned to new religions. In the book To Have or To Be?, Erich Fromm uses the term “religion” not to:

    “refer to a system that has necessarily to do with a concept of God or with idols or even to a system perceived as religion, but to any group-shared system of thought and action that offers the individual a frame of orientation and an object of devotion.” (Fromm, 135 – italics in original)

    Though the author of Laudato Si’, obviously, subscribes to a belief system that has a heck-of-a-lot to do “with a concept of God” – the main position of the encyclical is staked out in opposition to the rise of a “group-shared system of thought” which has come to offer many people both “a frame of orientation and an object of devotion.” Pope Francis warns his readers against giving fealty and adoration to false gods – gods which are as appealing to atheists as they are to old-time believers. And while Laudato Si’ is not a document that seeks (not significantly, at least) to draw people into the Catholic church, it is a document that warns people against the religion of technology. After all, we cannot return to the Garden of Eden by biting into an Apple product.

    It is worth recognizing that there are many reasons why the religion of technology so easily wins converts. The world is a mess and the news reports are filled with a steady flow of horrors – the dangers of environmental degradation seem to grow starker by the day, as scientists issue increasingly dire predictions that we may have already passed the point at which we needed to act. Yet one of the few areas that continually operates as a site of unbounded optimism is the missives fired off by the technology sector and its boosters. Wearable technology, self-driving cars, the Internet of Things, delivery drones, artificial intelligence, virtual reality – technology provides a vision of the future that is not fixated on rising sea levels and extinction. Indeed, against the backdrop of extinction some even predict that through the power of techno-science humans may not be far off from being able to bring back species that had previously gone extinct.

    Technology has become a site of millions of minor miracles that have drawn legions of adherents to the technological god and its sainted corporations – and while technology has been a force present with humans for nearly as long as there have been humans, technology today seems increasingly to be presented in a way that encourages people to bask in its uncanny glow. Contemporary technology – especially of the Internet-connected variety – promises individuals that they will never be alone, that they will never be bored, that they will never get lost, and that they will never have a question for which they cannot execute a web search and find an answer. If older religions spoke of a god who was always watching, and always with the believer, then the smart phone replicates and reifies these beliefs – for it is always watching, and it is always with the believer. To return to Fromm’s description of religion, it should be fairly apparent that technology today provides people with “a frame of orientation and an object of devotion.” It is thus not simply that technology comes to be presented as a solution to present problems, but that technology comes to be presented as a form of salvation from all problems. Why pray if “there’s an app for that”?

    In Laudato Si’, Pope Francis warns against this new religion by observing:

    “Life gradually becomes a surrender to situations conditioned by technology, itself viewed as the principle key to the meaning of existence.” (Francis, no. 110)

    Granted, the question should be asked: what is “the meaning of existence” supplied by contemporary technology? The various denominations of the religion of technology are skilled at offering appealing answers to this question, filled with carefully tested slogans about making the world “more open and connected.” What the religion of technology continually offers is not so much a way of being in the world as a way of escaping from the world. Not to mince words, the world described in Laudato Si’ is rather distressing: it is a world of vast economic inequality, rising sea levels, misery, existential uncertainty, mountains of filth discarded by affluent nations (including e-waste), and the prospects are grim. By comparison the religion of technology provides a shiny vision of the future, with the promise of escape from earthly concerns through virtual reality, delivery on demand, and the truly transcendent dream of becoming one with machines. The religion of technology is not concerned with the next life, or with the lives of future generations; it is about constructing a new Eden in the now, for those who can afford the right toys – even if constructing this heaven consigns much of the world’s population to hell. People may not be bending their necks in prayer, but they’re certainly bending their necks to glance at their smart phones. As David Noble wrote:

    “A thousand years in the making, the religion of technology has become the common enchantment, not only of the designers of technology but also of those caught up in, and undone by, their godly designs. The expectation of ultimate salvation through technology, whatever the immediate human and social costs, has become the unspoken orthodoxy, reinforced by a market-induced enthusiasm for novelty and sanctioned by millenarian yearnings for new beginnings. This popular faith, subliminally indulged and intensified by corporate, government, and media pitchmen, inspires an awed deference to the practitioners and their promises of deliverance while diverting attention from more urgent concerns.” (Noble, 207)

    Against this religious embrace of technology, and the elevation of its evangels, Laudato Si’ puts forth a reminder that one can, and should, appreciate the tools which have been invented – but one should not worship them. To return to Erich Fromm:

    “The question is not one of religion or not? but of which kind of religion? – whether it is one that furthers human development, the unfolding of specifically human powers, or one that paralyzes human growth…our religious character may be considered an aspect of our character structure, for we are what we are devoted to, and what we are devoted to is what motivates our conduct. Often, however, individuals are not even aware of the real objects of their personal devotion and mistake their ‘official’ beliefs for their real, though secret religion.” (Fromm, 135-136)

    It is evident that Pope Francis considers the worship of technology to be a significant barrier to further “human development” as it “paralyzes human growth.” Technology is not the only false religion against which the encyclical warns – the cult of self-worship, unbridled capitalism, the glorification of violence, and the revival tent of consumerism are all considered as false faiths. They draw adherents in by proffering salvation and prescribing a simple course of action – but instead of allowing their faithful true transcendence they warp their followers into sycophants.

    Yet the particularly nefarious aspect of the religion of technology, in line with the quotation from Fromm, is the way in which it is a faith to which many subscribe without their necessarily being aware of it. This is particularly significant in the way that it links to the encyclical’s larger concern with the environment and with the poor. Those in affluent nations who enjoy the pleasures of high-tech lifestyles – the faithful in the religion of technology – are largely spared the serious downsides of high-technology. Sure, individuals may complain of aching necks, sore thumbs, difficulty sleeping, and a creeping sense of dissatisfaction – but such issues do not tell of the true cost of technology. What often goes unseen by those enjoying their smart phones are the exploitative regimes of mineral extraction, the harsh labor conditions where devices are assembled, and the toxic wreckage of e-waste dumps. Furthermore, insofar as high-tech devices (and the cloud) require large amounts of energy it is worth considering the degree to which high-tech lifestyles contribute to the voracious energy consumption that helps drive climate change. Granted, those who suffer from these technological downsides are generally not the people enjoying the technological devices.

    And though Laudato Si’ may have a particular view of salvation – one need not subscribe to that religion to recognize that the religion of technology is not the faith of the solution.

    But the faith of the problem.

    3. Laudato Si’ as Critique of Technology

    Relatively early in the encyclical, Pope Francis decries how, against the background of “media and the digital world”:

    “the great sages of the past run the risk of going unheard amid the noise and distractions of an information overload.” (Francis, no. 47)

    Reading through Laudato Si’ it becomes fairly apparent who Pope Francis considers many of these “great sages” to be. For the most part Pope Francis cites the encyclicals of his predecessors, declarations from Bishops’ conferences, the Bible, and theologians who are safely ensconced in the Church’s wheelhouse. While such citations certainly help to establish that the ideas being put forth in Laudato Si’ have been circulating in the Catholic Church for some time – Pope Francis’s invocation of “great sages of the past…going unheard” raises a larger question. How much of the encyclical is truly new and how much is a reiteration of older ideas that have gone “unheard”? In fairness, the social critique being advanced by Laudato Si’ may strike many people as novel – particularly in terms of its ethically combative willingness to take on technology – but it may be that the significant thing about Laudato Si’ is not that the message is new, but that the messenger is new. Without wanting to decry or denigrate Laudato Si’ it is worth noting that much of the argument being presented in the document could previously be found in works by thinkers associated with the critique of technology, notably Lewis Mumford and Jacques Ellul. Indeed, the following statement, from Lewis Mumford’s Art and Technics, could have appeared in Laudato Si’ without seeming out of place:

    “We overvalue the technical instrument: the machine has become our main source of magic, and it has given us a false sense of possessing godlike powers. An age that has devaluated all its symbols has turned the machine itself into a universal symbol: a god to be worshiped.” (Mumford, 138 – Art and Technics)

    The critique of technology does not represent a cohesive school of thought – rather it is a tendency within several fields (history and philosophy of technology, STS, media ecology, critical theory) that places particular emphasis on the negative impacts of technology. What many of these thinkers emphasized was the way in which the choices of certain technologies over others winds up having profound impacts upon the shape of a society. Thus, within the critique of technology, it is not a matter of anything so ridiculously reductive as “technology is bad” but of considering what alternative forms technology could take: “democratic technics” (Mumford), “convivial tools” (Illich), “appropriate technology” (Schumacher), “liberatory technology” (Bookchin), and so forth. Yet what is particularly important is the fact that the serious critique of technology was directly tied to a critique of the broader society. And thus, Mumford also wrote extensively about urban planning, architecture and cities – while Ellul wrote as much (perhaps more) about theological issues (Ellul was a devout individual who described himself as a Christian anarchist).

    With the rise of ever more powerful and potentially catastrophic technological systems, many thinkers associated with the critique of technology began issuing dire warnings about the danger in which techno-science had placed humanity. With the appearance of the atomic bomb, humanity had invented the way to potentially bring an end to the whole of the human project. Galled by the way in which technology seemed to be drawing ever more power to itself, Ellul warned of the ascendance of “technique” while Mumford cautioned of the emergence of “the megamachine” – with such terms being used to denote not simply technology and machinery but the fusion of techno-science with social, economic and political power. Pope Francis seems to prefer the term “technological paradigm” or “technocratic paradigm” to “megamachine.” When Pope Francis writes:

    “The technological paradigm has become so dominant that it would be difficult to do without its resources and even more difficult to utilize them without being dominated by their internal logic.” (Francis, no. 108)

    Or:

    “the new power structures based on the techno-economic paradigm may overwhelm not only our politics but also freedom and justice.” (Francis, no. 53)

    Or:

    “The alliance between the economy and technology ends up sidelining anything unrelated to its immediate interests.” (Francis, no. 54)

    he is making comments squarely in line with Ellul’s observation that:

    “Technical civilization means that our civilization is constructed by technique (makes a part of civilization only that which belongs to technique), for technique (in that everything in this civilization must serve a technical end), and is exclusively technique (in that it excludes whatever is not technique or reduces it to technical forms).” (Ellul, 128 – italics in original)

    A particular sign of the growing dominance of technology, and the techno-utopian thinking that everywhere evangelizes for technology, is the belief that to every problem there is a technological solution. Such wishful thinking about technology as the universal panacea was a tendency highly criticized by thinkers like Mumford and Ellul. Pope Francis chastises the prevalence of this belief at several points, writing:

    “Obstructionist attitudes, even on the part of believers, can range from denial of the problem to indifference, nonchalant resignation or blind confidence in technical solutions.” (Francis, no. 14)

    And the encyclical returns to this, decrying:

    “Technology, which, linked to business interests, is presented as the only way of solving these problems,” (Francis, no. 20)

    There is more than a passing similarity between the above two quotations from Pope Francis’s 2015 encyclical and the following quotation from Lewis Mumford’s book Technics and Civilization (first published in 1934):

    “But the belief that the social dilemmas created by the machine can be solved merely by inventing more machines is today a sign of half-baked thinking which verges close to quackery.” (Mumford, 367)

    At the very least this juxtaposition should help establish that there is nothing new about those in power proclaiming that technology will solve everything, but just the same there is nothing particularly new about forcefully criticizing this unblinking faith in technological solutions. If one wanted to do so it would not be an overly difficult task to comb through Laudato Si’ – particularly “Chapter Three: The Human Roots of the Ecological Crisis” – and find a couple of paragraphs by Mumford, Ellul or another prominent critic of technology in which precisely the same thing is being said. After all, if one were to try to capture the essence of the critique of technology in two sentences, one could do significantly worse than the following lines from Laudato Si’:

    “We have to accept that technological products are not neutral, for they create a framework which ends up conditioning lifestyles and shaping social possibilities along the lines dictated by the interests of certain powerful groups. Decisions which may seem purely instrumental are in reality decisions about the kind of society we want to build.” (Francis, no. 107)

    Granted, the line “technological products are not neutral” may have come as something of a disquieting statement to some readers of Laudato Si’ even if it has long been understood by historians of technology. Nevertheless, it is the emphasis placed on the matter of “the kind of society we want to build” that is of particular importance. For the encyclical does not simply lament the state of the technological world, it advances an alternative vision of technology – one which recognizes the tremendous potential of technological advances but sees how this potential goes unfulfilled. Laudato Si’ is a document which is skeptical of the belief that smart phones have made people happier, and it is a text which shows a clear unwillingness to believe that large tech companies are driven by much other than their own interests. The encyclical bears the mark of a writer who believes in a powerful God and that deity’s prophets, but has little time for would-be all powerful corporations and their lust for profits. One of the themes that ran continuously throughout Lewis Mumford’s work was his belief that the “good life” had been overshadowed by the pursuit of the “goods life” – and a similar theme runs through Laudato Si’ wherein the analysis of climate change, the environment, and what is owed to the poor, is couched in a call to reinvigorate the “good life” while recognizing that the “goods life” is a farce. Despite the power of the “technological paradigm,” Pope Francis remains hopeful regarding the power of people, writing:

    “We have the freedom needed to limit and direct technology; we can put it at the service of another type of progress, one which is healthier, more human, more social, more integral. Liberation from the dominant technocratic paradigm does in fact happen sometimes, for example, when cooperatives of small producers adopt less polluting methods of production, and opt for a non-consumerist model of life, recreation and community. Or when technology is directed primarily to resolving people’s concrete problems, truly helping them live with more dignity and less suffering.” (Francis, no. 112)

    In the above quotation, what Pope Francis is arguing for is the need for, to use Mumford’s terminology, “democratic technics” to replace “authoritarian technics.” Or, to use Ivan Illich’s terms (and Illich was himself a Catholic priest) the emergence of a “convivial society” centered around “convivial tools.” Granted, as is perhaps not particularly surprising for a call to action, Pope Francis tends to be rather optimistic about the prospects individuals have for limiting and directing technology. For, one of the great fears shared amongst numerous critics of technology was the belief that the concentration of power in “technique” or “the megamachine” or the “technological paradigm” gradually eliminated the freedom to limit or direct it. That potential alternatives emerged was clear, but such paths were quickly incorporated back into the “technological paradigm.” As Ellul observed:

    “To be in technical equilibrium, man cannot live by any but the technical reality, and he cannot escape from the social aspect of things which technique designs for him. And the more his needs are accounted for, the more he is integrated into the technical matrix.” (Ellul, 224)

    In other words, “technique” gradually eliminates the alternatives to itself. To live in a society shaped by such forces requires an individual to submit to those forces as well. What Laudato Si’ almost desperately seeks to claim, to the contrary, is that it is not too late, that people still have the ability “to limit and direct technology” provided they tear themselves away from their high-tech hallucinations. And this earnest belief is the hopeful core of the encyclical.

    Ethically impassioned books and articles decrying what a high consumption lifestyle wreaks upon the planet and which exhort people to think of those who do not share in the thrill of technological decadence are not difficult to come by. And thus, the aspects of Laudato Si’ which may be the most radical and the most striking are the sections devoted to technology. For what the encyclical does so impressively is expressly link environmental destruction and the neglect of the poor with the religious allegiance to high-tech devices. Numerous books and articles appear on a regular basis lamenting the current state of the technological world – and yet too often the authors of such texts seem terrified of being labeled “anti-technology.” Therefore, the authors tie themselves in knots trying to stake out a position that is not evangelizing for technology, but at the same time they refuse to become heretics to the religion of technology – and as a result they easily become the permitted voices of dissent who only seem to empower the evangels as they conduct the debate on the terms of technological society. They try to reform the religion of technology instead of recognizing that it is a faith premised upon worshiping a false god. After all, one is permitted to say that Google is getting too big, that the Apple Watch is unnecessary, and that the Internet should be called “the surveillance mall” – but to say:

    “There is a growing awareness that scientific and technological progress cannot be equated with the progress of humanity and history, a growing sense that the way to a better future lies elsewhere.” (Francis, no. 113)

    Well…one rarely hears such arguments today, precisely because the dominant ideology of our day places ample faith in equating “scientific and technological progress” with progress as such. Granted, that was the type of argument being made by the likes of Mumford and Ellul – though the present predicament makes it woefully evident that too few heeded their warnings. Indeed, a leitmotif that can be detected amongst the works of many critics of technology is a desire to be proved wrong. As Mumford wrote:

    “I would die happy if I knew that on my tombstone could be written these words, ‘This man was an absolute fool. None of the disastrous things that he reluctantly predicted ever came to pass!’ Yes: then I could die happy.” (Mumford, 528 – My Works and Days)

    Yet to read over Mumford’s predictions in the present day is to understand why those words are not carved into his tombstone – for Mumford was not an “absolute fool,” he was acutely prescient. Though, alas, the likes of Mumford and Ellul too easily number amongst the ranks of “the great sages of the past” who, in Pope Francis’s words, “run the risk of going unheard amid the noise and distractions of an information overload.”

    Despite the issues that various individuals will certainly have with Laudato Si’ – ranging from its stance towards women to its religious tonality – the element that is likely to disquiet the largest group is its serious critique of technology. Thus, it is somewhat amusing to consider the number of articles that have been penned about the encyclical which focus on the warnings about climate change but say little about Pope Francis’s comments about the danger of the “technological paradigm.” For the encyclical commits a profound act of heresy against the contemporary religion of technology – it dares to suggest that we have fallen for the PR spin about the devices in our pockets, it asks us to consider whether these devices are truly filling an existential void or simply distracting us from having to think about this absence, and it reminds us that we need not be passive consumers of technology. These arguments about technology are not new, and it is not new to make them in ethically rich or religiously loaded language; however, these are arguments which are verboten in contemporary discourse about technology. Alas, those who make such claims are regularly derided as “Luddites” or “NIMBYs” and banished to the fringes. And yet the historic Luddites were simply workers who felt they had the freedom “to limit and direct technology,” and as anybody who knows about e-waste can attest, when people in affluent nations say “Not In My Back Yard” the toxic refuse simply winds up in somebody else’s back yard. Pope Francis writes that today:

    “It has become countercultural to choose a lifestyle whose goals are even partly independent of technology, of its costs and its power to globalize and make us all the same.” (Francis, no. 108)

    And yet, what Laudato Si’ may represent is an important turning point in discussions around technology, and a vital opportunity for a serious critique of technology to reemerge. For what Laudato Si’ does is advocate for a new cultural paradigm based upon harnessing technology as a tool instead of as an absolute. Furthermore, the inclusion of such a serious critique of technology in a widely discussed (and hopefully widely read) encyclical represents a point at which rigorously critiquing technology may be able to become less “countercultural.” Laudato Si’ is a profoundly pro-culture document insofar as it seeks to preserve human culture from being destroyed by the greed that is ruining the planet. It is a rare text that has the audacity to state: “you do not need that, and your desire for it is bad for you and bad for the planet.”

    Laudato Si’ is a piece of fierce social criticism, and like numerous works from the critique of technology, it is a text that recognizes that one cannot truly claim to critique a society without being willing to turn an equally critical gaze towards the way that society creates and uses technology. The critique of technology is not new, but it has been sorely underrepresented in contemporary thinking around technology. It has been cast as the province of outdated doom mongers, but as Pope Francis demonstrates, the critique of technology remains as vital and timely as ever.

    Too often of late discussions about technology are conducted through rose-colored glasses, or worse, virtual reality headsets – Laudato Si’ dares to actually look at technology.

    And to demand that others do the same.

    4. The Bright Mountain

    The end of the world is easy.

    All it requires of us is that we do nothing, and what can be simpler than doing nothing? Besides, popular culture has made us quite comfortable with the imagery of dystopian states and collapsing cities. And yet the question to ask of every piece of dystopian fiction is “what did the world that paved the way for this terrible one look like?” To which the follow-up question should be: “did it look just like ours?” And to this, yet another follow-up question needs to be asked: “why didn’t people do something?” In a book bearing the uplifting title The Collapse of Western Civilization, Naomi Oreskes and Erik Conway analyze present inaction as if from the future, and write:

    “the people of Western civilization knew what was happening to them but were unable to stop it. Indeed, the most startling aspect of this story is just how much these people knew, and how unable they were to act upon what they knew.” (Oreskes and Conway, 1-2)

    This speaks to the fatalistic belief that despite what we know, things are not going to change, or that if change comes it will already be too late. One of the most interesting texts to emerge in recent years in the context of continually ignored environmental warnings is a slim volume titled Uncivilisation: The Dark Mountain Manifesto. It is the foundational text of a group of writers, artists, activists, and others that dares to take seriously the notion that we are not going to change in time. As the manifesto’s authors write:

    “Secretly, we all think we are doomed: even the politicians think this; even the environmentalists. Some of us deal with it by going shopping. Some deal with it by hoping it is true. Some give up in despair. Some work frantically to try and fend off the coming storm.” (Hine and Kingsnorth, 9)

    But the point is that change is coming – whether we believe it or not, and whether we want it or not. But what is one to do? The desire to retreat from the cacophony of modern society is nothing new and can easily sow the fields in which reactionary ideologies can grow. Particularly problematic is that the rejection of the modern world often entails a sleight of hand whereby those in affluent nations are able to shirk their responsibility to the world’s poor even as they walk somberly, flagellating themselves into the foothills. Apocalyptic romanticism, whether it be of the accelerationist or primitivist variety, paints an evocative image of the world of today collapsing so that a new world can emerge – but what Laudato Si’ counters with is a morally impassioned cry to think of the billions of people who will suffer and die. Think of those for whom fleeing to the foothills is not an option. We do not need to take up residence in the woods like latter day hermetic acolytes of Francis of Assisi, rather we need to take that spirit and live it wherever we find ourselves.

    True, the easy retort to the claim “secretly, we all think we are doomed” is to reply “I do not think we are doomed, secretly or openly” – but to read climatologists’ predictions and then to watch politicians grouse, whilst mining companies seek to extract even more fossil fuels, is to hear that “secret” voice grow louder. People have always been predicting the end of the world, and here we still are, which leads many to simply shrug off dire concerns. Furthermore, many worry that putting too much emphasis on woebegone premonitions overwhelms people and leaves them unable and unwilling to act. Perhaps this is why Al Gore’s film An Inconvenient Truth concludes not by telling people they must be willing to fundamentally alter their high-tech/high-consumption lifestyles but instead simply tells them to recycle. In Laudato Si’ Pope Francis writes:

    “Doomsday predictions can no longer be met with irony or disdain. We may well be leaving to coming generations debris, desolation and filth.” (Francis, no. 161)

    Those lines, particularly the first of the two, should be the twenty-first century replacement for “Keep Calm and Carry On.” For what Laudato Si’ makes clear is that now is not the time to “Keep Calm” but to get very busy, and it is a text that knows that if we “Carry On” then we are skipping aimlessly towards the cliff’s edge. And yet one of the elements of the encyclical that needs to be highlighted is that it is a document that does not look hopefully towards a coming apocalypse. In the encyclical, environmental collapse is not seen as evidence that biblical preconditions for Armageddon are being fulfilled. The sorry state of the planet is not the result of God’s plan but is instead the result of humanity’s inability to plan. The problem is not evil, for as Simone Weil wrote:

    “It is not good which evil violates, for good is inviolate: only a degraded good can be violated.” (Weil, 70 – Gravity and Grace)

    It is that the good of which people are capable is rarely the good which people achieve. Even as possible tools for building the good life – such as technology – are degraded and mistaken for the good life. And thus the good is wasted, though it has not been destroyed.

    Throughout Laudato Si’, Pope Francis praises the merits of an ascetic life. And though the encyclical features numerous references to Saint Francis of Assisi, the argument is not that we must all abandon our homes to seek out new sanctuary in nature; instead, the need is to learn from the sense of love and wonder with which Saint Francis approached nature. Complete withdrawal is not an option; to do so would be to shirk our responsibility – we live in this world and we bear responsibility for it and for other people. In the encyclical’s estimation, those living in affluent nations cannot seek to quietly slip from the scene, nor can they claim they are doing enough by bringing their own bags to the grocery store. Rather, responsibility entails recognizing that the lifestyles of affluent nations have helped sow misery in many parts of the world – it is unethical for us to try to save our own cities without realizing the part we have played in ruining the cities of others.

    Pope Francis writes – and here an entire section shall be quoted:

    “Many things have to change course, but it is we human beings above all who need to change. We lack an awareness of our common origin, of our mutual belonging, and of a future to be shared with everyone. This basic awareness would enable the development of new conviction, attitudes and forms of life. A great cultural, spiritual and educational challenge stands before us, and it will demand that we set out on the long path of renewal.” (Francis, no. 202)

    Laudato Si’ does not suggest that we can escape from our problems, that we can withdraw, or that we can “keep calm and carry on.” And though the encyclical is not a manifesto, if it were one it could possibly be called “The Bright Mountain Manifesto.” For what Laudato Si’ reminds its readers time and time again is that even though we face great challenges it remains within our power to address them, though we must act soon and decisively if we are to effect a change. We do not need to wander towards a mystery shrouded mountain in the distance, but work to make the peaks near us glisten – it is not a matter of retreating from the world but of rebuilding it in a way that provides for all. Nobody needs to go hungry, our cities can be beautiful, our lifestyles can be fulfilling, our tools can be made to serve us as opposed to our being made to serve tools, people can recognize the immense debt they owe to each other – and working together we can make this a better world.

    Doing so will be difficult. It will require significant changes.

    But Laudato Si’ is a document that believes people can still accomplish this.

    In the end Laudato Si’ is less about having faith in god, than it is about having faith in people.

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, on which an earlier version of this post first appeared. He is a frequent contributor to The b2 Review Digital Studies section.

    _____

    Works Cited

    Pope Francis. Encyclical Letter Laudato Si’ of the Holy Father Francis on Care For Our Common Home. Vatican Press, 2015. [Note – the numbers in all citations from this document refer to the section number, not the page number]

    Ellul, Jacques. The Technological Society. Vintage Books, 1964.

    Fromm, Erich. To Have Or To Be? Harper & Row, 1976.

    Hine, Dougald and Kingsnorth, Paul. Uncivilisation: The Dark Mountain Manifesto. The Dark Mountain Project, 2013.

    Mumford, Lewis. My Works and Days: A Personal Chronicle. Harcourt, Brace, Jovanovich, 1979.

    Mumford, Lewis. Art and Technics. Columbia University Press, 2000.

    Mumford, Lewis. Technics and Civilization. University of Chicago Press, 2010.

    Noble, David. The Religion of Technology. Penguin, 1999.

    Oreskes, Naomi and Conway, Erik M. The Collapse of Western Civilization: A View from the Future. Columbia University Press, 2014.

    Weil, Simone. The Need for Roots. Routledge Classics, 2002.

    Weil, Simone. Gravity and Grace. Routledge Classics, 2002. (the quote at the beginning of this piece is found on page 139 of this book)

  • The Digital Turn


    David Golumbia and The b2 Review look to digital culture

    ~
    I am pleased and honored to have been asked by the editors of boundary 2 to inaugurate a new section on digital culture for The b2 Review.

    The editors asked me to write a couple of sentences for the print journal to indicate the direction the new section will take, which I’ve included here:

    In the new section of the b2 Review, we’ll be bringing the same level of critical intelligence and insight—and some of the same voices—to the study of digital culture that boundary 2 has long brought to other areas of literary and cultural studies. Our main focus will be on scholarly books about digital technology and culture, but we will also branch out to articles, legal proceedings, videos, social media, digital humanities projects, and other emerging digital forms.

    While some might think it late in the day for boundary 2 to be joining the game of digital cultural criticism, I take the time lag between the moment at which thoroughgoing digitization became an unavoidable reality (sometime during the 1990s) and the moment at which the first of the major literary studies journals dedicated part of itself to digital culture as indicative of a welcome and necessary caution with regard to the breathless enthusiasm of digital utopianism. As humanists our primary intellectual commitment is to the deeply embedded texts, figures, and themes that constitute human culture, and precisely the intensity and thoroughgoing nature of the putative digital revolution must give somebody pause—and if not humanists, who?

    Today, the most overt mark of the digital in humanities scholarship goes by the name Digital Humanities, but it remains notable how little interaction there is between the rest of literary studies and the work that comes under the DH rubric. That lack of interaction goes in both directions: DH scholars rarely cite or engage directly with the work the rest of us do, and the rest of literary studies rarely cites DH work, especially when DH is taken in its “narrow” or most heavily quantitative form. The enterprises seem, at times, to be entirely at odds, and the rhetoric of the digital enthusiasts who populate DH does little to forestall this impression. Indeed, my own membership in the field of DH has long been a vexed question, despite my being one of the first English professors in the country to be hired to a position for which the primary specialization was explicitly indicated as Digital Humanities (at the University of Virginia in 2003), and despite being a humanist whose primary area is “digital studies.” The inability of scholars “to be” or “not to be” members of a field in which they work is one of the several ways that DH does not resemble other developments in the always-changing world of literary studies.


    Earlier this month, along with my colleague Jennifer Rhee, I organized a symposium called Critical Approaches to Digital Humanities sponsored by the MATX PhD program at Virginia Commonwealth University, where Prof. Rhee and I teach in the English Department. One of the conference participants, Fiona Barnett of Duke and HASTAC, prepared a Storify version of the Twitter activity at the symposium that provides some sense of the proceedings. While it followed on the heels of, and was continuous with, panels such as the ‘Dark Side of the Digital Humanities’ at the 2013 MLA Annual Convention, and several at recent American Studies Association Conventions, among others, this was to our knowledge the first standalone DH event that resembled other humanities conferences as they are conducted today. Issues of race, class, gender, sexuality, and ability were central; cultural representation and its relation (or lack of relation) to identity politics was of primary concern; close reading of texts both likely and unlikely figured prominently; the presenters were diverse along several different axes. This arose not out of deliberate planning so much as organically from the speakers whose work spoke to the questions we wanted to raise.

    I mention the symposium to draw attention to what I think it represents, and what the launching of a digital culture section by boundary 2 also represents: the considered turning of the great ship of humanistic study toward the digital. For too long enthusiasts alone have been able to stake out this territory and claim special and even exclusive insight with regard to the digital, following typical “hacker” or cyberlibertarian assertions about the irrelevance of any work that does not proceed directly out of knowledge of the computer. That such claims could even be taken seriously has, I think, produced a kind of stunned silence on the part of many humanists, because they are both so confrontational and so antithetical to the remit of the literary humanities from comparative philology to the New Criticism to deconstruction, feminism, and queer theory. That the core of the literary humanities as represented by so august an institution as boundary 2 should turn its attention there both validates digital enthusiasts’ sense of the medium’s importance and should also provoke them toward a responsibility to the project and history of the humanities that, so far, many of them have treated with a disregard that at times might be characterized as cavalier.

    -David Golumbia

    Browse All Digital Studies Reviews