The b2o Review is a non-peer-reviewed publication, published and edited by the boundary 2 editorial collective and specific topic editors, featuring book reviews, interventions, videos, and collaborative projects.

  • Program and Be Programmed

    a review of Wendy Chun, Programmed Visions: Software and Memory (MIT Press, 2013)
    by Zachary Loeb
    ~

    Type a letter on a keyboard and the letter appears on the screen, double-click on a program’s icon and it opens, use the mouse in an art program to draw a line and it appears. Yet knowing how to make a program work is not the same as knowing how or why it works. Even a level of skill approaching mastery of a complicated program does not necessarily mean that the user understands how the software works at a programmatic level. This is captured in the canonical distinctions between users and “power users,” on the one hand, and between users and programmers on the other. Whether being a power user or being a programmer gives one meaningful power over machines themselves should be a more open question than injunctions like Douglas Rushkoff’s “program or be programmed” or the general opinion that every child must learn to code appear to allow.

    Sophisticated computer programs give users a fantastical set of abilities and possibilities. But to what extent does this sense of empowerment depend on faith in the unseen and even unknown codes at work in a given program? We press a key on a keyboard and a letter appears on the screen—but do we really know why? These are some of the questions that Wendy Hui Kyong Chun poses in Programmed Visions: Software and Memory, which provides a useful history of early computing alongside a careful analysis of the ways in which computers are used—and use their users—today. Central to Chun’s analysis is her insistence “that a rigorous engagement with software makes new media studies more, rather than less, vapory” (21), and her book succeeds admirably in this regard.

    The central point of Chun’s argument is that computers (and media in general) rely upon a notion of programmability that has become part of the underlying societal logic of neoliberal capitalism. In a society where computers are tied ever more closely to power, Chun argues that canny manipulation of software restores a sense of control or sovereignty to individual users, even as their very reliance upon this software constitutes a type of disempowerment. Computers are the driving force and grounding metaphor behind an ideology that seeks to determine the future—a future that “can be bought and sold” and which “depends on programmable visions that extrapolate the future—or more precisely, a future—based on the past” (9).

    Yet one of the pleasures of contemporary computer use is that one need not fully understand much of what is going on to be able to enjoy the benefits of the computer. Though we may use computer technology to answer critical questions, this does not necessarily mean we are asking critical questions about computer technology. As Chun explains, echoing Michel Foucault, “software, free or not, is embodied and participates in structures of knowledge-power” (21); users become tangled in these structures once they start using a given device or program. Much of this “knowledge-power” is bound up in the layers of code that make software function; the code is what gives the machine its directions—what ensures that tapping the letter “r” on the keyboard leads to that letter appearing on the screen. Nevertheless, this code typically goes unseen, especially as it becomes source code, and winds up being buried ever deeper, even though this source code is what “embodies the power of the executive, the power of enforcement” (27). Importantly, the ability to write code, the programmer’s skill, does not in and of itself provide systematic power: computers follow “a set of rules that programmers must follow” (28). A sense of power over certain aspects of a computer is still contingent upon submitting to the control of other elements of the computer.

    Contemporary computers, and our many computer-esque devices (such as smart phones and tablets), are the primary sites in which most of us encounter the codes and programming about which Chun writes, but she goes to some lengths to introduce the reader to the history of programming. For it is against the historical backdrop of military research during the Second World War that one can clearly see the ways in which notions of control, the unquestioning following of orders, and hierarchies have long been at work within computation and programming. Beyond providing an enlightening aside on the vital role that women played in programming history, analyzing the early history of computing demonstrates how structured programming emerged as a means of cutting down on repetitive work, an approach that “limits the logical procedures coders can use, and insists that the program consist of small modular units, which can be called from the main program” (36). Gradually this emphasis on structured programming allows for more and more processes to be left to the machine, and thus processes and codes become hidden from view even as future programmers are taught to conform to the demands that will allow new programs to successfully make use of these early ones. The processes that were once the result of expertise thus come to be assumed aspects of the software—they become automated—and it is this very automation (“automatic programming”) that “allows the production of computer-enabled human-readable code” (41).
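    To make the idea of structured programming concrete, here is a minimal sketch of my own (not an example from Chun's book): a "main program" that only orchestrates small, named modular units, each of which hides the details of the layer beneath it, roughly the layering that lets a keystroke become a letter on the screen without the user ever seeing the intermediate steps.

    ```python
    # Illustrative sketch only (not from Programmed Visions): structured
    # programming breaks a task into small modular units called from a
    # main routine, so each layer conceals the one beneath it.

    def read_keystroke() -> str:
        """Stand in for capturing a key press; here it is simply hard-coded."""
        return "r"

    def encode_character(key: str) -> int:
        """Translate the key into the numeric code the machine manipulates."""
        return ord(key)

    def render_character(code: int) -> None:
        """Turn the numeric code back into a visible glyph on the 'screen'."""
        print(chr(code))

    def main() -> None:
        # The "main program" only calls the modular units in order;
        # the user of the finished software sees none of this.
        key = read_keystroke()
        code = encode_character(key)
        render_character(code)

    if __name__ == "__main__":
        main()
    ```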

    As the codes and programs become hidden by ever more layers of abstraction, the computer simultaneously and paradoxically appears to make more of itself visible (through graphical user interfaces, for example), while the code itself recedes ever further into the background. This transition is central to the computer’s rapid expansion into ever more societal spheres, and it is an expansion that Chun links to the influence of neoliberal ideology. The computer with its easy-to-use interfaces creates users who feel as though they are free and empowered to manipulate the machine even as they rely on the codes and programs that they do not see. Freedom to act becomes couched in code that predetermines the range and type of actions that the users are actually free to take. What transpires, as Chun writes, is that “interfaces and operating systems produce ‘users’—one and all” (67).

    Without fully comprehending the codes that lead from a given action (a user presses a button) to a given result, the user is positioned to believe ever more in the power of the software/hardware hybrid, especially as increased storage capabilities allow computers to access vast informational troves. In the process, the technologically empowered user is conditioned to expect a programmable world akin to the programmed devices used to navigate that world—an experience that has “fostered our belief in the world as neoliberal: as an economic game that follows certain rules” (92). And this takes place whether or not we understand who wrote those rules, or how they can be altered.

    This logic of programmability may be linked to inorganic machines, but Chun also demonstrates the ways in which it has been applied to the organic world. In truth, the idea that the organic can be programmed predates the computer; as Chun explains, “breeding encapsulates an early logic of programmability… Eugenics, in other words, was not simply a factor driving the development of high-speed mass calculation at the level of content… but also at the level of operationality” (124). In considering the idea that the organic can be programmed, what emerges is a sense of the way that programming has long been associated with a certain will to exert control over things, be they organic or inorganic. Far from being a digression, Chun’s discussion of eugenics provides a fascinating historical comparison, given the way in which its decline in acceptance seems to dovetail with the steady ascendance of the programmable machine.

    The intersection of software and memory (or “software as memory”) is an essential matter to consider given the informational explosion that has occurred with the spread of computers. Yet, as Chun writes eloquently: “information is ‘undead’; neither alive nor dead, neither quite present nor absent” (134), since computers simultaneously promise to make ever more information available while making the future of much of this information precarious (insofar as access may rely upon software and hardware that no longer functions). Chun elucidates the ways in which the shift from analog to digital has permitted a greater number of users to enjoy the benefits of computers even as it has made much that goes on inside a computer (software and hardware) less transparent. While the machine’s memory may seem ephemeral and (to humans) illegible, accessing information in “storage” involves codes that read by re-writing elsewhere. This “battle of diligence between the passing and the repetitive” characterizing machine memory, Chun argues, “also characterizes content today” (170). Users rely upon a belief that the information they seek will be available and that they will be able to call upon it with a few simple actions, even though they do not see (and usually cannot see) the processes that make this information present and which do or do not allow it to be presented.

    When people make use of computers today they find themselves looking—quite literally—at what the software presents to them, yet in allowing this act of seeing the programming also has determined much of what the user does not see. Programmed Visions is an argument for recognizing that sometimes the power structures that most shape our lives go unseen—even if we are staring right at them.

    * * *

    With Programmed Visions, Chun has crafted a nuanced, insightful, and dense, if highly readable, contribution to discussions about technology, media, and the digital humanities. It is a book that demonstrates Chun’s impressive command of a variety of topics and the way in which she can engagingly shift from history to philosophy to explanations of a more technical sort. Throughout the book Chun deftly draws upon a range of classic and contemporary thinkers, whilst raising and framing new questions and lines of inquiry even as she seeks to provide answers on many other topics.

    Though peppered with many wonderful turns of phrase, Programmed Visions remains a challenging book. While all readers of Programmed Visions will come to it with their own background and knowledge of coding, programming, software, and so forth, the simple truth is that Chun’s point (that many people do not understand software sufficiently) may leave many a reader feeling somewhat taken aback. For most computer users—even many programmers and many whose research involves the study of technology and media—are quite complicit in the situation that Chun describes. It is the sort of discomforting confrontation that is valuable precisely because of the anxiety it provokes. Most users take for granted that the software will work the way they expect it to—hence the frustration bordering on fury that many people experience when the machine suddenly does something other than what is expected, provoking a maddened outburst of “why aren’t you working!” What Chun helps demonstrate is that it is not so much that the machines betray us, but that we were mistaken in thinking that machines ever really obeyed us.

    It will be easy for many readers to see themselves as the user that Chun describes—as someone positioned to feel empowered by the devices they use, even as that power depends upon faith in forces the user cannot see, understand, or control. Even power users and programmers, on careful self-reflection, may identify with Chun’s relocation of the programmer from a position of authority to a role in which they too must comply with the strictures of the code, a relocation that presents an important argument for considerations of such labor. Furthermore, the way in which Chun links the power of the machine to the overarching ideology of neoliberalism makes her argument useful for discussions broader than those in media studies and the digital humanities. What makes these arguments particularly interesting is the way in which Chun locates them within thinking about software. As she writes towards the end of the second chapter, “this chapter is not a call to return to an age when one could see and comprehend the actions of our computers. Those days are long gone… Neither is this chapter an indictment of software or programming… It is, however, an argument against common-sense notions of software precisely because of their status as common sense” (92). Such a statement refuses to provide the anxious reader (who has come to see themselves as an uninformed user) with a clear answer, for it suggests that the “common-sense” clear answer is part of what has disempowered them.

    The weaving of historical details regarding computers during World War II and eugenics provides an excellent and challenging backdrop against which Chun’s arguments regarding programmability can grow. Chun lucidly describes how the embodiment and materiality of information, and its obsolescence, pose major challenges for those who seek to manage and understand the massive informational flux that computer technology has enabled. The idea of information as “undead” is both amusing and evocative, as it provides a rich way of describing the “there but not there” quality of information, while simultaneously playing upon the slight horror and uneasiness that seems to lurk below the surface of our confrontation with information.

    As Chun sets herself the difficult task of exploring many areas, there are some topics where the reader may be left wanting more. The section on eugenics presents a troubling and fascinating argument—one which could likely have been a book in and of itself—especially when considered in the context of arguments about cyborg selves and post-humanity, and it is a section that almost seems to have been cut short. Likewise the discussion of race (“a thread that has been largely invisible yet central,” 179), which is brought to the fore in the epilogue, confronts the reader with something that seems as though it could be the introduction to another book. It leaves the reader with much to contemplate—though it is the fact that this thread was not truly “largely invisible” that makes the reader, upon reaching the epilogue, wish that the book had dealt with the matter at greater length. Yet these are fairly minor concerns—that Programmed Visions leaves its readers re-reading sections to process them in light of later points is a credit to the text.

    Programmed Visions: Software and Memory is an alternately troubling, enlightening, and fascinating book. It allows its reader to look at software and hardware in a new way, with a fresh insight about this act of sight. It is a book that plants a question (or perhaps subtly programs one into the reader’s mind): what are you not seeing, what power relations remain invisible, between the moment during which the “?” is hit on the keyboard and the moment it appears on the screen?


    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian” Loeb writes at the blog librarianshipwreck. He has previously reviewed The People’s Platform by Astra Taylor and Social Media: A Critical Introduction by Christian Fuchs for boundary2.org.


  • Who Big Data Thinks We Are (When It Thinks We're Not Looking)

    a review of Christian Rudder, Dataclysm: Who We Are (When We Think No One’s Looking) (Crown, 2014)
    by Cathy O’Neil
    ~
    Here’s what I’ve spent the last couple of days doing: alternately reading Christian Rudder’s new book Dataclysm and proofreading a report by AAPOR which discusses the benefits, dangers, and ethics of using big data, which is mostly “found” data originally meant for some other purpose, as a replacement for public surveys, with their carefully constructed data collection processes and informed consent. The AAPOR folks have asked me to provide tangible examples of the dangers of using big data to infer things about public opinion, and I am tempted to simply ask them all to read Dataclysm as exhibit A.

    Rudder is a co-founder of OKCupid, an online dating site. His book mainly pertains to how people search for love and sex online, and how they represent themselves in their profiles.

    Here’s something that I will mention as context for his data explorations: Rudder likes to crudely provoke, as he displayed when he wrote this recent post explaining how OKCupid experiments on users. He enjoys playing the part of the somewhat creepy detective, peering into what OKCupid users thought was a somewhat private place to prepare themselves for the dating world. It’s the online equivalent of a video camera in a changing booth at a department store, which he defended not-so-subtly on a recent NPR show called On The Media, and which was written up here.

    I won’t dwell on that aspect of the story because I think it’s a good and timely conversation, and I’m glad the public is finally waking up to what I’ve known for years is going on. I’m actually happy Rudder is so nonchalant about it because there’s no pretense.

    Even so, I’m less happy with his actual data work. Let me tell you why I say that with a few examples.

    Who Are OKCupid Users?

    I spent a lot of time with my students this summer saying that a standalone number wouldn’t be interesting, that you have to compare that number to some baseline that people can understand. So if I told you how many black kids have been stopped and frisked this year in NYC, I’d also need to tell you how many black kids live in NYC for you to get an idea of the scope of the issue. It’s a basic fact about data analysis and reporting.

    When you’re dealing with populations on dating sites and you want to conclude things about the larger culture, the relevant “baseline comparison” is how well the members of the dating site represent the population as a whole. Rudder doesn’t do this. Instead, for the first few chapters he just says there are lots of OKCupid users; then later on, after he’s made a few spectacularly broad statements, on page 104 he compares OKCupid users to the wider population of internet users, but not to the general population.

    It’s an inappropriate baseline, made too late. Because I’m not sure about you but I don’t have a keen sense of the population of internet users. I’m pretty sure very young kids and old people are not well represented, but that’s about it. My students would have known to compare a population to the census. It needs to happen.

    How Do You Collect Your Data?

    Let me back up to the very beginning of the book, where Rudder startles us by showing us that the men that women rate “most attractive” are about their age whereas the women that men rate “most attractive” are consistently 20 years old, no matter how old the men are.

    Actually, I am projecting. Rudder never tells us specifically what the rating is, how exactly it’s worded, or how the profiles are presented to the different groups. And that’s a problem, which he ignores completely until much later in the book, when he mentions that how survey questions are worded can have a profound effect on how people respond, but his target there is someone else’s survey, not his own OKCupid environment.

    Words matter, and they matter differently for men and women. So for example, if there were a button for “eye candy,” we might expect women to choose more young men. If my guess is correct, and the term in use is “most attractive”, then for men it might well trigger a sexual concept whereas for women it might trigger a different social construct; indeed I would assume it does.

    Since this isn’t a porn site but a dating site, we are not filtering for purely visual appeal; we are looking for relationships. We are thinking beyond what turns us on physically and asking ourselves, who would we want to spend time with? Who would our family like us to be with? Who would make us attractive to ourselves? Those are different questions and provoke different answers. And they are culturally interesting questions, which Rudder never explores. A lost opportunity.

    Next, how does the recommendation engine work? I can well imagine that, once you’ve rated Profile A high, there is an algorithm that finds Profile B such that “people who liked Profile A also liked Profile B”. If so, then there’s yet another reason to worry that such results as Rudder described are produced in part as a result of the feedback loop engendered by the recommendation engine. But he doesn’t explain how his data is collected, how it is prompted, or the exact words that are used.
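    To make the worry concrete, here is a small sketch of my own (purely hypothetical; Rudder does not describe OKCupid’s actual algorithm) of a “people who liked Profile A also liked Profile B” recommender, which shows how early ratings feed back into what is shown next, and therefore into the data later read off as “preference.”

    ```python
    from collections import defaultdict
    from itertools import combinations

    # Hypothetical sketch of a co-occurrence ("people who liked A also
    # liked B") recommender; each history is the set of profiles one
    # user rated highly.
    histories = [
        {"A", "B"},
        {"A", "B", "C"},
        {"B", "C"},
        {"A", "D"},
    ]

    # Count how often two profiles are liked by the same user.
    co_likes = defaultdict(int)
    for liked in histories:
        for x, y in combinations(sorted(liked), 2):
            co_likes[(x, y)] += 1
            co_likes[(y, x)] += 1

    def recommend(profile: str, k: int = 2) -> list:
        """Return the k profiles most often co-liked with `profile`."""
        scores = {y: n for (x, y), n in co_likes.items() if x == profile}
        return sorted(scores, key=scores.get, reverse=True)[:k]

    # A new user who likes "A" is steered toward "B" first, so "B"
    # accumulates still more high ratings: a feedback loop in which what
    # gets shown shapes the very data later analyzed as preference.
    print(recommend("A"))  # ['B', 'C'] with this toy data
    ```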

    Here’s a clue that Rudder is confused by his own facile interpretations: men and women both state that they are looking for relationships with people around their own age or slightly younger, and they end up messaging people slightly younger than they are, but not many, many years younger. So forty-year-old men do not message twenty-year-old women.

    Is this sad sexual frustration? Is this, in Rudder’s words, the difference between what they claim they want and what they really want behind closed doors? Not at all. This is more likely the difference between how we live our fantasies and how we actually realistically see our future.

    Need to Control for Population

    Here’s another frustrating bit from the book: Rudder talks about how hard it is for older people to get a date but he doesn’t correct for population. And since he never tells us how many OKCupid users are older, nor does he compare his users to the census, I cannot infer this.

    Here’s a graph from Rudder’s book showing the age of men who respond to women’s profiles of various ages:

    dataclysm chart 1

    We’re meant to be impressed with Rudder’s line, “for every 100 men interested in that twenty year old, there are only 9 looking for someone thirty years older.” But here’s the thing, maybe there are 20 times as many 20-year-olds as there are 50-year-olds on the site? In which case, yay for the 50-year-old chicks? After all, those histograms look pretty healthy in shape, and they might be differently sized because the population size itself is drastically different for different ages.
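    The baseline point is easy to see with made-up numbers (mine, not Rudder’s): divide the raw message counts by the number of users of each age before comparing them.

    ```python
    # Hypothetical numbers illustrating why raw counts need a population
    # baseline before you compare groups.
    messages = {"20-year-olds": 2000, "50-year-olds": 180}      # messages received
    population = {"20-year-olds": 10000, "50-year-olds": 500}   # women on the site

    for group in messages:
        per_capita = messages[group] / population[group]
        print(f"{group}: {messages[group]} messages, "
              f"{per_capita:.2f} per woman on the site")

    # Raw counts suggest an 11-to-1 gap in interest; per capita, the
    # 50-year-olds actually receive more interest per person (0.36 vs. 0.20).
    ```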

    Confounding

    One of the worst examples of statistical mistakes is his treatment of the experiment in turning off pictures. Rudder ignores the concept of confounders altogether, even though he is miraculously aware of it again in the next chapter, on race.

    To be more precise, Rudder talks about the experiment when OKCupid turned off pictures. Most people went away when this happened but certain people did not:

    dataclysm chart 2

    Some of the people who stayed on went on a “blind date.” Those people, whom Rudder calls the “intrepid few,” had a good time with their dates no matter how unattractive those dates were deemed to be by OKCupid’s system of rating attractiveness. His conclusion: people are preselecting for attractiveness, which is actually unimportant to them.

    But here’s the thing, that’s only true for people who were willing to go on blind dates. What he’s done is select for people who are not superficial about looks, and then collect data that suggests they are not superficial about looks. That doesn’t mean that OKCupid users as a whole are not superficial about looks. The ones that are just got the hell out when the pictures went dark.
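    A quick simulation (again, hypothetical numbers of my own, not Rudder’s data) shows how this selection effect can manufacture the finding: if the users who care most about looks are the ones who leave when pictures go dark, the survivors will look indifferent to looks no matter what the full user base feels.

    ```python
    import random

    random.seed(0)

    # Hypothetical population: 70% of users care a great deal about looks.
    users = [random.random() < 0.7 for _ in range(100_000)]  # True = superficial

    # When pictures go dark, superficial users mostly leave the site.
    stayers = [s for s in users if random.random() < (0.05 if s else 0.60)]

    def looks_effect(sample):
        """Gap in enjoyment rates between dates with 'attractive' and
        'unattractive' partners for a given sample of users."""
        enjoyed = {True: [], False: []}
        for superficial in sample:
            partner_attractive = random.random() < 0.5
            if superficial:
                outcome = partner_attractive           # looks drive enjoyment
            else:
                outcome = random.random() < 0.8        # looks barely matter
            enjoyed[partner_attractive].append(outcome)
        return (sum(enjoyed[True]) / len(enjoyed[True])
                - sum(enjoyed[False]) / len(enjoyed[False]))

    print(f"looks effect, all users:    {looks_effect(users):.2f}")
    print(f"looks effect, blind-daters: {looks_effect(stayers):.2f}")
    # The blind-daters appear nearly indifferent to looks only because the
    # users who care most already left when the pictures disappeared.
    ```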

    Race

    This brings me to the most interesting part of the book, where Rudder explores race. Again, it ends up being too blunt by far.

    Here’s the thing. Race is a big deal in this country, and racism is a heavy criticism to be firing at people, so you need to be careful, and that’s a good thing, because it’s important. The way Rudder throws it around is careless, and he risks rendering the term meaningless by not having a careful discussion. The frustrating part is that I think he actually has the data to have a very good discussion, but he just doesn’t make the case the way it’s written.

    Rudder pulls together stats on how men of all races rate women of all races on an attractiveness scale of 1-5. It shows that non-black men find their own race attractive and non-black men find black women, in general, less attractive. Interesting, especially when you immediately follow that up with similar stats from other U.S. dating sites and – most importantly – with the fact that outside the U.S., we do not see this pattern. Unfortunately that crucial fact is buried at the end of the chapter, and instead we get this embarrassing quote right after the opening stats:

    And an unintentionally hilarious 84 percent of users answered this match question:

    Would you consider dating someone who has vocalized a strong negative bias toward a certain race of people?

    in the absolute negative (choosing “No” over “Yes” and “It depends”). In light of the previous data, that means 84 percent of people on OKCupid would not consider dating someone on OKCupid.

    Here Rudder just completely loses me. Am I “vocalizing” a strong negative bias towards black women if I am a white man who finds white women and Asian women hot?

    Especially if you consider that, as consumers of social platforms and sites like OKCupid, we are trained to rank all the products we come across in order to ultimately get better offerings, it is a step too far for the detective on the other side of the camera to turn around and point fingers at us for doing what we’re told. Indeed, this sentence plunges Rudder’s narrative deep into creepy and provocative territory, and he never fully returns, nor does he seem to want to. Rudder seems to confuse provocation for thoughtfulness.

    This is, again, a shame. The issues of what we are attracted to, what we can imagine doing, how we might imagine that will look to our wider audience, and how our culture informs those imaginings are all in play here, and a careful conversation could have drawn them out in a non-accusatory and much more useful way.


    _____

    Cathy O’Neil is a data scientist and mathematician with experience in academia and the online ad and finance industries. She is one of the most prominent and outspoken women working in data science today, and was one of the guiding voices behind Occupy Finance, a book produced by the Occupy Wall Street Alt Banking group. She is the author of “On Being a Data Skeptic” (Amazon Kindle, 2013), and co-author with Rachel Schutt of Doing Data Science: Straight Talk from the Frontline (O’Reilly, 2013). Her Weapons of Math Destruction is forthcoming from Random House. She appears on the weekly Slate Money podcast hosted by Felix Salmon. She maintains the widely-read mathbabe blog, on which this review first appeared.


  • Frank Pasquale — Capital’s Offense: Law’s Entrenchment of Inequality (On Piketty, “Capital in the 21st Century”)

    a review of Thomas Piketty, Capital in the Twenty-First Century (Harvard University Press, 2014)

    by Frank Pasquale

    ~

    Thomas Piketty’s Capital in the Twenty-First Century has succeeded both commercially and as a work of scholarship. Capital‘s empirical research is widely praised among economists—even by those who disagree with its policy prescriptions.  It is also the best-selling book in the century-long history of Harvard University Press, and a rare work of scholarship to reach the top spot on Amazon sales rankings.[1]

    Capital‘s main methodological contribution is to bring economic, sociological, and even literary perspectives to bear in a work of economics.[2] The book bridges positive and normative social science, offering strong policy recommendations for increased taxation of the wealthiest. It is also an exploration of historical trends.[3] In Capital, fifteen years of careful archival research culminate in a striking thesis: capitalism exacerbates inequality over time. There is no natural tendency for markets themselves, or even ordinary politics, to slow accumulation by top earners.[4]

    This review explains Piketty’s analysis and its relevance to law and social theory, drawing lessons for the re-emerging field of political economy. Piketty’s focus on long-term trends in inequality suggests that many problems traditionally explained as sector-specific (such as varied educational outcomes) are epiphenomenal with regard to increasingly unequal access to income and capital. Nor will a narrowing of purported “skills gaps” do much to improve economic security, since opportunity to earn money via labor matters far less in a world where capital is the key to enduring purchasing power. Policymakers and attorneys ignore Piketty at their peril, lest isolated projects of reform end up as little more than rearranging deck chairs amidst titanically unequal opportunities.

    Inequality, Opportunity, and the Rigged Game

    Capital weaves together description and prescription, facts and values, economics, politics, and history, with an assured and graceful touch. So clear is Piketty’s reasoning, and so compelling the enormous data apparatus he brings to bear, that few can doubt he has fundamentally altered our appreciation of the scope, duration, and intensity of inequality.[5]

    Piketty’s basic finding is that, absent extraordinary political interventions, the rate of return on capital (r) is greater than the rate of growth of the economy generally (g), which Piketty expresses via the now-famous formula r > g.[6] He finds that this relationship persists over time, and in the many countries with reliable data on wealth and income.[7] This simple inequality relationship has many troubling implications, especially in light of historical conflicts between capital and labor.
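    A back-of-the-envelope illustration (my own rounding, not a calculation from the book) shows why the gap matters. If a fortune’s returns are fully reinvested, it grows at r while national income grows at g, so its weight in the economy compounds at the ratio of the two:

    ```latex
    % Illustrative arithmetic, not Piketty's own example.
    \frac{W_t}{Y_t} \;=\; \frac{W_0 (1+r)^t}{Y_0 (1+g)^t}
                 \;=\; \frac{W_0}{Y_0}\left(\frac{1+r}{1+g}\right)^{t}
    % With r = 5\% and g = 1.5\%, roughly the magnitudes Piketty reports
    % for long stretches of history, (1.05/1.015)^{30} \approx 2.8: the
    % fortune's share of national income nearly triples in a generation,
    % which is why r > g concentrates wealth absent countervailing policy.
    ```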

    Most persons support themselves primarily by wages—that is, what they earn from their labor. As capital takes more of economic output (an implication of r > g persisting over time), less is left for labor. Thus if we are concerned about unequal incomes and living standards, we cannot simply hope for a rising tide of growth to lift the fortunes of those in the bottom quintiles of the income and wealth distribution.  As capital concentrates, its owners take an ever larger share of income—unless law intervenes and demands some form of redistribution.[8] As the chart below (by Bard economist Pavlina Tcherneva, based on Piketty’s data) shows, we have now reached the point where the US economy is not simply distributing the lion’s share of economic gains to top earners; it is actively redistributing extant income of lower decile earners upwards:

    chart of doom

    In 2011, 93% of the gains in income during the economic “recovery” went to the top 1%.  From 2009 to 2011, “income gains to the top 1% … were 121% of all income increases,” because “incomes to the bottom 99% fell by 0.4%.”[9] The trend continued through 2012.

    Fractal inequality prevails up and down the income scale.[10] The top 15,000 tax returns in the US reported an average taxable income of $26 million in 2005—at least 400 times greater than the median return.[11] Moreover, Larry Bartels’s book, Unequal Democracy, graphs these trends over decades.[12] Bartels shows that, from 1945 to 2007, the 95th percentile did much better than those at lower percentiles.[13] He then shows how those at the 99.99th percentile did spectacularly better than those at the 99.9th, 99.5th, 99th, and 95th percentiles.[14] There is some evidence that even within that top 99.99th percentile, inequality reigned. In 2005, the “Fortunate 400”—the 400 households with the highest earnings in the U.S.—made on average $213.9 million apiece, and the cutoff for entry into this group was a $100 million income—about four times the average income of $26 million prevailing in the top 15,000 returns.[15] As Danny Dorling observed in a recent presentation at the RSA, for those at the bottom of the 1%, it can feel increasingly difficult to “keep up with the Joneses,” Adelsons, and Waltons. Runaway incomes at the very top leave those slightly below the “ultra-high net worth individual” (UHNWI) cut-off ill-inclined to spread their own wealth to the 99%.

    Thus inequality was well-documented in these, and many other works, by the time Piketty published Capital—indeed, other authors often relied on the interim reports released by Piketty and his team of fellow inequality researchers over the past two decades.[16] The great contribution of Capital is to vastly expand the scope of the inquiry, over space and time. The book examines records in France going back to the 19th century, and decades of data in Germany, Japan, Great Britain, Sweden, India, China, Portugal, Spain, Argentina, Switzerland, and the United States.[17]

    The results are strikingly similar. The concentration of capital (any asset that generates income or gains in monetary value) is a natural concomitant of economic growth under capitalism—and tends to intensify if growth slows or stops.[18] Inherited fortunes become more important than those earned via labor, since the “miracle of compound interest” overwhelms any particularly hard-working person or ingenious idea. Once fortunes grow large enough, their owners can simply live off the interest and dividends they generate, without ever drawing on the principal. At the “escape velocity” enjoyed by some foundations and ultra-rich individuals, annual expenses are far less than annual income, precipitating ever-greater principal. This is Warren Buffett’s classic “snowball” of wealth—and we should not underestimate its ability to purchase the political favors that help constitute Buffettian “moats” around the businesses favored by the likes of Berkshire-Hathaway.[19] Dynasties form and entrench their power. If they can make capital pricey enough, even extraordinary innovations may primarily benefit their financiers.

    Deepening the Social Science of Political Economy

    Just as John Rawls’s Theory of Justice laid a foundation for decades of writing on social justice, Piketty’s work is so generative that one could envision whole social scientific fields revitalized by it.[20] Political economy is the most promising, a long tradition of (as Piketty puts it) studying the “ideal role of the state in the economic and social organization of a country.”[21] Integrating the long-divided fields of politics and economics, a renewal of modern political economy could unravel “wicked problems” neither states nor markets alone can address.[22]

    But the emphasis in Piketty’s definition of political economy on “a country,” versus countries, or the world, is in tension with the global solutions he recommends for the regulation of capital. The dream of neoliberal globalization was to unite the world via markets.[23] Anti-globalization activists have often advanced a rival vision of local self-determination, predicated on overlaps between political and economic boundaries. State-bound political economy could theorize those units. But the global economy is, at present, unforgiving of autarchy and unlikely to move towards it.

    Capital tends to slip the bonds of states, migrating to tax havens. In the rarefied world of the global super-rich, financial privacy is a purchasable commodity. Certainly there are always risks of discovery, or of being taken advantage of by a disreputable tax shelter broker or shady foreign bank. But for many wealthy individuals, tax havenry has been a rite of passage on the way to membership in a shadowy global elite. Piketty’s proposed global wealth tax would need international enforcement—for even the Foreign Account Tax Compliance Act (FATCA) imposed via America’s fading hegemony (and praised by Piketty) has only begun to address the problem of hidden (or runaway) wealth (and income).[24]

    It will be very difficult to track down the world’s hidden fortunes and tax them properly. Had Piketty consulted more legal sources, he might have acknowledged the problem more adequately in Capital. He recommends “automatic information exchange” among tax authorities, which is an excellent principle for improving enforcement. But actually implementing this principle could require fine-grained regulation of IT systems, deployment of whole new types of surveillance, and even uniform coding (via, say, standard legal entity identifiers, or LEIs) globally. More frankly acknowledging the difficulty of shepherding such legislation globally could have led to a more convincing (and comprehensive) examination of the shortcomings of globalized capitalism.

    In several extended interviews on Capital (with CNN Money, Econtalk, The New York Times, Huffington Post, and the New Republic, among others), Piketty pledges fealty to markets, praising their power to promote production and innovation. Never using the term “industrial policy” in his book, Piketty hopes that law may make the bounty of extant economic arrangements accessible to all, rather than changing the nature of those arrangements. But we need to begin to ask whether our very process of creating goods and services itself impedes better distribution of them.

    Unfortunately, mainstream economics itself often occludes this fundamental question. When distributive concerns arise, policymakers can either substantively intervene to reshape the benefits and burdens of commerce (a strategy economists tend to derogate as dirigisme), or may, post hoc, use taxes and transfer programs to redistribute income and wealth. For establishment economists, redistribution (happening after initial allocations by “the market”) is almost always considered more efficient than “distortion” of markets by regulation, public provision, or “predistribution.”[25]

    Tax law has historically been our primary way of arranging such redistribution, and Piketty makes it a focus of the concluding part of his book, called “Regulating Capital.” Piketty laments the current state of tax reporting and enforcement. Very wealthy individuals have developed complex webs of shell entities to hide their true wealth and earnings.[26] As one journalist observed, “Behind a New York City deed, there may be a Delaware LLC, which may be managed by a shell company in the British Virgin Islands, which may be owned by a trust in the Isle of Man, which may have a bank account in Liechtenstein managed by the private banker in Geneva. The true owner behind the structure might be known only to the banker.”[27] This is the dark side of globalization: the hidden structures that shield the unscrupulous from accountability.[28]

    The most fundamental tool of tax secrecy is separation: between persons and their money, between corporations and the persons who control them, between beneficial and nominal controllers of wealth. When money can pass between countries as easily as digital files, skilled lawyers and accountants can make it impossible for tax authorities to uncover the beneficial owners of assets (and the income streams generated by those assets).

    Piketty believes that one way to address inequality is strict enforcement of laws like America’s FATCA.[29] But the United States cannot accomplish much without pervasive global cooperation.  Thus the international challenge of inequality haunts Capital. As money concentrates in an ever smaller global “superclass” (to use David J. Rothkopf’s term), it’s easier for it to escape any ruling authority.[30] John Chung has characterized today’s extraordinary concentrations of wealth as a “death of reference” in our monetary system and its replacement with “a total relativity.”[31] He notes that “[i]n 2007, the average amount of annual compensation for the top twenty-five highest paid hedge fund managers was $892 million;” in the past few years, individual annual incomes in the group have reached two, three, or four billion dollars.  Today’s greatest hoards of wealth are digitized, as easily moved and hidden as digital files.

    We have no idea what taxes may be due from trillions of dollars in offshore wealth, or to what purposes it is directed.[32] In less-developed countries, dictators and oligarchs smuggle ill-gotten gains abroad.  Groups like Global Financial Integrity and the Tax Justice Network estimate that illicit financial flows out of poor countries (and into richer ones, often via tax havens) are ten times greater than the total sum of all development aid—nearly $1 trillion per year.  Given that the total elimination of extreme global poverty could cost about $175 billion per year for twenty years, this is not a trivial loss of funds—completely apart from what the developing world loses in the way of investment when its wealthiest residents opt to stash cash in secrecy jurisdictions.[33]

    An adviser to the Tax Justice Network once said that assessing money kept offshore is an “exercise in night vision,” like trying to measure “the economic equivalent of an astrophysical black hole.”[34] Shell corporations can hide connections between persons and their money, between corporations and the persons who control them, between beneficial and nominal owners. When enforcers in one country try to connect all these dots, there is usually another secrecy jurisdiction willing to take in the assets of the conniving. As the Tax Justice Network’s “TaxCast” exposes on an almost monthly basis, victories for tax enforcement in one developed country tend to be counterbalanced by a slide away from transparency elsewhere.

    Thus when Piketty recommends that “the only way to obtain tangible results is to impose automatic sanctions not only on banks but also on countries that refuse to require their financial institutions” to report on wealth and income to proper taxing authorities, one has to wonder: what super-institution will impose the penalties? Is this to be an ancillary function of the WTO?[35] Similarly, equating the imposition of a tax on capital with “the stroke of a pen” (568) underestimates the complexity of implementing such a tax, and the predictable forms of resistance that the wealth defense industry will engage in.[36] All manner of societal and cultural, public and private, institutions will need to entrench such a tax if it is to be a stable corrective to the juggernaut of r > g.[37]

    Given how much else the book accomplishes, this demand may strike some as a cavil—something better accomplished by Piketty’s next work, or by an altogether different set of allied social scientists. But if Capital itself is supposed to model (rather than merely call for) a new discipline of political economy, it needs to provide more detail about the path from here to its prescriptions. Philosophers like Thomas Pogge and Leif Wenar, and lawyers like Terry Fisher and Talha Syed, have been quite creative in thinking through the actual institutional arrangements that could lead to better distribution of health care, health research, and revenues from natural resources.[38] They are not cited in Capital, but their work could have enriched its institutional analysis greatly.

    An emerging approach to financial affairs, known as the Legal Theory of Finance (LTF), also offers illumination here, and should guide future policy interventions. Led by Columbia Law Professor Katharina Pistor, an interdisciplinary research team of social scientists and attorneys has documented the ways in which law is constitutive of so-called financial markets.[39] Revitalizing the tradition of legal realism, Pistor has demonstrated the critical role of law in generating modern finance. Though law to some extent shapes all markets, in finance its role is most pronounced. The “products” traded are very little more than legal recognitions of obligations to buy or sell, own or owe. Their value can change utterly based on tiny changes to the bankruptcy code, SEC regulations, or myriad other laws and regulations.

    The legal theory of finance changes the dialogue about regulation of wealth.  The debate can now move beyond stale dichotomies like “state vs. market,” or even “law vs. technology.” While deregulationists mock the ability of regulators to “keep up with” the computational capacities of global banking networks, it is the regulators who made the rules that made the instantaneous, hidden transfer of financial assets so valuable in the first place. Such rules are not set in stone.

    The legal theory of finance also enables a more substantive dialogue about the central role of law in political economy. Not just tax rules, but also patent, trade, and finance regulation need to be reformed to make the wealthy accountable for productively deploying the wealth they have either earned or taken. Legal scholars have a crucial role to play in this debate—not merely as technocrats adjusting tax rules, but as advisors on a broad range of structural reforms that could ensure the economy’s rewards better reflected the relative contributions of labor, capital, and the environment.[40] Lawyers had a much more prominent role in the Federal Reserve when it was more responsive to workers’ concerns.[41]

    Imagined Critics as Unacknowledged Legislators

    A book is often influenced by its author’s imagined critics. Piketty, decorous in his prose style and public appearances, strains to fit his explosive results into the narrow range of analytical tools and policy proposals that august economists won’t deem “off the wall.”[42] Rather than deeply considering the legal and institutional challenges to global tax coordination, Piketty focuses on explaining in great detail the strengths and limitations of the data he and a team of researchers have been collecting for over a decade. But a renewed social science of political economy depends on economists’ ability to expand their imagined audience of critics, to those employing qualitative methodologies, to attorneys and policy experts working inside and outside the academy, and to activists and journalists with direct knowledge of the phenomena addressed.  Unfortunately, time that could have been valuably directed to that endeavor—either in writing Capital, or constructively shaping the extraordinary publicity the book received—has instead been diverted to shoring up the book’s reputation as rigorous economics, against skeptics who fault its use of data.

    To his credit, Piketty has won these fights on the data mavens’ own terms. The book’s most notable critic, Chris Giles at the Financial Times, tried to undermine Capital‘s conclusions by trumping up purported ambiguities in wealth measurement. His critique was rapidly dispatched by many, including Piketty himself.[43] Indeed, as Neil Irwin observed, “Giles’s results point to a world at odds not just with Mr. Piketty’s data, but also with that by other scholars and with the intuition of anyone who has seen what townhouses in the Mayfair neighborhood of London are selling for these days.”[44]

    One wonders if Giles reads his own paper. On any given day one might see extreme inequality flipping from one page to the next. For example, in a special report on “the fragile middle,” Javier Blas noted that no more than 12% of Africans earned over $10 per day in 2010—a figure that has improved little, if at all, since 1980.[45] Meanwhile, in the House & Home section on the same day, Jane Owen lovingly described the grounds of the estate of “His Grace Henry Fitzroy, the 12th Duke of Grafton.” The grounds cost £40,000 to £50,000 a year to maintain, and were never “expected to do anything other than provide pleasure.”[46] England’s revanchist aristocracy makes regular appearances in the Financial Times’s “How to Spend It” section as well, and no wonder: as Oxfam reported in March 2014, Britain’s five richest families have more wealth than its twelve million poorest people.[47]

    Force and Capital

    The persistence of such inequalities is as much a matter of law (and the force behind it to, say, disperse protests and selectively enforce tax regulations), as it is a natural outgrowth of the economic forces driving r and g. To his credit, Piketty does highlight some of the more grotesque deployments of force on behalf of capital. He begins Part I (“Income and Capital”) and ends Part IV (“Regulating Capital”) by evoking the tragic strike at the Lonmin Mine in South Africa in August 2012. In that confrontation, “thirty-four strikers were shot dead” for demanding pay of about $1,400 a month (they were making about $700).[48] Piketty deploys the story to dramatize conflict over the share of income going to capital versus labor. But it also illustrates dynamics of corruption. Margaret Kimberley of Black Agenda Report claims that the union involved was coopted thanks to the wealth of the man who once ran it.[49] The same dynamics shine through documentaries like Big Men (on Ghana), or the many nonfiction works on oil exploitation in Africa.[50]

    Piketty observes that “foreign companies and stockholders are at least as guilty as unscrupulous African elites” in promoting the “pillage” of the continent.[51] Consider the state of Equatorial Guinea, which struck oil in 1995. By 2006, Equatoguineans had the third highest per capita income in the world, higher than many prosperous European countries.[52] Yet the typical citizen remains very poor.[53] In the middle of the oil boom, an international observer noted that “I was unable to see any improvements in the living standards of ordinary people. In 2005, nearly half of all children under five were malnourished,” and “[e]ven major cities lack[ed] clean water and basic sanitation.”[54] The government has not demonstrated that things have improved much since then, despite ample opportunity to do so. Poorly paid soldiers routinely shake people down for bribes, and the country’s president, Teodoro Obiang, has paid Moroccan mercenaries for his own protection. A 2009 book noted that tensions in the country had reached a boiling point, as the “local Bubi people of Malabo” felt “invaded” by oil interests, other regions were “abandoned,” and self-determination movements decried environmental and human rights abuses.[55]

    So who did benefit from Equatorial Guinea’s oil boom?  Multinational oil companies, to be sure, though we may never know exactly how much profit the country generated for them—their accounting was (and remains) opaque.  The Riggs Bank in Washington, D.C. gladly handled accounts of President Obiang, as he became very wealthy.  Though his salary was reported to be $60,000 a year, he had a net worth of roughly $600 million by 2011.[56] (Consider, too, that such a fortune would not even register on recent lists of the world’s 1,500 or so billionaires, and is barely more than 1/80th the wealth of a single Koch brother.) Most of the oil companies’ payments to him remain shrouded in secrecy, but a few came to light in the wake of US investigations.  For example, a US Senate report blasted him for personally taking $96 million of his nation’s $130 million in oil revenue in 1998, when a majority of his subjects were malnourished.[57]

    Obiang’s sordid record has provided a rare glimpse into some of the darkest corners of the global economy.  But his story is only the tip of an iceberg of a much vaster shadow economy of illicit financial flows, secrecy jurisdictions, and tax evasion. Obiang could afford to be sloppy: as the head of a sovereign state whose oil reserves gave it some geopolitical significance, he knew that powerful patrons could shield him from the fate of an ordinary looter.  Other members of the hectomillionaire class (and plenty of billionaires) take greater precautions.  They diversify their holdings into dozens or hundreds of entities, avoiding public scrutiny with shell companies and pliant private bankers.  A hidden hoard of tens of trillions of dollars has accumulated, and likely throws off hundreds of billions of dollars yearly in untaxed interest, dividends, and other returns.[58] This drives a wedge between a closed-circuit economy of extreme wealth and the ordinary patterns of exchange of the world’s less fortunate.[59]

    The Chinese writer and Nobel Peace Prize winner Liu Xiaobo once observed that corruption in Beijing had led to an officialization of the criminal and the criminalization of the official.[60] Persisting even in a world of brutal want and austerity-induced suffering, tax havenry epitomizes that sinister merger, and Piketty might have sharpened his critique further by focusing on this merger of politics and economics, of private gain and public governance. Authorities promote activities that would have once been proscribed; those who stand in the way of such “progress” might be jailed (or worse).  In Obiang’s Equatorial Guinea, we see similar dynamics, as the country’s leader extracts wealth at a volume that could only be dreamed of by a band of thieves.

    Obiang’s curiously double position, as Equatorial Guinea’s chief law maker and law breaker, reflects a deep reality of the global shadow economy.  And just as “shadow banks” are rivalling more regulated banks in terms of size and influence, shadow economy tactics are starting to overtake old standards. Tax avoidance techniques that were once condemned are becoming increasingly acceptable.  Campaigners like UK Uncut and the Tax Justice Network try to shame corporations for opportunistically allocating profits to low-tax jurisdictions.[61] But CEOs still brag about their corporate tax unit as a profit center.

    When some of Republican presidential candidate Mitt Romney’s recherché tax strategies were revealed in 2012, Barack Obama needled him repeatedly.  The charges scarcely stuck, as Romney’s core constituencies aimed to emulate rather than punish their standard-bearer.[62] Obama then appointed a Treasury Secretary (Jack Lew), who had himself utilized a Cayman Islands account.  Lew was the second Obama Treasury secretary to suffer tax troubles: Tim Geithner, his predecessor, was also accused of “forgetting” to pay certain taxes in a self-serving way.  And Obama’s billionaire Commerce Secretary Penny Pritzker was no stranger to complex tax avoidance strategies.[63]

    Tax attorneys may characterize Pritzker, Lew, Geithner, and Romney as different in kind from Obiang.  But any such distinctions they make will likely need to be moral, rather than legal, in nature.  Sure, these American elites operated within American law—but Obiang is the law of Equatorial Guinea, and could easily arrange for an administrative agency to bless his past actions (even developed legal systems permit retroactive rulemaking) or ensure the legality of all future actions (via safe harbors).  The mere fact that a tax avoidance scheme is “legal” should not count for much morally—particularly as those who gain from prior US tax tweaks use their fortunes to support the political candidacies of those who would further push the law in their favor.

    Shadowy financial flows exemplify the porous boundary between state and market.  The book Tax Havens: How Globalization Really Works argues that the line between savvy tax avoidance and illegal tax evasion (or strategic money transfers and forbidden money laundering) is blurring.[64] Between our stereotypical mental images of dishonest tycoons sipping margaritas under the palm trees of a Caribbean tax haven, and a state governor luring a firm by granting it a temporary tax abatement, lie hundreds of subtler scenarios.  Dingy rows of Delaware, Nevada, and Wyoming file cabinets can often accomplish the same purpose as incorporating in Belize or Panama: hiding the real beneficiaries of economic activity.[65] And as one wag put it to journalist Nicholas Shaxson, “the most important tax haven in the world is an island”—”Manhattan.”[66]

In a world where “tax competition” is a key to neoliberal globalization, it is hard to see how a global wealth tax (even if set at the very low levels Piketty proposes) supports (rather than directly attacks) the existing market order. Political elites are racing to reduce tax liability to curry favor with the wealthy companies and individuals they hope to lure, serve, and bill.  The ultimate logic of that competition is a world made over in the image of Obiang’s Equatorial Guinea: crumbling infrastructure and impoverished citizenries coexisting with extreme luxury for a global extractive elite and its local enablers.  Books like Third World America, Oligarchy, and Captive Audience have already started chronicling the failure of the US tax system to fund roads, bridges, universal broadband internet connectivity, and disaster preparation.[67] As tax-avoiding elites parlay their gains into lobbying for rules that make tax avoidance even easier, self-reinforcing inequality seems all but inevitable.  Wealthy interests can simply fund campaigns to reduce their taxes, or to reduce the risk of enforcement to a nullity. As Ben Kunkel pointedly asks, “How are the executive committees of the ruling class in countries across the world to act in concert to impose Piketty’s tax on just this class?”[68]

US history is instructive here. Congress passed a tax on the top 0.1% of earners in 1894, only to see the Supreme Court strike the tax down in a five-to-four decision.  After the 16th Amendment effectively repealed that Supreme Court decision, Congress steadily increased the tax on high-income households.  From 1915 to 1918, the highest rate rose from 7% to 77%, spread over fifty-six tax brackets.  When high taxes on the wealthy were maintained after the war, tax evasion flourished.  At this point, as Jeffrey Winters writes, the government had to choose whether to “beef up law enforcement against oligarchs … , or abandon the effort and instead squeeze the same resources from citizens with far less material clout to fight back.”[69] Enforcement ebbed and flowed. But since then, what began as a levy targeting the very wealthy has grown into “a mass tax that burdens oligarchs at the same effective rate as their office staff and landscapers.”[70]

The undertaxation of America’s wealthy has helped them capture key political processes, and in turn demand even less taxation.  The dynamic of circularity teaches us that there is no stable, static equilibrium to be achieved between regulators and regulated. The government is either pushing industry to realize some public values in its activities (say, by investing in sustainable growth), or industry is pushing its regulators to promote its own interests.[71] Piketty may worry that, if he too readily accepts this core tenet of politico-economic interdependence, he will be dismissed as a statist socialist. But until political economists accept it, their work cannot do justice to the voices of those prematurely dead as a result of the relentless pursuit of profit—ranging from the Lonmin miners, to those crushed at Rana Plaza, to the spike of suicides provoked by European austerity and Indian microcredit gone wrong, to the thousands of Americans who will die early because they are stuck in states that refuse to expand Medicaid.[72] Contemporary political economy can only mature if capitalism’s ghosts constrain our theory and practice as pervasively as communism’s specter does.

    Renewing Political Economy

Piketty has been compared to Alexis de Tocqueville: a French outsider capable of discerning truths about the United States that its own sages were too close to observe.  The function social equality played in Tocqueville’s analysis is taken up by economic inequality in Piketty’s: a set of self-reinforcing trends fundamentally reshaping the social order.[73] I’ve written tens of thousands of words on this inequality, but words themselves may be outmatched by the numbers and force behind these trends.[74] As film director Alex Rivera puts it, in an interview with The New Inquiry:

    I don’t think we even have the vocabulary to talk about what we lose as contemporary virtualized capitalism produces these new disembodied labor relations. … The broad, hegemonic clarity is the knowledge that a capitalist enterprise has the right to seek out the cheapest wage and the right to configure itself globally to find it. … The next stage in this process…is for capital to configure itself to enable every single job to be put on the global market through the network.[75]

Amazon’s “Mechanical Turk” has begun that process, supplying “turkers” to perform tasks at a penny each.[76] Uber, Lyft, TaskRabbit, and various “gig economy” imitators ensure that micro-labor is on the rise, leaving micro-wages in its wake.[77] Workers are shifting from paid vacation to stay-cation to “nano-cation” to “paid time off” to hoarding hours to cover the dry spells when work disappears.[78] These developments are all predictable consequences of a globalization premised on maximizing finance rents, top manager compensation, and returns to shareholders.

Inequality is becoming more outrageous than even caricaturists once dared to imagine. The richest woman in the world (Gina Rinehart) has advised fellow Australians to temper their wage demands, given that they are competing against Africans willing to work for two dollars a day.[79] Or consider the construct of Dogland, from Korzeniewicz and Moran’s 2009 book, Unveiling Inequality:

    The magnitude of global disparities can be illustrated by considering the life of dogs in the United States. According to a recent estimate … in 2007-2008 the average yearly expenses associated with owning a dog were $1425 … For sake of argument, let us pretend that these dogs in the US constitute their own nation, Dogland, with their average maintenance costs representing the average income of this nation of dogs.

    By such a standard, their income would place Dogland squarely as a middle-income nation, above countries such as Paraguay and Egypt. In fact, the income of Dogland would place its canine inhabitants above more than 40% of the world population. … And if we were to focus exclusively on health care expenditures, the gap becomes monumental: the average yearly expenditures in Dogland would be higher than health care expenditures in countries that account for over 80% of the world population.[80]

Given disparities like these, wages cannot possibly reflect just desert: who can really argue that a basset hound, however adorable, has “earned” more than a Bangladeshi laborer? Cambridge economist Ha-Joon Chang asks us to compare the job and the pay of transport workers in Stockholm and Calcutta. “Skill” has little to do with it. The former, drivers on clean and well-kept roads, may easily be paid fifty times more than the latter, who may well be engaged in backbreaking, and very skilled, labor to negotiate passengers among teeming pedestrians, motorbikes, trucks, and cars.[81]

Once “skill-biased technological change” is taken off the table, the classic economic rationale for such differentials focuses on the incentives necessary to induce labor. In Sweden, for example, the government assures that a person is unlikely to starve, no matter how many hours a week he or she works. By contrast, in India, 42% of children under five years old are malnourished.[82] So while it takes $15 or $20 an hour just to get the Swedish worker to show up, the typical Indian can be motivated to labor for much less. But of course, at this point the market rationale for the wage differential breaks down entirely, because the background expectations about what one earns absent work are themselves a product of state-guaranteed patterns of social insurance. The critical questions are: how did the Swedes generate adequate goods and services for their population, and the social commitment to redistribution necessary to assure that unemployment is not a death sentence? And how can such social arrangements create basic entitlements to food, housing, health care, and education around the world?

    Piketty’s proposals for regulating capital would be more compelling if they attempted to answer questions like those, rather than focusing on the dry, technocratic aim of tax-driven wealth redistribution. Moreover, even within the realm of tax law and policy, Piketty will need to grapple with several enforcement challenges if a global wealth tax is to succeed. But to its great credit, Capital adopts a methodology capacious enough to welcome the contributions of legal academics and a broad range of social scientists to the study (and remediation) of inequality.[83] It is now up to us to accept the invitation, realizing that if we refuse, accelerating inequality will undermine the relevance—and perhaps even the very existence—of independent legal authority.


    _____

    Frank Pasquale (@FrankPasquale) is a Professor of Law at the University of Maryland Carey School of Law. His forthcoming book, The Black Box Society: The Secret Algorithms that Control Money and Information (Harvard University Press, 2015), develops a social theory of reputation, search, and finance.  He blogs regularly at Concurring Opinions. He has received a commission from Triple Canopy to write and present on the political economy of automation. He is a member of the Council for Big Data, Ethics, and Society, and an Affiliate Fellow of Yale Law School’s Information Society Project.

    _____

    [1] Dennis Abrams, Piketty’s “Capital”: A Monster Hit for Harvard U Press, Publishing Perspectives, at http://publishingperspectives.com/2014/04/pilkettys-capital-a-monster-hit-for-harvard-u-press/ (Apr. 29, 2014).

[2] Intriguingly, one leading economist who has done serious work on narrative in the field, Deirdre McCloskey, offers a radically different (and far more positive) perspective on the nature of economic growth under capitalism. Evan Thomas, Has Thomas Piketty Met His Match?, http://www.spectator.co.uk/features/9211721/unequal-battle/. But this is to be expected as richer methodologies inform economic analysis. Sometimes the best interpretive social science leads not to consensus, but to ever sharper disagreement about the nature of the phenomena it describes and evaluates. Rather than trying to bury normative differences in jargon or flatten them into commensurable cost-benefit calculations, it surfaces them.

    [3] As Thomas Jessen Adams argues, “to understand how inequality has been overcome in the past, we must understand it historically.” Adams, The Theater of Inequality, at http://nonsite.org/feature/the-theater-of-inequality. Adams critiques Piketty for failing to engage historical evidence properly. In this review, I celebrate the book’s bricolage of methodological approaches as the type of problem-driven research promoted by Ian Shapiro.

    [4] Thomas Piketty, Capital in the Twenty-First Century 17 (Arthur Goldhammer trans., 2014).

    [5] Doug Henwood, The Top of the World, Book Forum, Apr. 2014,  http://www.bookforum.com/inprint/021_01/12987; Suresh Naidu, Capital Eats the World, Jacobin (May 30, 2014), https://www.jacobinmag.com/2014/05/capital-eats-the-world/.

    [6] Thomas Piketty, Capital in the Twenty-First Century 25 (Arthur Goldhammer trans., 2014).

    [7] Id.

[8] As Piketty observes, war and revolution can also serve this redistributive function. Piketty, supra note 4, at 20. Since I (and the vast majority of attorneys) do not consider violence a legitimate tool of social change, I do not include these options in my discussion of Piketty’s book.

    [9] Frank Pasquale, Access to Medicine in an Era of Fractal Inequality, 19 Annals of Health Law 269 (2010).

    [10] Charles R. Morris, The Two Trillion Dollar Meltdown: Easy Money, High Rollers, and the Great Credit Crash 139-40 (2009); see also Edward N. Wolff, Top Heavy: The Increasing Inequality of Wealth in America and What Can Be Done About It 36 (updated ed. 2002).

    [11] Yves Smith, Yes, Virginia, the Rich Continue to Get Richer: The Top 1% Get 121% of Income Gains Since 2009, Naked Capitalism (Feb. 13, 2013), http://www.nakedcapitalism.com/2013/02/yes-virginia-the-rich-continue-to-get-richer-the-1-got-121-of-income-gains-since-2009.html#XxsV2mERu5CyQaGE.99.

[12] Larry M. Bartels, Unequal Democracy: The Political Economy of the New Gilded Age 8, 10 (2010).

    [13] Id. at 8.

    [14] Id. at 10.

[15] Tom Herman, There’s Rich, and There’s the ‘Fortunate 400’, Wall St. J., Mar. 5, 2008, http://online.wsj.com/article/SB120468366051012473.html.

    [16] See Thomas Piketty & Emmanuel Saez, The Evolution of Top Incomes: A Historical and International Perspective, 96 Am. Econ. Rev. 200, 204 (2006). 

    [17] Piketty, supra note 4, at 17. Note that, given variations in the data, Piketty is careful to cabin the “geographical and historical boundaries of this study” (27), and must “focus primarily on the wealthy countries and proceed by extrapolation to poor and emerging countries” (28).

    [18] Id. at 46, 571 (“In this book, capital is defined as the sum total of nonhuman assets that can be owned and exchanged on some market. Capital includes all forms of real property (including residential real estate) as well as financial and professional capital (plants, infrastructure, machinery, patents, and so on) used by firms and government agencies.”).

    [19] Alice Schroeder, The Snowball: Warren Buffett and the Business of Life (Bantam-Dell, 2008); Adam Levine-Weinberg, Warren Buffett Loves a Good Moat, at http://www.fool.com/investing/general/2014/06/30/warren-buffett-loves-a-good-moat.aspx.

    [20] John Rawls, A Theory of Justice (1971).

    [21] Piketty, supra note 4, at 540.

    [22] Atul Gawande, Something Wicked This Way Comes, New Yorker (June 28, 2012), http://www.newyorker.com/news/daily-comment/something-wicked-this-way-comes.

    [23] Philip Mirowski, Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown (2013).

    [24] The Foreign Account Tax Compliance Act (FATCA) was passed in 2010 as part of the Hiring Incentives to Restore Employment Act, Pub. L. No. 111-147, 124 Stat. 71 (2010), codified in sections 1471 to 1474 of the Internal Revenue Code, 26 U.S.C. §§ 1471-1474.  The law is effective as of 2014. It requires foreign financial institutions (FFIs) to report financial information about accounts held by United States persons, or pay a withholding tax. Id.

    [25] Christopher William Sanchirico, Deconstructing the New Efficiency Rationale, 86 Cornell L. Rev. 1003, 1005 (2001).

    [26] Nicholas Shaxson, Treasure Islands: Uncovering the Damage of Offshore Banking and Tax Havens (2012); Jeanna Smialek, The 1% May be Richer than You Think, Bloomberg, Aug. 7, 2014, at http://www.bloomberg.com/news/2014-08-06/the-1-may-be-richer-than-you-think-research-shows.html (collecting economics research).

    [27] Andrew Rice, Stash Pad: The New York real-estate market is now the premier destination for wealthy foreigners with rubles, yuan, and dollars to hide, N.Y. Mag., June 29, 2014, at http://nymag.com/news/features/foreigners-hiding-money-new-york-real-estate-2014-6/#.

    [28] Ronen Palan, Richard Murphy, and Christian Chavagneux, Tax Havens: How Globalization Really Works 272 (2009) (“[m]ore than simple conduits for tax avoidance and evasion, tax havens actually belong to the broad world of finance, to the business of managing the monetary resources of individuals, organizations, and countries.  They have become among the most powerful instruments of globalization, one of the principal causes of global financial instability, and one of the large political issues of our times.”).

    [29] 26 U.S.C. § 1471-1474 (2012); Itai Grinberg, Beyond FATCA: An Evolutionary Moment for the International Tax System (Georgetown Law Faculty, Working Paper No. 160, 2012), available at http://scholarship.law.georgetown.edu/cgi/viewcontent.cgi?article=1162&context=fwps_papers.

    [30] David Rothkopf, Superclass: The Global Power Elite and the World They Are Making (2009).

[31] John Chung, Money as Simulacrum: The Legal Nature and Reality of Money, 5 Hastings Bus. L.J. 109, 149 (2009).

    [32] James S. Henry, Tax Just. Network, The Price Of Offshore Revisited: New Estimates For “Missing” Global Private Wealth, Income, Inequality, And Lost Taxes 3 (2012), available at http://www.taxjustice.net/cms/upload/pdf/Price_of_Offshore_Revisited_120722.pdf; Scott Highman et al., Piercing the Secrecy of Offshore Tax Havens, Wash. Post (Apr. 6, 2013), http://www.washingtonpost.com/investigations/piercing-the-secrecy-of-offshore-tax-havens/2013/04/06/1551806c-7d50-11e2-a044-676856536b40_story.html.

    [33] Dev Kar & Devon Cartwright‐Smith, Center for Int’l Pol’y, Illicit Financial Flows from Developing Countries: 2002-2006 (2012); Jeffrey Sachs, The End of Poverty: Economic Possibilities for Our Time (2006); Ben Harack, How Much Would it Cost to End Extreme Poverty in the World?, Vision Earth, (Aug. 26, 2011), http://www.visionofearth.org/economics/ending-poverty/how-much-would-it-cost-to-end-extreme-poverty-in-the-world/.

[34] Henry, supra note 32.

    [35] Piketty, supra note 4, at 523.

    [36] Jeffrey Winters coined the term “wealth defense industry” in his book, Oligarchy. See Frank Pasquale, Understanding Wealth Defense: Direct Action from the 0.1%, at http://www.concurringopinions.com/archives/2011/11/understanding-wealth-defense-direct-action-from-the-0-1.html.

    [37] For a similar argument, focusing on the historical specificity of the US parallel to the trente glorieuses, see  Thomas Jessen Adams, The Theater of Inequality, http://nonsite.org/feature/the-theater-of-inequality.

[38] Thomas Pogge, The Health Impact Fund: Boosting Pharmaceutical Innovation Without Obstructing Free Access, 18 Cambridge Q. Healthcare Ethics 78 (2008) (proposing global R&D fund); William W. Fisher III, Promises to Keep: Technology, Law, and the Future of Entertainment (2007); William W. Fisher & Talha Syed, Global Justice in Healthcare: Developing Drugs for the Developing World, 40 U.C. Davis L. Rev. 581 (2006).

    [39] Katharina Pistor, A Legal Theory of Finance, 41 J. Comp. Econ. 315 (2013); Law in Finance, 41 J. Comp. Econ (2013). Several other articles in the same journal issue discuss the implications of LTF for derivatives, foreign currency exchange, and central banking.

    [40] University of Chicago Law Professor Eric A. Posner and economist Glen Weyl recognize this in their review of Piketty, arguing that “the fundamental problem facing American capitalism is not the high rate of return on capital relative to economic growth that Piketty highlights, but the radical deviation from the just rewards of the marketplace that have crept into our society and increasingly drives talented students out of innovation and into finance.”  Posner & Weyl, Thomas Piketty Is Wrong: America Will Never Look Like a Jane Austen Novel, The New Republic, July 31, 2014, at http://www.newrepublic.com/article/118925/pikettys-capital-theory-misunderstands-inherited-wealth-today. See also Timothy A. Canova, The Federal Reserve We Need, 21 American Prospect 9 (October 2010), at http://prospect.org/article/federal-reserve-we-need.

    [41] Timothy Canova, The Federal Reserve We Need: It’s the Fed We Once Had, at http://prospect.org/article/federal-reserve-we-need; Justin Fox, How Economics PhDs Took Over the Federal Reserve, at http://blogs.hbr.org/2014/02/how-economics-phds-took-over-the-federal-reserve/.

    [42] Jack M. Balkin, From Off the Wall to On the Wall: How the Mandate Challenge Went Mainstream, Atlantic (June 4, 2012, 2:55 PM), http://www.theatlantic.com/national/archive/2012/06/from-off-the-wall-to-on-the-wall-how-the-mandate-challenge-went-mainstream/258040/ (Jack Balkin has described how certain arguments go from being ‘off the wall‘ to respectable in constitutional thought; economists have yet to take up that deflationary nomenclature for the evolution of ideas in their own field’s intellectual history. That helps explain the rising power of economists vis a vis lawyers, since the latter field’s honesty about the vagaries of its development diminishes its authority as a ‘science.’).  For more on the political consequences of the philosophy of social science, see Jamie Cohen-Cole, The Open Mind: Cold War Politics and the Sciences of Human Nature (2014), and Joel Isaac, Working Knowledge: Making the Human Sciences from Parsons to Kuhn (2012).

    [43] Chris Giles, Piketty Findings Undercut by Errors, Fin. Times (May 23, 2014, 7:00 PM), http://www.ft.com/intl/cms/s/2/e1f343ca-e281-11e3-89fd-00144feabdc0.html#axzz399nSmEKj; Thomas Piketty, Addendum: Response to FT, Thomas Piketty (May 28, 2014), http://piketty.pse.ens.fr/files/capital21c/en/Piketty2014TechnicalAppendixResponsetoFT.pdf; Felix Salmon, The Piketty Pessimist, Reuters (April 25, 2014), http://blogs.reuters.com/felix-salmon/2014/04/25/the-piketty-pessimist/.

    [44] Neil Irwin, Everything You Need to know About Thomas Piketty vs. The Financial Times, N.Y. Times (May 30, 2014), http://www.nytimes.com/2014/05/31/upshot/everything-you-need-to-know-about-thomas-piketty-vs-the-financial-times.html

    [45] Javier Blas, The Fragile Middle: Rising Inequality in Africa Weighs on New Consumers, Fin. Times (Apr. 18, 2014), http://www.ft.com/intl/cms/s/0/49812cde-c566-11e3-89a9-00144feabdc0.html#axzz399nSmEKj.

    [46] Jane Owen, Duke of Grafton Uses R&B to Restore Euston Hall’s Pleasure Grounds, Fin. Times (Apr. 18, 2014, 2:03 PM), http://www.ft.com/intl/cms/s/2/b49f6dd8-c3bc-11e3-870b-00144feabdc0.html#slide0.

    [47] Larry Elliott, Britain’s Five Richest Families Worth More Than Poorest 20%, Guardian, Mar. 16, 2014, http://www.theguardian.com/business/2014/mar/17/oxfam-report-scale-britain-growing-financial-inequality#101.

    [48] Piketty, supra note 4, at 570.

    [49] Margaret Kimberley, Freedom Rider: Miners Shot Down, Black Agenda Report (June 4, 2014), http://www.blackagendareport.com/content/freedom-rider-miners-shot-down.

    [50] Peter Maass, Crude World: The Violent Twilight of Oil (2009); Nicholas Shaxson, Poisoned Wells: The Dirty Politics of African Oil (2008).

    [51] Piketty, supra note 4, at 539.

    [52] Jad Mouawad, Oil Corruption in Equatorial Guinea, N.Y. Times Green Blog (July 9, 2009, 7:01 AM), http://green.blogs.nytimes.com/2009/07/09/oil-corruption-in-equatorial-guinea; Tina Aridas & Valentina Pasquali, Countries with the Highest GDP Average Growth, 2003–2013, Global Fin. (Mar. 7, 2013), http://www.gfmag.com/component/content/article/119-economic-data/12368-countries-highest-gdp-growth.html#axzz2W8zLMznX; CIA, The World Factbook 184 (2007).

    [53] Interview with President Teodoro Obiang of Equatorial Guinea, CNN’s Amanpour (CNN broadcast Oct. 5, 2012), transcript available at http://edition.cnn.com/TRANSCRIPTS/1210/05/ampr.01.html.

[54] Peter Maass, A Touch of Crude, Mother Jones, Jan. 2005, http://www.motherjones.com/politics/2005/01/obiang-equatorial-guinea-oil-riggs.

    [55] Geraud Magrin & Geert van Vliet, The Use of Oil Revenues in Africa, in Governance of Oil in Africa: Unfinished Business 114 (Jacques Lesourne ed., 2009).

[56] Interview with President Teodoro Obiang of Equatorial Guinea, supra note 53.

    [57] S. Minority Staff of Permanent Subcomm. on Investigations, Comm. on Gov’t Affairs, 108th Cong., Rep. on Money Laundering and Foreign Corruption: Enforcement and Effectiveness of the Patriot Act 39-40 (Subcomm. Print 2004).

[58] Henry, supra note 32, at 6, 19-20.

    [59] Frank Pasquale, Closed Circuit Economics, New City Reader, Dec. 3, 2010, at 3, at http://neildonnelly.net/ncr/08_Business/NCR_Business_%5BF%5D_web.pdf.

    [60] Liu Xiaobo, No Enemies, No Hatred 102 (Perry Link, trans., 2012).

    [61] Jesse Drucker, Occupy Wall Street Stylists Pursue U.K. Tax Dodgers, Bloomberg News (June 11, 2013), http://www.businessweek.com/news/2013-06-11/occupy-wall-street-stylists-pursue-u-dot-k-dot-tax-dodgers.

    [62] Daniel J. Mitchell, Tax Havens Should Be Emulated, Not Prosecuted, CATO Inst. (Apr. 13, 2009, 12:36 PM), http://www.cato.org/blog/tax-havens-should-be-emulated-not-prosecuted.

    [63] Janet Novack, Pritzker Family Baggage: Tax Saving Offshore Trusts, Forbes (May 2, 2013, 8:20 PM), http://www.forbes.com/sites/janetnovack/2013/05/02/pritzker-family-baggage-tax-saving-offshore-trusts/.

    [64] Ronen Palan et al., Tax Havens: How Globalization Really Works (2013); see also Carolyn Nordstrom, Global Outlaws: Crime, Money, and Power in the Contemporary World (2007), and Loretta Napoleoni, Rogue Economics (2009).

[65] Palan et al., supra note 64.

[66] Shaxson, supra note 26, at 24.

    [67] Arianna Huffington, Third World America: How Our Politicians Are Abandoning the Middle Class and Betraying the American Dream (2011); Jeffrey A. Winters, Oligarchy (2011); Susan B. Crawford, Captive Audience: The Telecom Industry and Monopoly Power in the New Gilded Age (2014).

    [68] Benjamin Kunkel, Paupers and Richlings, 36 London Rev. Books 17 (2014) (reviewing Thomas Piketty, Capital in the Twenty-First Century).

    [69] Jeffrey A. Winters, Oligarchy and Democracy, Am. Interest, Sept. 28, 2011, http://www.the-american-interest.com/articles/2011/9/28/oligarchy-and-democracy/.

    [70] Id.

    [71]  James K. Galbraith, The Predator State: How Conservatives Abandoned the Free Market and Why Liberals Should, Too (2009).

    [72] Alex Duval Smith, South Africa Lonmin Mine Massacre Puts Nationalism Back on Agenda, Guardian (Aug. 29, 2012), http://www.theguardian.com/global-development/poverty-matters/2012/aug/29/south-africa-lonmin-mine-massacre-nationalisation; Charlie Campbell, Dying for Some New Clothes: Bangladesh’s Rana Plaza Tragedy, Time (Apr. 26, 2013), http://world.time.com/2013/04/26/dying-for-some-new-clothes-the-tragedy-of-rana-plaza/; David Stuckler, The Body Economic: Why Austerity Kills xiv (2013); Soutik Biswas, India’s Micro-Finance Suicide Epidemic, BBC (Dec. 16, 2010), http://www.bbc.com/news/world-south-asia-11997571; Michael P. O’Donnell, Further Erosion of Our Moral Compass: Failure to Expand Medicaid to Low-Income People in All States, 28 Am. J. Health Promotion iv (2013); Sam Dickman et al., Opting Out of Medicaid Expansion; The Health and Financial Impacts, Health Affairs Blog (Jan. 30, 2014), http://healthaffairs.org/blog/2014/01/30/opting-out-of-medicaid-expansion-the-health-and-financial-impacts/.

    [73] It would be instructive to compare political theorists’ varying models of Tocqueville’s predictive efforts, with Piketty’s sweeping r > g.  See, e.g., Roger Boesche, Why Could Tocqueville Predict So Well?, 11 Political Theory 79 (1983) (“Democracy in America endeavors to demonstrate how language, literature, the relations of masters and servants, the status of women, the family,  property, politics, and so forth, must change and align themselves in a new, symbiotic configuration as a result of the historical thrust toward equality”); Jon Elster, Alexis de Tocqueville:  the First Social Scientist (2012).

    [74] See, e.g., Frank Pasquale, Access to Medicine in an Era of Fractal Inequality, 19 Annals of Health Law 269 (2010); Frank Pasquale, The Cost of Conscience: Quantifying our Charitable Burden in an Era of Globalization, at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=584741 (2004); Frank Pasquale, Diagnosing Finance’s Failures: From Economic Idealism to Lawyerly Realism, 6 India L. J. 2 (2012).

    [75] Malcolm Harris interview of Alex Rivera, Border Control, New Inquiry (July 2, 2012), http://thenewinquiry.com/features/border-control/.

    [76] Trebor Scholz, Digital Labor (Palgrave, forthcoming, 2015); Frank Pasquale, Banana Republic.com, Jotwell (Jan. 14, 2011), http://cyber.jotwell.com/banana-republic-com/.

    [77] The Rise of Micro-Labor, On Point with Tom Ashbrook (NPR Apr. 3, 2012, 10:00 AM), http://onpoint.wbur.org/2012/04/03/micro-labor-websites.

    [78] Vacation Time, On Point with Tom Ashbrook (NPR June 22, 2012, 10:00 AM), http://onpoint.wbur.org/2012/06/22/vacation-time.

    [79] Peter Ryan, Aussies Must Compete with $2 a Day Workers: Rinehart, ABC News (Sept. 25, 2012, 2:56 PM), http://www.abc.net.au/news/2012-09-05/rinehart-says-aussie-workers-overpaid-unproductive/4243866.

    [80] Roberto Patricio Korzeniewicz & Timothy Patrick Moran, Unveiling Inequality, at xv (2012).

    [81] Ha Joon Chang, 23 Things They Don’t Tell You About Capitalism 98 (2012).

    [82] Jason Burke, Over 40% of Indian Children Are Malnourished, Report Finds, Guardian (Jan. 10, 2012), http://www.theguardian.com/world/2012/jan/10/child-malnutrition-india-national-shame.

    [83] Paul Farmer observes that “an understanding of poverty must be linked to efforts to end it.” Farmer, In the Company of the Poor, at http://www.pih.org/blog/in-the-company-of-the-poor.  The same could be said of extreme inequality.

  • Drones

    Drones

David Golumbia and David Simpson begin a conversation, inviting comment below or via email to boundary 2:

    What are we talking about when we talk about drones? Is it that they carry weapons (true of only a small fraction of UAVs), that they have remote, mobile surveillance capabilities (true of most UAVs, but also of many devices not currently thought of as drones), or that they have or may someday have forms of operational autonomy (a goal of many forms of robotics research)? Is it the technology itself, or the fact that it is currently being deployed largely by the world’s dominant powers, or the way it is being deployed? Is it the use of drones in specific military contexts, or the existence of those military conflicts per se (that is, if we endorsed a particular conflict, would the use of drones in that scenario be acceptable)? Is it that military use of drones leads to civilian casualties, despite the fact that other military tactics almost certainly lead to many more casualties (the total number of all persons, combatant and non-combatant, killed by drones to date by US operations worldwide is estimated at under 4000; the number of civilian casualties in the Iraq conflict alone even by conservative estimates exceeds 100,000 and may be as many as 500,000 or even more), a reduction in total casualties that forms part of the arguments used by some military and international law analysts to suggest that drone use is not merely acceptable but actually required under international law, which mandates that militaries use the least amount of lethal force available to them that will effectively achieve their goals? If we object to drones based on their use in targeted killings, do we accept their use for surveillance? If we object only to their use in targeted killing, does that objection proceed from the fact that drones fly, or do we actually object to all forms of automated or partly-automated lethal force, along the lines of the Stop Killer Robots campaign, whose scope goes well beyond drones, and yet does not include non-lethal drones? How do we define drones so as to capture what is objectionable about them on humanitarian and civil society grounds, given how rapidly the technology is advancing and how difficult it already is to distinguish some drones from other forms of technology, especially for surveillance? What do we do about the proliferating “positive” use cases for drones (journalism, remote information about forest fires and other environmental problems, for example), which are clearly being developed in part so as to sell drone technology in general to the public, but at least in some cases appear to describe vital functions that other technology cannot fulfill?

    David Golumbia

    _____

    What resources can we call upon, invent or reinvent in order to bring effective critical attention to the phenomenon of drone warfare? Can we revivify the functions of witness and testimony to protest or to curtail the spread of robotic lethal violence? What alliances can be pursued with the radical journalism sector (Medea Benjamin, Jeremy Scahill)? Is drone warfare inevitably implicated in a seamlessly continuous surveillance culture wherein all information is or can be weaponized? A predictable development in the command-control-communication-intelligence syndrome articulated some time ago by Donna Haraway? Can we hope to devise any enforceable boundaries between positive and destructive uses of the technology? Does it bring with it a specific aesthetics, whether for those piloting the drones or those on the receiving end? What is the profile of psychological effects (disorders?) among those observing and then killing at a distance? And what are the political obligations of a Congress and a Presidency able to turn to drone technology as arguably the most efficient form yet devised for deploying state terrorism? What are the ethical obligations of a superpower (or indeed a local power) that can now wage war with absolutely no risk to its own combatants?

    David Simpson

  • All Hitherto Existing Social Media

    All Hitherto Existing Social Media

a review of Christian Fuchs, Social Media: A Critical Introduction (Sage, 2013)
    by Zachary Loeb
    ~
    Legion are the books and articles describing the social media that has come before. Yet the tracts focusing on Friendster, LiveJournal, or MySpace now appear as throwbacks, nostalgically immortalizing the internet that was and is now gone. On the cusp of the next great amoeba-like expansion of the internet (wearable technology and the “internet of things”) it is a challenging task to analyze social media as a concept while recognizing that the platforms being focused upon—regardless of how permanent they seem—may go the way of Friendster by the end of the month. Granted, social media (and the companies whose monikers act as convenient shorthand for it) is an important topic today. Those living in highly digitized societies can hardly avoid the tendrils of social media (even if a person does not use a particular platform it may still be tracking them), but this does not mean that any of us fully understand these platforms, let alone have a critical conception of them. It is into this confused and confusing territory that Christian Fuchs steps with his Social Media: A Critical Introduction.

It is a book ostensibly targeted at students, though when it comes to social media—as Fuchs makes clear—everybody has quite a bit to learn.

    By deploying an analysis couched in Marxist and Critical Theory, Fuchs aims not simply to describe social media as it appears today, but to consider its hidden functions and biases, and along the way to describe what social media could become. The goal of Fuchs’s book is to provide readers—the target audience is students, after all—with the critical tools and proper questions with which to approach social media. While Fuchs devotes much of the book to discussing specific platforms (Google, Facebook, Twitter, WikiLeaks, Wikipedia), these case studies are used to establish a larger theoretical framework which can be applied to social media beyond these examples. Affirming the continued usefulness of Marxist and Frankfurt School critiques, Fuchs defines the aim of his text as being “to engage with the different forms of sociality on the internet in the context of society” (6) and emphasizes that the “critical” questions to be asked are those that “are concerned with questions of power” (7).

Thus a critical analysis of social media demands a careful accounting of the power structures involved not just in specific platforms, but in the larger society as a whole. So though Fuchs regularly returns to the examples of the Arab Spring and the Occupy Movement, he emphasizes that the narratives that dub these “Twitter revolutions” often come from a rather non-critical and generally pro-capitalist perspective that fails to adequately embed uses of digital technology in their larger contexts.

Social media is portrayed as an example, like other media, of “techno-social systems” (37) wherein the online platforms may receive the most attention but where the oft-ignored layer of material technologies is equally important. Social media, in Fuchs’s estimation, developed and expanded with the growth of “Web 2.0” and functions as part of the rebranding effort that revitalized (made safe for investments) the internet after the initial dot-com bubble. As Fuchs puts it, “the talk about novelty was aimed at attracting novel capital investments” (33). What makes social media a topic of such interest—and invested with so much hope and dread—is the degree to which social media users are considered as active creators instead of simply consumers of this content (Fuchs follows much recent scholarship and industry marketing in using the term “prosumers” to describe this phenomenon; the term originates from the 1970s business-friendly futurology of Alvin Toffler’s The Third Wave). Social media, in Fuchs’s description, represents a shift in the way that value is generated through labor, and as a result an alteration in the way that large capitalist firms appropriate surplus value from workers. The social media user is not laboring in a factory, but with every tap of a button they are performing work from which value (and profit) is skimmed.

    Without disavowing the hope that social media (and by extension the internet) has liberating potential, Fuchs emphasizes that such hopes often function as a way of hiding profit motives and capitalist ideologies. It is not that social media cannot potentially lead to “participatory democracy” but that “participatory culture” does not necessarily have much to do with democracy. Indeed, as Fuchs humorously notes: “participatory culture is a rather harmless concept mainly created by white boys with toys who love their toys” (58). This “love their toys” sentiment is part of the ideology that undergirds much of the optimism around social media—which allows for complex political occurrences (such as the Arab Spring) to be reduced to events that can be credited to software platforms.

    What Fuchs demonstrates at multiple junctures is the importance of recognizing that the usage of a given communication tool by a social movement does not mean that this tool brought about the movement: intersecting social, political and economic factors are the causes of social movements. In seeking to provide a “critical introduction” to social media, Fuchs rejects arguments that he sees as not suitably critical (including those of Henry Jenkins and Manuel Castells), arguments that at best have been insufficient and at worst have been advertisements masquerading as scholarship.

Though the time people spend on social media is often portrayed as “fun” or “creative,” Fuchs recasts these tasks as work in order to demonstrate how that time is exploited by the owners of social media platforms. By clicking on links, writing comments, performing web searches, sending tweets, uploading videos, and posting on Facebook, social media users are performing unpaid labor that generates a product (in the form of information about users) that can then be sold to advertisers and data aggregators; this sale generates profits for the platform owner which do not accrue back to the original user. Though social media users are granted “free” access to a service, it is their labor on that platform that gives the platform any value—Facebook and Twitter would not have a commodity to sell to advertisers if they did not have millions of users working for them for free. As Fuchs describes it, “the outsourcing of work to consumers is a general tendency of contemporary capitalism” (111).

screen shot of a Karl Marx Community Page on Facebook

    While miners of raw materials and workers in assembly plants are still brutally exploited—and this unseen exploitation forms a critical part of the economic base of computer technology—the exploitation of social media users is given a gloss of “fun” and “creativity.” Fuchs does not suggest that social media use is fully akin to working in a factory, but that users carry the factory with them at all times (a smart phone, for example) and are creating surplus value as long as they are interacting with social media. Instead of being a post-work utopia, Fuchs emphasizes that “the existence of the internet in its current dominant capitalist form is based on various forms of labour” (121) and the enrichment of internet firms is reliant upon the exploitation of those various forms of labor—central amongst these being the social media user.

Fuchs considers five specific platforms in detail so as to illustrate not simply the current state of affairs but also to point towards possible alternatives. Fuchs analyzes Google, Facebook, Twitter, WikiLeaks and Wikipedia as case studies of trends to encourage and trends of which to take wary notice. In his analysis of the three corporate platforms (Google, Facebook and Twitter) Fuchs emphasizes the ways in which these social media companies (and the moguls who run them) have become wealthy and powerful by extracting value from the labor of users and by subjecting users to constant surveillance. The corporate platforms give Fuchs the opportunity to consider various social media issues in sharper relief: labor and monopolization in the case of Google, surveillance and privacy in the case of Facebook, and the potential for an online public sphere in the case of Twitter. Despite his criticisms, Fuchs does not dismiss the value and utility of what these platforms offer, as is captured in his claim that “Google is at the same time the best and the worst thing that has ever happened on the internet” (147). The corporate platforms’ successes are owed at least partly to their delivering desirable functions to users. The corrective for which Fuchs argues is increased democratic control of these platforms—for the labor to be compensated and for privacy to pertain to individual humans instead of to businesses’ proprietary methods of control. Indeed, one cannot get far with a “participatory culture” unless there is a similarly robust “participatory democracy,” and part of Fuchs’s goal is to show that these are not at all the same.

    WikiLeaks and Wikipedia both serve as real examples that demonstrate the potential of an “alternative” internet for Fuchs. Though these Wiki platforms are not ideal they contain within themselves the seeds for their own adaptive development (“WikiLeaks is its own alternative”—232), and serve for Fuchs as proof that the internet can move in a direction akin to a “commons.” As Fuchs puts it, “the primary political task for concerned citizens should therefore be to resist the commodification of everything and to strive for democratizing the economy and the internet” (248), a goal he sees as at least partly realized in Wikipedia.

    While the outlines of the internet’s future may seem to have been written already, Fuchs’s book is an argument in favor of the view that the code can still be altered. A different future relies upon confronting the reality of the online world as it currently is and recognizing that the battles waged for control of the internet are proxy battles in the conflict between capitalism and an alternative approach. In the conclusion of the book Fuchs eloquently condenses his view and the argument that follows from it in two simple sentences: “A just society is a classless society. A just internet is a classless internet” (257). It is a sentiment likely to spark an invigorating discussion, be it in a classroom, at a kitchen table, or in a café.

    * * *

While Social Media: A Critical Introduction is clearly intended as a textbook (each chapter ends with a “recommended readings and exercises” section), it is written in an impassioned and engaging style that will appeal to anyone who would like to see a critical gaze turned towards social media. Fuchs structures his book so that his arguments will remain relevant even if some of the platforms about which he writes vanish. Even the chapters in which Fuchs focuses on a specific platform are filled with larger arguments that transcend that platform. Indeed, one of the primary strengths of Social Media is that Fuchs skillfully uses the familiar examples of social media platforms as a way of introducing the reader to complex theories and thinkers (from Marx to Habermas).

    Whereas Fuchs accuses some other scholars of subtly hiding their ideological agendas, no such argument can be made regarding Fuchs himself. Social Media is a Marxist critique of the major online platforms—not simply because Fuchs deploys Marx (and other Marxist theorists) to construct his arguments, but because of his assumption that the desirable alternative for the internet is part and parcel of a desirable alternative to capitalism. Such a sentiment can be found at several points throughout the book, but is made particularly evident by lines such as these from the book’s conclusion: “There seem to be only two options today: (a) continuance and intensification of the 200-year-old barbarity of capitalism or (b) socialism” (259)—it is a rather stark choice. It is precisely due to Fuchs’s willingness to stake out, and stick to, such political positions that this text is so effective.

And yet, it is the very allegiance to such positions that also presents something of a problem. While much has been written of late—in the popular press as well as by scholars—regarding issues of privacy and surveillance, Fuchs’s arguments about the need to consider users as exploited workers will likely strike many readers as new, and thus worthwhile in their novelty if nothing else. Granted, fully going along with Fuchs’s critique requires readers to already be in agreement, or at least relatively sympathetic, with Fuchs’s political and ethical positions. This is particularly true as Fuchs excels at making an argument about media and technology, but devotes significantly fewer pages to ethical argumentation.

The lines (quoted earlier) “A just society is a classless society. A just internet is a classless internet” (257) serve as much as a provocation as a conclusion. For those who subscribe to a similar notion of “a just society,” Fuchs’s book will likely function as an important guide to thinking about the internet; however, to those whose vision of “a just society” is fundamentally different from his, Fuchs’s book may be less than convincing. Social Media does not present a complete argument about how one defines a “just society.” Indeed, the danger may be that Fuchs’s statements in praise of a “classless society” may lead some to dismiss his arguments regarding the way in which the internet has replicated a “class society.” Likewise, it is easy to imagine a retort being offered that the new platforms of “the sharing economy” represent the birth of this “classless society” (though it is easy to imagine Fuchs pointing out, as have other critics from the left, that the “sharing economy” is simply more advertising lingo being used to hide the same old capitalist relations). This represents something of a peculiar challenge when it comes to Social Media, as the political commitment of the book is simultaneously what makes it so effective and what threatens its political efficacy.

    Thus Social Media presents something of a conundrum: how effective is a critical introduction if its conclusion offers a heads-and-tails choice between “barbarity of capitalism or…socialism”? Such a choice feels slightly as though Fuchs is begging the question. While it is curious that Fuchs does not draw upon critical theorists’ writings about the culture industry, the main issues with Social Media seem to be reflections of this black-and-white choice. Thus it is something of a missed chance that Fuchs does not draw upon some of the more serious critics of technology (such as Ellul or Mumford)—whose hard edged skepticism would nevertheless likely not accept Fuchs’s Marxist orientation. Such thinkers might provide a very different perspective on the choice between “capitalism” and “socialism”—arguing that “technique” or “the megamachine” can function quite effectively in either. Though Fuchs draws heavily upon thinkers in the Marxist tradition it may be that another set of insights and critiques might have been gained by bringing in other critics of technology (Hans Jonas, Peter Kropotkin, Albert Borgmann)—especially as some of these thinkers had warned that Marxism may overvalue the technological as much as capitalism does. This is not to argue in favor of any of these particular theorists, but to suggest that Fuchs’s claims would have been strengthened by devoting more time to considering the views of those who were critical of technology, capitalism and of Marxism. Social Media does an excellent job of confronting the ideological forces on its right flank; it could have benefited from at least acknowledging the critics to its left.

Two other areas that remain somewhat troubling are Fuchs’s treatment of Wiki platforms and of the materiality of technology. The optimism with which Fuchs approaches WikiLeaks and Wikipedia is understandable given the dourness with which he approaches the corporate platforms, and yet his hopes for them seem somewhat exaggerated. Fuchs claims “Wikipedians are prototypical contemporary communists” (243), partially to suggest that many people are already engaged in commons-based online activities, and yet it is an argument that he simultaneously undermines by admitting (importantly) that Wikipedia’s editor base is hardly representative of all of the platform’s users (it’s back to the “white boys with toys who love their toys”), and some have alleged that putatively structureless models of organization like Wikipedia’s actually encourage oligarchical forms of order. Which is itself not to say anything about the role that editing “bots” play on the platform or the degree to which Wikipedia is reliant upon corporate platforms (like Google) for promotion. Similarly, without ignoring its value, the example of WikiLeaks seems odd at a moment when the organization seems primarily engaged in a rearguard self-defense, while the leaks that have generated the most interest of late have been made to journalists at traditional news sources (Edward Snowden’s leaks to Glenn Greenwald, who was writing for The Guardian when the leaks began).

The further challenge—and this is one that Fuchs is not alone in contending with—is the trouble posed by the materiality of technology. An important aspect of Social Media is that Fuchs considers the often-unseen exploitation and repression upon which the internet relies: miners, laborers who build devices, those who recycle or live among toxic e-waste. Yet these workers seem to disappear from the arguments in the later part of the book, which in turn raises the following question: even if every social media platform were to be transformed into a non-profit commons-based platform that resists surveillance, manipulation, and the exploitation of its users, is such a platform genuinely just if to use it one must rely on devices whose minerals were mined in warzones, assembled in sweatshops, and which will eventually go to an early grave in a toxic dump? What good is a “classless (digital) society” without a “classless world”? Perhaps the question of a “capitalist internet” is itself a distraction from the fact that the “capitalist internet” is what one gets from capitalist technology. Granted, given Fuchs’s larger argument it may be fair to infer that he would portray “capitalist technology” as part of the problem. Yet, if the statement “a just society is a classless society” is to be genuinely meaningful then this must extend not just to those who use a social media platform but to all of those involved, from the miner to the manufacturer to the programmer to the user to the recycler. To pose the matter as a question: can there be participatory (digital) democracy that relies on serious exploitation of labor and resources?

Social Media: A Critical Introduction provides exactly what its title promises—a critical introduction. Fuchs has constructed an engaging and interesting text that shows the continuing validity of older theories and skillfully demonstrates the way in which the seeming newness of the internet is itself simply a new face on an old system. While Fuchs has constructed an argument that resolutely holds its position, it does so from a stance that one does not encounter often enough in debates around social media, and it will provide readers with a range of new questions with which to wrestle.

    It remains unclear in what ways social media will develop in the future, but Christian Fuchs’s book will be an important tool for interpreting these changes—even if what is in store is more “barbarity.”
    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian” Loeb writes at the blog librarianshipwreck. He previously reviewed The People’s Platform by Astra Taylor for boundary2.org.

  • From the Decision to the Digital

    From the Decision to the Digital


    a review of Alexander R. Galloway, Laruelle: Against the Digital

    by Andrew Culp

    ~
Alexander R. Galloway’s forthcoming Laruelle: Against the Digital is a welcome and original entry in the discussion of French theorist François Laruelle’s thought. The book is at once pedagogical and creative: it succinctly summarizes important aspects of Laruelle’s substantial oeuvre by placing his thought within the more familiar terrain of popular philosophies of difference (most notably the work of Gilles Deleuze and Alain Badiou) and creatively extends Laruelle’s work through a series of fourteen axioms.

The book is a bridge between current Anglophone scholarship on Laruelle, which largely treats Laruelle’s non-standard philosophy through an extension of problematics common to contemporary continental philosophy (Mullarkey 2006, Mullarkey and Smith 2012, Smith 2013, Gangle 2013, Kolozova 2014), and such scholarship’s maturation, which blazes new territory because it takes thought to be “an exercise in perpetual innovation” (Brassier 2003, 25). As such, Laruelle: Against the Digital stands out from other scholarship in that it is not primarily a work of exposition or application of the axioms laid out by Laruelle. This approach is apparent from the beginning, where Galloway declares that he is not a foot soldier in Laruelle’s army and that he does not proceed by way of Laruelle’s “non-philosophical” method (a method so thoroughly abstract that Laruelle appears to be the inheritor of French rationalism, though in his terminology, philosophy should remain only as “raw material” to carry thinking beyond philosophy’s image of thought). The significance of Galloway’s Laruelle is that he instead produces his own axioms, which follow from non-philosophy but are of his own design, and takes aim at a different target: the digital.

    The Laruellian Kernel

    Are philosophers no better than creationists? Philosophers may claim to hate irrationalist leaps of faith, but Laruelle locates such leaps precisely in philosophers’ own narcissistic origin stories. This argument follows from Chapter One of Galloway’s Laruelle, which outlines how all philosophy begins with the world as ‘fact.’ For example: the atomists begin with change, Kant with empirical judgment, and Fichte with the principle of identity. And because facts do not speak for themselves, philosophy elects for itself a second task — after establishing what ‘is’ — inventing a form of thought to reflect on the world. Philosophy thus arises out of a brash entitlement: the world exists to be thought. Galloway reminds us of this through Gottfried Leibniz, who tells us that “everything in the world happens for a specific reason” (and it is the job of philosophers to identify it), and Alfred North Whitehead, who alternatively says, “no actual entity, then no reason” (so it is up to philosophers to find one).

For Laruelle, various philosophies are but variations on a single approach that first begins by positing how the world presents itself, and second determines the mode of thought that is the appropriate response. Between the two halves, Laruelle finds a grand division: appearance/presence, essence/instance, Being/beings. Laruelle’s key claim is that philosophy cannot think the division itself. The consequence is that such a division is tantamount to cheating, as it wills thought into being through an original thoughtless act. This act of thoughtlessly splitting the world in half is what Laruelle calls “the philosophical decision.”

Philosophy need not wait for Laruelle to be demoted, as it has already done this for itself; no longer the queen of the sciences, philosophy seems superfluous to the most harrowing realities of contemporary life. The recent focus on Laruelle did indeed come from a reinvigoration of philosophy that goes under the name ‘speculative realism.’ Certainly there are affinities between Laruelle and these philosophers — the early case was built by Ray Brassier, who emphasizes that Laruelle earnestly adopts an anti-correlationist position similar to the one suggested by Quentin Meillassoux and distances himself from postmodern constructivism as much as other realists, all by positing the One as the Real. It is on the issue of philosophy, however, that Laruelle is most at odds with the irascible thinkers of speculative realism, for non-philosophy is not a revolt against philosophy, nor is it a patronizing correction of how others see reality.1 Galloway argues that non-philosophy should be considered materialist. He attributes to Laruelle a mix of empiricism, realism, and materialism but qualifies non-philosophy’s approach to the real as not a matter of the givenness of empirical reality but of lived experience (vécu) (Galloway, Laruelle, 24-25). The point of non-philosophy is to withdraw from philosophy by short-circuiting the attempt to reflect on what supposedly exists. To be clear: such withdrawal is not an anti-philosophy. Non-philosophy suspends philosophy, but also raids it for its own rigorous pursuit: an axiomatic investigation of the generic.2

    From Decision to Digital

A sharp focus on the concept of “the digital” is Galloway’s main contribution — a concept not at the forefront of Laruelle’s work, but of great interest to all of us today. Drawing from non-philosophy’s basic insight, Galloway’s goal in Laruelle is to demonstrate the “special connection” shared by philosophy and the digital (15). Galloway asks his readers to consider a withdrawal from digitality that is parallel to the non-philosophical withdrawal from philosophy.

Just as Laruelle discovered the original division about which philosophy must remain silent, Galloway finds that the digital is the “basic distinction that makes it possible to make any distinction at all” (Laruelle, 26). Certainly the digital-analog opposition survives this reworking, but not as one might assume. Gone are the usual notions of online-offline, new-old, stepwise-continuous variation, etc. To maintain these definitions presupposes the digital, or as Galloway defines it, “the capacity to divide things and make distinctions between them” (26). Non-philosophy’s analogy for the digital thus becomes the processes of distinction and decision themselves.

    The dialectic is where Galloway provocatively traces the history of digitality. This is because he argues that digitality is “not so much 0 and 1” but “1 and 2” (Galloway, Laruelle, 26). Drawing on Marxist definitions of the dialectical process, he defines the movement from one to two as analysis, while the movement from two to one is synthesis (26-27). In this way, Laruelle can say that, “Hegel is dead, but he lives on inside the electric calculator” (Introduction aux sciences génériques, 28, qtd in Galloway, Laruelle, 32). Playing Badiou and Deleuze off of each other, as he does throughout the book, Galloway subsequently outlines the political stakes between them — with Badiou establishing clear reference points through the argument that analysis is for leftists and synthesis for reactionaries, and Deleuze as a progenitor of non-philosophy still too tied to the world of difference but shrewd enough to have a Spinozist distaste for both movements of the dialectic (Laruelle, 27-30). Galloway looks to Laruelle to get beyond Badiou’s analytic leftism and Deleuze’s “Spinozist grand compromise” (30). His proposal is a withdrawal in the name of indecision that demands abstention from digitality’s attempt to “encode and simulate anything whatsoever in the universe” (31).

    Insufficiency

    Insufficiency is the idea into which Galloway sharpens the stakes of non-philosophy. In doing so, he does to Laruelle what Deleuze does to Spinoza. While Deleuze refashions philosophy into the pursuit of adequate knowledge, the eminently practical task of understanding the conditions of chance encounters enough to gain the capacity to influence them, Galloway makes non-philosophy into the labor of inadequacy, a mode of thought that embraces the event of creation through a withdrawal from decision. If Deleuze turns Spinoza into a pragmatist, then Galloway turns Laruelle into a nihilist.

There are echoes of Massimo Cacciari, Giorgio Agamben, and Afro-pessimism in Galloway’s Laruelle. This is because he uses nihilism’s marriage of withdrawal, opacity, and darkness as his orientation to politics, ethics, and aesthetics. From Cacciari, Galloway borrows a politics of non-compromise. But while the Italian Autonomist Marxist milieu of which Cacciari’s negative thought is characteristic emphasizes subjectivity, non-philosophy takes the subject to be one of philosophy’s dirty sins and makes no place for it. Yet Galloway is not shy about bringing up examples, such as Bartleby, Occupy, and other figures of non-action. Though, as in Agamben, Galloway’s figures only gain significance in their insufficiency. “The more I am anonymous, the more I am present,” Galloway repeats from Tiqqun to axiomatically argue the centrality of opacity (233-236). There is also a strange affinity between Galloway and Afro-pessimists, who both oppose the integrationist tendencies of representational systems ultimately premised on the exclusion, exploitation, and elimination of blackness. In spite of potential differences, they both define blackness as an absolute foreclosure to being, from which Galloway is determined to “channel that great saint of radical blackness, Toussaint Louverture,” in order to bring about a “cataclysm of human color” through the “blanket totality of black” that “renders color invalid” and brings about “a new uchromia, a new color utopia rooted in the generic black universe” (188-189). What remains an open question is: how does such a formulation of the generic depart from the philosophy of difference’s becoming-minor, whereby liberation must first pass through the figures of the woman, the fugitive, and the foreigner?

    Actually Existing Digitality

One could read Laruelle not as urging thought to become more practical, but to become less so. Evidence for such a claim comes in his retreat to dense abstract writing and a strong insistence against providing examples. Each is an effect of non-philosophy’s approach, which is both rigorous and generic. There are those who object, perhaps justifiably, to Laruelle’s style for taking too many liberties with his prose; most considerations tend to make up for such flights of fancy by putting non-philosophy in communication with more familiar philosophies of difference (Mullarkey 2006; Kolozova 2014). Yet the strangeness of the non-philosophical method is not a stylistic choice intended to encourage reflection. Non-philosophy is quite explicitly not a philosophy of difference — Laruelle’s landmark Philosophies of Difference is an indictment of Hegel, Heidegger, Nietzsche, Derrida, and Deleuze. To this end, non-philosophy does not seek to promote thought through marginality, Otherness, or any other form of alterity.

Readers who have hitherto been frustrated with non-philosophy’s impenetrability may be more attracted to the second part of Galloway’s Laruelle. In part two, Galloway addresses actually existing digitality, such as computers and capitalism. This part also includes a contribution to the ethical turn, which is premised on a geometrically neat set of axioms whereby ethics is the One and politics is the division of the One into two. He develops each chapter through numerous examples, many of them concrete, that help fold non-philosophical terms into discussions with long-established significance. For instance, Galloway makes his way through a chapter on art and utopia with the help of James Turrell’s light art, Laruelle’s Concept of Non-Photography, and August von Briesen’s automatic drawing (194-218). The book is over three hundred pages long, so most readers will probably appreciate the brevity of many of the chapters in part two. The chapters are short enough to be impressionistic while implying that a treatment as fully rigorous as non-philosophy demands would run much longer.

    Questions

While Galloway’s diagrammatic thinking is very clear, I find it more difficult to determine, during his philosophical expositions, whether he is embracing or criticizing a concept. The difficulty of such determinations is compounded by the ambivalence of the non-philosophical method, which adopts philosophy as its raw material while simultaneously declaring that philosophical concepts are insufficient. My second fear is that while Galloway is quite adept at wielding his reworked concept of ‘the digital,’ his own trademark rigor may be lost when taken up by less judicious scholars. In particular, his attack on digitality could form the footnote for a disingenuous defense of everything analog.

There is also something deeper at stake: What if we are in the age of non-representation? From the modernists to Rancière and Occupy, we have copious examples of non-representational aesthetics and politics. But perhaps all previous philosophy has only gestured at non-representational thought, and non-philosophy is the first to realize this goal. If so, then a fundamental objection could be raised about both Galloway’s Laruelle and non-philosophy in general: is non-philosophy properly non-thinking or is it just plain not thinking? Galloway’s axiomatic approach is a refreshing counterpoint to Laruelle’s routine circumlocution. Yet a number of the key concepts that non-philosophy provides are still frustratingly elusive. Unlike the targets of Laruelle’s criticism, Derrida and Deleuze, non-philosophy strives to avoid the obscuring effects of aporia and paradox — so is its own use of opacity simply playing coy, or is it to be understood purely as a statement that the emperor has no clothes? While I am intrigued by anexact concepts such as ‘the prevent,’ and I understand the basic critique of the standard model of philosophy, I am still not sure what non-philosophy does. Perhaps that is an unfair question given the sterility of the One. But as Hardt and Negri remind us in the epigraph to Empire, “every tool is a weapon if you hold it right.” We now know that non-philosophy cuts — what remains to be seen is where and how deeply.
    _____

Andrew Culp is a Visiting Assistant Professor of Rhetoric Studies at Whitman College. He specializes in cultural-communicative theories of power, the politics of emerging media, and gendered responses to urbanization. In his current project, Escape, he explores the apathy, distraction, and cultural exhaustion born from the 24/7 demands of an ‘always-on’ media-driven society. His work has appeared in Radical Philosophy, Angelaki, Affinities, and other venues.

    _____

    Notes

    1. There are two qualifications worth mentioning: first, Laruelle presents non-philosophy as a scientific enterprise. There is little proximity between non-philosophy’s scientific approach and other sciences, such as techno-science, big science, scientific modernity, modern rationality, or the scientific method. Perhaps it is closest to Althusser’s science, but some more detailed specification of this point would be welcome.
    Back to the essay

2. Galloway lays out the non-philosophy of generic immanence, the One, in Chapter Two of Laruelle. Though important, this summary of Laruelle’s version of immanence is not Galloway’s main contribution and thus not the focus of this review. Substantial summaries of this sort are already available, including Mullarkey 2006 and Smith 2013.
    Back to the essay

    Bibliography

    Brassier, Ray (2003) “Axiomatic Heresy: The Non-Philosophy of François Laruelle,” Radical Philosophy 121.
    Gangle, Rocco (2013) François Laruelle’s Philosophies of Difference (Edinburgh, UK: Edinburgh University Press).
Hardt, Michael and Antonio Negri (2000) Empire (Cambridge, MA: Harvard University Press).
Kolozova, Katerina (2014) Cut of the Real (New York, USA: Columbia University Press).
    Laruelle, François (2010/1986) Philosophies of Difference (London, UK and New York, USA: Continuum).
    Laruelle, François (2011) Concept of Non-Photography (Falmouth, UK: Urbanomic).
    Mullarkey, John (2006) Post-Continental Philosophy (London, UK: Continuum).
    Mullarkey, John and Anthony Paul Smith (eds) (2012) Laruelle and Non-Philosophy (Edinburgh, UK: Edinburgh University Press).
    Smith, Anthony Paul (2013) A Non-Philosophical Theory of Nature (New York, USA: Palgrave Macmillan).

  • Henry A. Giroux — The Responsibility of Intellectuals in the Shadow of the Atomic Plague

    Henry A. Giroux — The Responsibility of Intellectuals in the Shadow of the Atomic Plague

    by Henry A. Giroux

Seventy years after the horror of Hiroshima, intellectuals negotiate a vastly changed cultural, political and moral geography. Pondering what Hiroshima means for American history and consciousness proves as fraught an intellectual exercise as taking up this critical issue in the years and the decades that followed this staggering inhumanity, albeit for vastly different reasons. Now that we are living in a 24/7 screen culture hawking incessant apocalypse, how we understand Foucault’s pregnant observation that history is always a history of the present takes on a greater significance, especially in light of the fact that historical memory is not simply being rewritten but is disappearing.1 Once an emancipatory pedagogical and political project predicated on the right to study and engage the past critically, history has receded into a depoliticizing culture of consumerism, a wholesale attack on science, the glorification of military ideals, an embrace of the punishing state, and a nostalgic invocation of the greatest generation. Inscribed in insipid patriotic platitudes and decontextualized isolated facts, history under the reign of neoliberalism has been either cleansed of its most critical impulses and dangerous memories, or it has been reduced to a contrived narrative that sustains the fictions and ideologies of the rich and powerful. History has not only become a site of collective amnesia but has also been appropriated so as to transform “the past into a container full of colorful or colorless, appetizing or insipid bits, all floating with the same specific gravity.”2 Consequently, what intellectuals now have to say about Hiroshima and history in general is not of the slightest interest to nine tenths of the American population. While writers of fiction might find such generalized public indifference to their craft freeing, even “inebriating,” as Philip Roth has recently written, for the chroniclers of history it is a cry in the wilderness.3

At the same time, the legacy of Hiroshima is present but barely grasped, as the existential anxieties and dread of nuclear annihilation that racked the early 1950s have given way to a contemporary fundamentalist fatalism embodied in collective uncertainty, a predilection for apocalyptic violence, a political economy of disposability, and an expanding culture of cruelty that has fused with the entertainment industry. We’ve not produced a generation of war protestors or government agitators, to be sure, but rather a generation of youth who no longer believe they have a future that will be any different from the present.4 That such connections tying the past to the present are lost signals not merely the emergence of a disimagination machine that wages an assault on historical memory, civic literacy, and civic agency. It also points to a historical shift in which the perpetual disappearance of that atomic moment signals a further deepening of our own national psychosis.

If, as Edward Glover once observed, “Hiroshima and Nagasaki had rendered actual the most extreme fantasies of world destruction encountered in the insane or in the nightmares of ordinary people,” the neoliberal disimagination machine has rendered such horrific reality a collective fantasy driven by the spectacle of violence, nourished by sensationalism, and reinforced by the scourge of commodified and trivialized entertainment.5 The disimagination machine threatens democratic public life by devaluing social agency, historical memory, and critical consciousness, and in doing so it creates the conditions for people to be ethically compromised and politically infantilized. Returning to Hiroshima is not only necessary to break out of the moral cocoon that puts reason and memory to sleep but also to rediscover our imaginative capacities for civic literacy on behalf of the public good, especially if such action demands that we remember, as Robert Jay Lifton and Greg Mitchell remark, that “Every small act of violence, then, has some connection with, if not sanction from, the violence of Hiroshima and Nagasaki.”6

On Monday, August 6, 1945, the United States unleashed an atomic bomb on Hiroshima, killing 70,000 people instantly and another 70,000 within five years—an opening volley in a nuclear campaign visited on Nagasaki in the days that followed.7 In the immediate aftermath, the incineration of mostly innocent civilians was buried in official government pronouncements about the victory of the bombings of both Hiroshima and Nagasaki. The atomic bomb was celebrated by those who argued that its use was responsible for concluding the war with Japan. Also applauded was the power of the bomb and the wonder of science in creating it, especially “the atmosphere of technological fanaticism” in which scientists worked to create the most powerful weapon of destruction then known to the world.8 The conventional justification for dropping the atomic bombs held that “it was the most expedient measure to securing Japan’s surrender [and] that the bomb was used to shorten the agony of war and to save American lives.”9 Left out of that succinct legitimating narrative were the growing objections to the use of atomic weaponry put forth by a number of top military leaders and politicians, including General Dwight Eisenhower, who was then the Supreme Allied Commander in Europe, former President Herbert Hoover, and General Douglas MacArthur, all of whom argued it was not necessary to end the war.10 It was a position later proven to be correct.

For a brief time, the Atom Bomb was celebrated as a kind of magic talisman entwining salvation and scientific inventiveness, and in doing so it functioned to “simultaneously domesticate the unimaginable while charging the mundane surroundings of our everyday lives with a weight and sense of importance unmatched in modern times.”11 In spite of the initial celebration of the effects of the bomb and the orthodox defense that accompanied it, whatever positive value the bomb may have had among the American public, intellectuals, and popular media began to dissipate as more and more people became aware of the massive death, suffering, and misery it caused.12

Kenzaburo Oe, the Nobel Prize winner for Literature, noted that, in spite of attempts to justify the bombing, “from the instant the atomic bomb exploded, it [soon] became the symbol of human evil, [embodying] the absolute evil of war.”13 What particularly troubled Oe was the scientific and intellectual complicity in the creation of the bomb and in the lobbying for its use, with an acute awareness that it would turn Hiroshima into a “vast ugly death chamber.”14 More pointedly, it revealed a new stage in the merging of military actions and scientific methods, indeed a new era in which the technology of destruction could destroy the earth in roughly the time it takes to boil an egg. The bombing of Hiroshima extended a new industrially enabled kind of violence and warfare in which the distinction between soldiers and civilians disappeared and the indiscriminate bombing of civilians was normalized. But more than this, the American government exhibited a “total embrace of the atom bomb” that signaled support, for the first time, for a “notion of unbounded annihilation” and “the totality of destruction.”15

Hiroshima designated the beginning of the nuclear era in which, as Oh Jung points out, “Combatants were engaged on a path toward total war in which technological advances, coupled with the increasing effectiveness of an air strategy, began to undermine the ethical view that civilians should not be targeted… This pattern of wholesale destruction blurred the distinction between military and civilian casualties.”16 The destructive power of the bomb and its use on civilians also marked a turning point in American self-identity in which the United States began to think of itself as a superpower, which, as Robert Jay Lifton points out, refers to “a national mindset–put forward strongly by a tight-knit leadership group–that takes on a sense of omnipotence, of unique standing in the world that grants it the right to hold sway over all other nations.”17 The power of the scientific imagination and its murderous deployment gave birth simultaneously to the American disimagination machine with its capacity to rewrite history in order to render it an irrelevant relic best forgotten.

What remains particularly ghastly about the rationale for dropping two atomic bombs is the attempt on the part of its defenders to construct a redemptive narrative through a perversion of humanistic commitment, of mass slaughter justified in the name of saving lives and winning the war.18 This was a humanism under siege, transformed into its terrifying opposite and placed on the side of what Edmund Wilson called the Faustian possibility of a grotesque “plague and annihilation.”19 In part, Hiroshima represented the achieved transcendence of military metaphysics, now a defining feature of national identity, its more poisonous and powerful investment in the cult of scientism, instrumental rationality, and technological fanaticism—and the simultaneous marginalization of scientific evidence and intellectual rigour, even reason itself. That Hiroshima was used to redefine America’s “national mission and its utopian possibilities”20 was nothing short of what the late historian Howard Zinn called a “devastating commentary on our moral culture.”21 More pointedly, it serves as a grim commentary on our national sanity. In most of these cases, matters of morality and justice were dissolved into technical questions and reductive chauvinism relating to matters of governmentally massaged efficiency, scientific “expertise”, and American exceptionalism. As Robert Jay Lifton and Greg Mitchell stated, the atom bomb was symbolic of the power of post-war America rather than a “ruthless weapon of indiscriminate destruction,” which conveniently put to rest painful questions concerning justice, morality, and ethical responsibility. They write:

    Our official narrative precluded anything suggesting atonement. Rather the bomb itself had to be “redeemed”: As “a frightening manifestation of technological evil … it needed to be reformed, transformed, managed, or turned into the vehicle of a promising future,” [as historian M. Susan] Lindee argued. “It was necessary, somehow, to redeem the bomb.” In other words, to avoid historical and moral responsibility, we acted immorally and claimed virtue. We sank deeper, that is, into moral inversion.22

This narrative of redemption was soon challenged by a number of historians who argued that the dropping of the atom bomb had less to do with winning the war than with an attempt to put pressure on the Soviet Union not to expand its empire into territory deemed essential to American interests.23 Protecting America’s superiority in a potential Soviet-American conflict was a decisive factor in dropping the bomb. In addition, the Truman administration needed to provide legitimation to Congress for the staggering sums of money spent on the Manhattan Project in developing the atomic weapons program and for procuring future funding necessary to continue military appropriations for ongoing research long after the war ended.24 Howard Zinn goes even further, asserting that the government’s weak defense for the bombing of Hiroshima was not only false but was complicitous with an act of terrorism. Refusing to relinquish his role as a public intellectual willing to hold power accountable, he writes, “Can we … comprehend the killing of 200,000 people to make a point about American power?”25 A number of historians, including Gar Alperovitz and Tsuyoshi Hasegawa, also attempted to deflate this official defense of Hiroshima by providing counter-evidence that the Japanese were ready to surrender as a result of a number of factors, including the nonstop bombing of 26 cities before Hiroshima and Nagasaki, the success of the naval and military blockade of Japan, and the Soviets’ entrance into the war on August 9th.26

The narrative of redemption and the criticism it provoked are important for understanding the role that intellectuals assumed at this historical moment to address what would be the beginning of the nuclear weapons era and how that role for critics of the nuclear arms race has faded somewhat at the beginning of the twenty-first century. Historical reflection on this tragic foray into the nuclear age reveals the decades-long dismantling of a culture’s infrastructure of ideas, its growing intolerance for critical thought in light of the pressures placed on media, on universities, and on increasingly isolated intellectuals to support comforting mythologies and official narratives, and thus to cede the responsibility to give effective voice to unpopular realities.

Within a short time after the dropping of the atom bombs on Hiroshima and Nagasaki, John Hersey wrote a devastating description of the misery and suffering caused by the bomb. Removing the bomb from abstract arguments endorsing matters of technique, efficiency, and national honor, Hersey first published in The New Yorker and later in a widely read book an exhaustive and terrifying description of the bomb’s effects on the people of Hiroshima, portraying in detail the horror of the suffering caused by the bomb. There is one haunting passage that not only illustrates the horror of the pain and suffering, but also offers a powerful metaphor for the blindness that overtook both the victims and the perpetrators. He writes:

    On his way back with the water, [Father Kleinsorge] got lost on a detour around a fallen tree, and as he looked for his way through the woods, he heard a voice ask from the underbrush, ‘Have you anything to drink?’ He saw a uniform. Thinking there was just one soldier, he approached with the water. When he had penetrated the bushes, he saw there were about twenty men, they were all in exactly the same nightmarish state: their faces were wholly burned, their eye sockets were hollow, the fluid from their melted eyes had run down their cheeks. Their mouths were mere swollen, pus-covered wounds, which they could not bear to stretch enough to admit the spout of the teapot.27

The nightmarish image of fallen soldiers staring with hollow sockets, eyes liquefied on their cheeks, and mouths swollen and pus-filled, stands as a warning to those who would blindly refuse the moral witnessing necessary to keep alive for future generations the memory of the horror of nuclear weapons and the need to eliminate them. Hersey’s literal depiction of mass violence against civilians serves as a kind of mirrored doubling, referring at one level to nations blindly driven by militarism and hyper-nationalism. At another level, victims soon come to mimic their perpetrators, seizing upon their own victimization as a rationale to become blind to their own injustices.

Pearl Harbor enabled Americans to view themselves as the victims, but they then assumed the identity of the perpetrators and became willfully blind to the United States’ own escalation of violence and injustice. Employing both a poisonous racism and a weapon of mad violence against the Japanese people, the US government imagined Japan as the ultimate enemy, and then pursued tactics that blinded the American public to its own humanity and in doing so became its own worst enemy by turning against its most cherished democratic principles. In a sense, this self-imposed sightlessness functioned as part of what Jacques Derrida once called a societal autoimmune response, one in which the body’s immune system attacked its own bodily defenses.28 Fortunately, this state of political and moral blindness did not extend to a number of critics who, for the next fifty years, railed aggressively against the dropping of the atomic bombs and the beginning of the nuclear age.

    Responding to Hersey’s article on the bombing of Hiroshima published in The New Yorker, Mary McCarthy argued that he had reduced the bombing to the same level of journalism used to report natural catastrophes such as “fires, floods, and earthquakes” and in doing so had reduced a grotesque act of barbarism to “a human interest story” that had failed to grasp the bomb’s nihilism, and the role that “bombers, the scientists, the government” and others played in producing this monstrous act.29 McCarthy was alarmed that Hersey had “failed to consider why it was used, who was responsible, and whether it had been necessary.”30 McCarthy was only partly right. While it was true that Hersey didn’t tackle the larger political, cultural and social conditions of the event’s unfolding, his article provided one of the few detailed reports at the time of the horrors the bomb inflicted, stoking a sense of trepidation about nuclear weapons along with a modicum of moral outrage over the decision to drop the bomb—dispositions that most Americans had not considered at the time. Hersey was not alone. Wilfred Burchett, writing for the London Daily Express, was the first journalist to provide an independent account of the suffering, misery, and death that engulfed Hiroshima after the bomb was dropped on the city. For Burchett, the cataclysm and horror he witnessed first-hand resembled a vision of hell that he aptly termed “the Atomic Plague.” He writes:

    Hiroshima does not look like a bombed city. It looks as if a monster steamroller had passed over it and squashed it out of existence. I write these facts as dispassionately as I can in the hope that they will act as a warning to the world. In this first testing ground of the atomic bomb I have seen the most terrible and frightening desolation in four years of war. It makes a blitzed Pacific island seem like an Eden. The damage is far greater than photographs can show.31

In the end, in spite of such accounts, fear and moral outrage did little to put an end to the nuclear arms race, but they did prompt a number of intellectuals to enter the public realm to denounce the bombing, the ongoing advance of a nuclear weapons program, and the ever-present threat of annihilation it posed.

A number of important questions emerge from the above analysis, but two issues in particular stand out for me in light of the role that academics and public intellectuals have played in addressing the bombing of Hiroshima, the emergence of nuclear weapons on a global scale, and the imminent threat of human annihilation posed by the continuing existence and potential use of such weapons. The first question focuses on what has been learned from the bombing of Hiroshima, and the second question concerns the disturbing issue of how violence, and hence Hiroshima itself, have become normalized in the collective American psyche.

In the aftermath of the bombing of Hiroshima, there was a major debate not just about the emergence of the atomic age and the moral, economic, scientific, military, and political forces that gave rise to it. There was also a heated debate about the ways in which the embrace of the atomic age altered the emerging nature of state power, gave rise to new forms of militarism, put American lives at risk, created environmental hazards, produced an emergent surveillance state, furthered the politics of state secrecy, and put into play a series of deadly diplomatic crises, reinforced by the logic of brinkmanship and a belief in the totality of war.32

Hiroshima not only unleashed immense misery, unimaginable suffering, and wanton death on Japanese civilians, it also gave rise to anti-democratic tendencies in the United States government that put the health, safety, and liberty of the American people at risk. Shrouded in secrecy, the government machinery of death that produced the bomb did everything possible to cover up not only the most grotesque effects of the bomb on the people of Hiroshima and Nagasaki but also the dangerous hazards it posed to the American people. Lifton and Mitchell argue convincingly that if the development of the bomb and its immediate effects were shrouded in concealment by the government, before long that concealment developed into a cover-up marked by government lies and the falsification of information.33 With respect to the horrors visited upon Hiroshima and Nagasaki, films taken by Japanese and American photographers were hidden for years from the American public for fear that they would create both a moral panic and a backlash against the funding for nuclear weapons.34 For example, the Atomic Energy Commission lied about the extent and danger of radiation fallout, going so far as to mount a campaign claiming that “fallout does not constitute a serious hazard to any living thing outside the test site.”35 This act of falsification took place in spite of the fact that thousands of military personnel were exposed to high levels of radiation within and outside of the test sites.

In addition, the Atomic Energy Commission, in conjunction with the Department of Defense, the Department of Veterans’ Affairs, the Central Intelligence Agency, and other government departments, engaged in a series of medical experiments designed to test the effects of different levels of radiation exposure on military personnel, medical patients, prisoners, and others in various sites. According to Lifton and Mitchell, these experiments took the shape of exposing people intentionally to “radiation releases or by placing military personnel at or near ground zero of bomb tests.”36 It gets worse. They also note that “from 1945 through 1947, bomb-grade plutonium injections were given to thirty-one patients [in a variety of hospitals and medical centers]” and that all of these “experiments were shrouded in secrecy and, when deemed necessary, in lies….the experiments were intended to show what type or amount of exposure would cause damage to normal, healthy people in a nuclear war.”37 Some of the long-lasting legacies of the birth of the atomic bomb also included the rise of plutonium dumps, environmental and health risks, the cult of expertise, and the subordination of the peaceful development of technology to a large-scale interest in using technology for the organized production of violence. Another notable development raised by many critics in the years following the launch of the atomic age was the rise of a government mired in secrecy, the repression of dissent, and the legitimation of a type of civic illiteracy in which Americans were told to leave “the gravest problems, military and social, completely in the hands of experts and political leaders who claimed to have them under control.”38

All of these anti-democratic tendencies unleashed by the atomic age came under scrutiny during the latter half of the twentieth century. The terror of a nuclear holocaust, an intense sense of alienation from the commanding institutions of power, and deep anxiety about the demise of the future spawned growing unrest, ideological dissent, and massive outbursts of resistance among students and intellectuals all over the globe from the sixties until the beginning of the twenty-first century, calling for the outlawing of militarism, nuclear production and stockpiling, and the nuclear propaganda machine. Literary writers extending from James Agee to Kurt Vonnegut, Jr. condemned the death-saturated machinery launched by the atomic age. Moreover, public intellectuals from Dwight Macdonald and Bertrand Russell to Helen Caldicott, Ronald Takaki, Noam Chomsky, and Howard Zinn fanned the flames of resistance to both the nuclear arms race and weapons as well as the development of nuclear technologies. Others such as George Monbiot, an environmental activist, have supported the nuclear industry but denounced the nuclear arms race. In doing so, he has argued that “The anti-nuclear movement … has misled the world about the impacts of radiation on human health [producing] claims … ungrounded in science, unsupportable when challenged and wildly wrong [and] have done other people, and ourselves, a terrible disservice.”39

In addition, in light of the nuclear crises that extend from the Three Mile Island accident in 1979, the Chernobyl disaster in 1986, and the more recent Fukushima nuclear disaster in 2011, a myriad of social movements along with a number of mass demonstrations against nuclear power have developed and taken place all over the world.40 While deep moral and political concerns over the legacy of Hiroshima seemed to be fading in the United States, the tragedy of 9/11 and the endlessly replayed images of the two planes crashing into the twin towers of the World Trade Center resurrected once again the frightening image of what Colonel Paul Tibbets, Jr., the Enola Gay’s pilot, referred to as “that awful cloud… boiling up, mushrooming, terrible and incredibly tall” after “Little Boy,” a 9,700-pound uranium bomb, was released over Hiroshima. This time, though, collective anxieties were focused not on the atomic bombing of Hiroshima and its implications for a nuclear Armageddon but on the fear of terrorists using a nuclear weapon to wreak havoc on Americans. But a decade later even that fear, however parochially framed, seems to have diminished, if not been entirely erased, even though it has produced an aggressive attack on civil liberties and given even more power to an egregious and dangerous surveillance state.

Atomic anxiety confronts a world in which nine states have nuclear weapons and a number of them, such as North Korea, Pakistan, and India, have threatened to use them. James McCluskey points out that “there are over 20,000 nuclear weapons in existence, sufficient destructive power to incinerate every human being on the planet three times over [and] there are more than 2000 held on hair trigger alert, already mounted on board their missiles and ready to be launched at a moment’s notice.”41 These weapons are far more powerful and deadly than the atomic bomb, and the possibility that they might be used, even inadvertently, is high. This threat becomes all the more real in light of the fact that the world has seen a history of miscommunications and technological malfunctions, suggesting both the fragility of such weapons and the dire stupidity of positions defending their safety and value as a nuclear deterrent.42 The 2014 report, Too Close for Comfort: Cases of Near Nuclear Use and Options for Policy, not only outlines a history of such near misses in great detail, it also makes terrifyingly clear that “the risk associated with nuclear weapons is high.”43 It is also worth noting that an enormous amount of money is wasted to maintain these weapons and missiles, develop more sophisticated nuclear weaponries, and invest in ever more weapons laboratories. McCluskey estimates world funding for such weapons at $1 trillion per decade while Arms Control Today reported in 2012 that yearly funding for U.S. nuclear weapons activity was $31 billion.44

In the United States, the mushroom cloud connected to Hiroshima is now connected to much larger forces of destruction, including a turn to instrumental reason over moral considerations, the normalization of violence in America, the militarization of local police forces, an attack on civil liberties, the rise of the surveillance state, a dangerous turn towards state secrecy under President Obama, the rise of the carceral state, and the elevation of war as a central organizing principle of society. Far from working to prevent a nuclear mishap or the expansion of the arms industry, the United States places high on the list of those nations that could trigger what Amy Goodman calls that “horrible moment when hubris, accident or inhumanity triggers the next nuclear attack.”45 Given the history of lies, deceptions, falsifications, and retreat into secrecy that characterizes the strangulating hold of the military-industrial-surveillance complex on the American government, it would be naïve to assume that the U.S. government can be trusted to act with good intentions when it comes to matters of domestic and foreign policy. State terrorism has increasingly become the DNA of American governance and politics and is evident in government cover-ups, corruption, and numerous acts of bad faith. Secrecy, lies, and deception have a long history in the United States, and the issue is not merely to uncover such instances of state deception but to connect the dots over time and to map the connections, for instance, between the actions of the NSA in the early aftermath of the attempts to cover up the inhumane destruction unleashed by the atomic bomb on Hiroshima and Nagasaki and the role the NSA and other intelligence agencies play today in distorting the truth about government policies while embracing an all-encompassing notion of surveillance and the squelching of civil liberties, privacy, and freedom.

Hiroshima symbolizes the fact that the United States commits unspeakable acts, making it easier to refuse to rely on politicians, academics, and alleged experts who refuse to support a politics of transparency and serve mostly to legitimate anti-democratic, if not totalitarian, policies. Questioning a monstrous war machine whose roots lie in Hiroshima is the first step in declaring nuclear weapons unacceptable ethically and politically. This suggests a further mode of inquiry that focuses on how the rise of the military-industrial complex contributes to the escalation of nuclear weapons and what we can learn by tracing its roots to the development and use of the atom bomb. Moreover, it raises questions about the role played by intellectuals, both in and out of the academy, in conspiring to build the bomb and hide its effects from the American people. These are only some of the questions that need to be made visible, interrogated, and pursued in a variety of sites and public forums.

One crucial issue today is what role intellectuals, along with matters of civic courage, engaged citizenship, and the educative nature of politics, might play as part of a sustained effort to resurrect the memory of Hiroshima as both a warning and a signpost for rethinking the nature of collective struggle, reclaiming the radical imagination, and producing a sustained politics aimed at abolishing nuclear weapons forever. One step would be to revisit the conditions that made Hiroshima and Nagasaki possible, to explore how militarism and a kind of technological fanaticism merged under the star of scientific rationality. Another step forward would be to make clear what the effects of such weapons are, to disclose the manufactured lie that such weapons make us safe. Indeed, this suggests the need for intellectuals, artists, and other cultural workers to use their skills, resources, and connections to develop massive educational campaigns.

Such campaigns not only make education, consciousness, and collective struggle the center of politics, but also work systematically to inform the public about the history of such weapons, the misery and suffering they have caused, and the ways they benefit the financial, government, and corporate elite who make huge amounts of money off the arms race, the promotion of nuclear deterrence, and the need for a permanent warfare state. Intellectuals today appear numbed by ever-developing disasters, statistics of suffering and death, the Hollywood disimagination machine with its investment in the celluloid Apocalypse to which only superheroes can respond, and a consumer culture that thrives on self-interest and deplores collective political and ethical responsibility.

There are no rationales or escapes from the responsibility of preventing mass destruction due to nuclear annihilation; the appeal to military necessity is no excuse for the indiscriminate bombing of civilians, whether in Hiroshima or Afghanistan. The sense of horror, fear, doubt, anxiety, and powerlessness that followed Hiroshima and Nagasaki up until the beginning of the 21st century seems to have faded in light of the Hollywood apocalypse machine, the mindlessness of celebrity and consumer cultures, the growing spectacles of violence, and a militarism that is now celebrated as one of the highest ideals of American life. In a society governed by militarism, consumerism, and neoliberal savagery, it has become more difficult to assume a position of moral, social, and political responsibility, to believe that politics matters, to imagine a future in which responding to the suffering of others is a central element of democratic life. When historical memory fades and people turn inward, remove themselves from politics, and embrace cynicism over educated hope, a culture of evil, suffering, and existential despair takes hold. Americans now live amid a culture of indifference sustained by an endless series of manufactured catastrophes that offer a source of entertainment, sensation, and instant pleasure.

We live in a neoliberal culture that subordinates human needs to the demand for unchecked profits, privileges exchange values over the public good, and embraces commerce as the only viable model of social relations with which to shape the entirety of social life. Under such circumstances, violence becomes a form of entertainment rather than a source of alarm, and individuals no longer question society and become incapable of translating private troubles into larger public considerations. In the age following the use of the atom bomb on civilians, talk about evil, militarism, and the end of the world once stirred public debate and diverse resistance movements; now it promotes a culture of fear, moral panics, and a retreat into the black hole of the disimagination machine. The good news is that neoliberalism now makes clear that it cannot provide a vision to sustain society and works largely to destroy it. It is a metaphor for the atom bomb, a social, political, and moral embodiment of global destruction that needs to be stopped before it is too late. The future will look much brighter without the glow of atomic energy, and the legacy of death and destruction that extends from Hiroshima to Fukushima makes clear that no one can be a bystander if democracy is to survive.

Notes
1. This reference refers to a collection of interviews with Michel Foucault originally published by Semiotext(e). Michel Foucault, “What our present is?” Foucault Live: Collected Interviews, 1961–1984, ed. Sylvere Lotringer, trans. Lysa Hochroth and John Johnston (New York: Semiotext(e), 1989 and 1996), 407–415.
    Back to the essay

2. Zygmunt Bauman and Leonidas Donskis, Moral Blindness: The Loss of Sensitivity in Liquid Modernity (Cambridge, UK: Polity Press, 2013), p. 33.
    Back to the essay

    3. Daniel Sandstrom Interviews Philip Roth, “My Life as a Writer,” New York Times (March 2, 2014). Online: http://www.nytimes.com/2014/03/16/books/review/my-life-as-a-writer.html
    Back to the essay

    4. Of course, the Occupy Movement in the United States and the Quebec student movement are exceptions to this trend. See, for instance, David Graeber, The Democracy Project: A History, A Crisis, A Movement, (New York, NY,: The Random House Publishing Group, 2013) and Henry A. Giroux, Neoliberalism’s War Against Higher Education (Chicago: Haymarket, 2014).
    Back to the essay

5. Edward Glover, cited in Robert Jay Lifton and Greg Mitchell, Hiroshima in America (New York, N.Y.: Avon Books, 1995).
    Back to the essay

    6. Ibid., Lifton and Mitchell, p. 345.
    Back to the essay

7. Jennifer Rosenberg, “Hiroshima and Nagasaki (Part 2),” About.com – 20th Century History (March 28, 201). Online: http://history1900s.about.com/od/worldwarii/a/hiroshima_2.htm. A more powerful atom bomb was dropped on Nagasaki on August 9, 1945, and by the end of the year an estimated 70,000 had been killed. For the history of the making of the bomb, see the monumental Richard Rhodes, The Making of the Atomic Bomb, Anv Rep edition (New York: Simon & Schuster, 2012).
    Back to the essay

    8. The term “technological fanaticism” comes from Michael Sherry who suggested that it produced an increased form of brutality. Cited in Howard Zinn, The Bomb. (New York. N.Y.: City Lights, 2010), pp. 54-55.
    Back to the essay

    9. Oh Jung, “Hiroshima and Nagasaki: The Decision to Drop the Bomb,” Michigan Journal of History Vol 1. No. 2 (Winter 2002). Online:
    http://michiganjournalhistory.files.wordpress.com/2014/02/oh_jung.pdf.
    Back to the essay

10. See, in particular, Ronald Takaki, Hiroshima: Why America Dropped the Atomic Bomb (Boston: Back Bay Books, 1996).
    Back to the essay

    11. Peter Bacon Hales, Outside the Gates of Eden: The Dream Of America From Hiroshima To Now. (Chicago. IL.: University of Chicago Press, 2014), p. 17.
    Back to the essay

    12. Paul Ham, Hiroshima Nagasaki: The Real Story of the Atomic Bombings and Their Aftermath (New York: Doubleday, 2011).
    Back to the essay

13. Kenzaburo Oe, Hiroshima Notes (New York: Grove Press, 1965), p. 114.
    Back to the essay

    14. Ibid., Oe, Hiroshima Notes, p. 117.
    Back to the essay

15. Robert Jay Lifton and Greg Mitchell, Hiroshima in America (New York, N.Y.: Avon Books, 1995), pp. 314-315, 328.
    Back to the essay

    16. Ibid., Oh Jung, “Hiroshima and Nagasaki: The Decision to Drop the Bomb.”
    Back to the essay

    17. Robert Jay Lifton, “American Apocalypse,” The Nation (December 22, 2003), p. 12.
    Back to the essay

18. For an interesting analysis of how the bomb was defended by the New York Times and a number of high ranking politicians, especially after John Hersey’s Hiroshima appeared in The New Yorker, see Steve Rothman, “The Publication of ‘Hiroshima’ in The New Yorker,” Herseyhiroshima.com (January 8, 1997). Online: http://www.herseyhiroshima.com/hiro.php
    Back to the essay

    19. Wilson cited in Lifton and Mitchell, Hiroshima In America, p. 309.
    Back to the essay

    20. Ibid., Peter Bacon Hales, Outside The Gates of Eden: The Dream Of America From Hiroshima To Now, p. 8.
    Back to the essay

    21. Ibid., Zinn, The Bomb, p. 26.
    Back to the essay

    22. Ibid., Robert Jay Lifton and Greg Mitchell, Hiroshima In America.
    Back to the essay

23. For a more recent articulation of this argument, see Ward Wilson, Five Myths About Nuclear Weapons (New York: Mariner Books, 2013).
    Back to the essay

    24. Ronald Takaki, Hiroshima: Why America Dropped the Atomic Bomb, (Boston: Back Bay Books, 1996), p. 39
    Back to the essay

    25. Ibid, Zinn, The Bomb, p. 45.
    Back to the essay

26. See, for example, Ibid., Hasegawa; Gar Alperovitz, Atomic Diplomacy: Hiroshima and Potsdam: The Use of the Atomic Bomb and the American Confrontation with Soviet Power (London: Pluto Press, 1994); and also Gar Alperovitz, The Decision to Use the Atomic Bomb (New York: Vintage, 1996). Ibid., Ham.
    Back to the essay

    27. John Hersey, Hiroshima (New York: Alfred A. Knopf, 1946), p. 68.
    Back to the essay

    28. Giovanna Borradori, ed, “Autoimmunity: Real and Symbolic Suicides–a dialogue with Jacques Derrida,” in Philosophy in a Time of Terror: Dialogues with Jurgen Habermas and Jacques Derrida (Chicago: University of Chicago Press, 2004), pp. 85-136.
    Back to the essay

29. Mary McCarthy, “The Hiroshima ‘New Yorker,’” The New Yorker (November 1946).
    http://americainclass.org/wp-content/uploads/2013/03/mccarthy_onhiroshima.pdf
    Back to the essay

    30. Ibid., Ham, Hiroshima Nagasaki, p. 469.
    Back to the essay

    31. George Burchett & Nick Shimmin, eds. Memoirs of a Rebel Journalist: The Autobiography of Wilfred Burchett, (UNSW Press, Sydney, 2005), p.229.
    Back to the essay

    32. For an informative analysis of the deep state and a politics driven by corporate power, see Bill Blunden, “The Zero-Sum Game of Perpetual War,” Counterpunch (September 2, 2014). Online: http://www.counterpunch.org/2014/09/02/the-zero-sum-game-of-perpetual-war/
    Back to the essay

    33. The following section relies on the work of both Lifton and Mitchell, Howard Zinn, and M. Susan Lindee.
    Back to the essay

    34. Greg Mitchell, “The Great Hiroshima Cover-up,” The Nation, (August 3, 2011). Online:
    http://www.thenation.com/blog/162543/great-hiroshima-cover#. Also see, Greg Mitchell, “Part 1: Atomic Devastation Hidden For Decades,” WhoWhatWhy (March 26, 2014). Online: http://whowhatwhy.com/2014/03/26/atomic-devastation-hidden-decades; Greg Mitchell, “Part 2: How They Hid the Worst Horrors of Hiroshima,” WhoWhatWhy, (March 28, 2014). Online:
    http://whowhatwhy.com/2014/03/28/part-2-how-they-hid-the-worst-horrors-of-hiroshima/; Greg Mitchell, “Part 3: Death and Suffering, in Living Color,” WhoWhatWhy (March 31, 2014). Online: http://whowhatwhy.com/2014/03/31/death-suffering-living-color/
    Back to the essay

    35. Ibid., Robert Jay Lifton and Greg Mitchell, Hiroshima In America, p. 321.
    Back to the essay

    36. Ibid., Robert Jay Lifton and Greg Mitchell, Hiroshima In America, p. 322.
    Back to the essay

    37. Ibid. Robert Jay Lifton and Greg Mitchell, Hiroshima In America, p. 322-323.
    Back to the essay

    38. Ibid. Robert Jay Lifton and Greg Mitchell, Hiroshima In America, p. 336.
    Back to the essay

    39. George Monbiot, “Evidence Meltdown,” The Guardian (April 5, 2011). Online: http://www.monbiot.com/2011/04/04/evidence-meltdown/
    Back to the essay

40. Patrick Allitt, A Climate of Crisis: America in the Age of Environmentalism (New York: Penguin, 2015); Horace Herring, From Energy Dreams to Nuclear Nightmares: Lessons from the Anti-nuclear Power Movement in the 1970s (Chipping Norton, UK: Jon Carpenter Publishing, 2006); Alain Touraine, Anti-Nuclear Protest: The Opposition to Nuclear Energy in France (Cambridge, UK: Cambridge University Press, 1983); Stephen Croall, The Anti-Nuclear Handbook (New York: Random House, 1979). On the decade that enveloped the anti-nuclear moment in a series of crises, see Philip Jenkins, Decade of Nightmares: The End of the Sixties and the Making of Eighties America (New York: Oxford University Press, 2008).
    Back to the essay

    41. James McCluskey, “Nuclear Crisis: Can the Sane Prevail in Time?” Truthout (June 10, 2014). Online: http://www.truth-out.org/opinion/item/24273
    Back to the essay

    42. For a list of the crises, near misses, and nuclear warmongering that have characterized United States foreign policy in the last few decades, see Noam Chomsky, "How Many Minutes to Midnight? Hiroshima Day 2014," Truthout (August 5, 2014). Online: http://www.truth-out.org/news/item/25388-how-many-minutes-to-midnight-hiroshima-day-2014
    Back to the essay

    43. Patricia Lewis, Heather Williams, Benoît Pelopidas and Sasan Aghlani, Too Close for Comfort: Cases of Near Nuclear Use and Options for Policy (London: Chatham House, 2014). Online: http://www.chathamhouse.org/sites/files/chathamhouse/home/chatham/public_html/sites/default/files/20140428TooCloseforComfortNuclearUseLewisWilliamsPelopidasAghlani.pdf
    Back to the essay

    44. Jim McCluskey, “Nuclear Deterrence: The Lie to End All Lies,” Truthout (Oct 29, 2012). Online: http://www.truth-out.org/opinion/item/12381
    Back to the essay

    45. Amy Goodman, "Hiroshima and Nagasaki, 69 Years Later," TruthDig (August 6, 2014). Online: http://www.truthdig.com/report/item/hiroshima_and_nagasaki_69_years_later_20140806
    Back to the essay

  • The Eversion of the Digital Humanities

    The Eversion of the Digital Humanities

    by Brian Lennon

    on The Emergence of the Digital Humanities by Steven E. Jones

    1

    Steven E. Jones begins his Introduction to The Emergence of the Digital Humanities (Routledge, 2014) with an anecdote concerning a speaking engagement at the Illinois Institute of Technology in Chicago. “[M]y hosts from the Humanities department,” Jones tells us,

    had also arranged for me to drop in to see the fabrication and rapid-prototyping lab, the Idea Shop at the University Technology Park. In one empty room we looked into, with schematic drawings on the walls, a large tabletop machine jumped to life and began whirring, as an arm with a router moved into position. A minute later, a student emerged from an adjacent room and adjusted something on the keyboard and monitor attached by an extension arm to the frame for the router, then examined an intricately milled block of wood on the table. Next door, someone was demonstrating finely machined parts in various materials, but mostly plastic, wheels within bearings, for example, hot off the 3D printer….

    What exactly, again, was my interest as a humanist in taking this tour, one of my hosts politely asked?1

    It is left almost entirely to more or less clear implication, here, that Jones’s humanities department hosts had arranged the expedition at his request, and mainly or even only to oblige a visitor’s unusual curiosity, which we are encouraged to believe his hosts (if “politely”) found mystifying. Any reader of this book must ask herself, first, if she believes this can really have occurred as reported: and if the answer to that question is yes, if such a genuinely unlikely and unusual scenario — the presumably full-time, salaried employees of an Institute of Technology left baffled by a visitor’s remarkable curiosity about their employer’s very raison d’être — warrants any generalization at all. For that is how Jones proceeds: by generalization, first of all from a strained and improbably dramatic attempt at defamiliarization, in the apparent confidence that this anecdote illuminating the spirit of the digital humanities will charm — whom, exactly?

    It must be said that Jones’s history of “digital humanities” is refreshingly direct and initially, at least, free of obfuscation, linking the emergence of what it denotes to events in roughly the decade preceding the book’s publication, though his reading of those events is tendentious. It was the “chastened” retrenchment after the dot-com bubble in 2000, Jones suggests (rather, just for example, than the bubble’s continued inflation by other means) that produced the modesty of companies like our beloved Facebook and Twitter, along with their modest social networking platform-products, as well as the profound modesty of Google Inc. initiatives like Google Books (“a development of particular interest to humanists,” we are told2) and Google Maps. Jones is clearer-headed when it comes to the disciplinary history of “digital humanities” as a rebaptism of humanities computing and thus — though he doesn’t put it this way — a catachrestic asseveration of traditional (imperial-nationalist) philology like its predecessor:

    It’s my premise that what sets DH apart from other forms of media studies, say, or other approaches to the cultural theory of computing, ultimately comes through its roots in (often text-based) humanities computing, which always had a kind of mixed-reality focus on physical artifacts and archives.3

    Jones is also clear-headed on the usage history of "digital humanities" as a phrase in the English language, linking it to moments of consolidation marked by Blackwell’s Companion to Digital Humanities, the establishment of the National Endowment for the Humanities Office for the Digital Humanities, and higher-education journalism covering the annual Modern Language Association of America conventions. It is perhaps this sensitivity to "digital humanities" as a phrase whose roots lie not in original scholarship or cultural criticism itself (as was still the case with "deconstruction" or "postmodernism," even at their most shopworn) but in the dependent, even parasitic domains of reference publishing, grant-making, and journalism that leads Jones to declare "digital humanities" a "fork" of humanities computing, rather than a Kuhnian paradigm shift marking otherwise insoluble structural conflict in an intellectual discipline.

    At least at first. Having suggested it, Jones then discards the metaphor drawn from the tree structures of software version control, turning to “another set of metaphors” describing the digital humanities as having emerged not “out of the primordial soup” but “into the spotlight” (Jones, 5). We are left to guess at the provenance of this second metaphor, but its purpose is clear: to construe the digital humanities, both phenomenally and phenomenologically, as the product of a “shift in focus, driven […] by a new set of contexts, generating attention to a range of new activities” (5).

    Change; shift; new, new, new. Not a branch or a fork, not even a trunk: we’re now in the ecoverse of history and historical time, in its collision with the present. The appearance and circulation of the English-language phrase "digital humanities" can be documented — that is one of the things that professors of English like Jones do especially well, when they care to. But "changes in the culture," much more broadly, within only the last ten years or so? No scholar in any discipline is particularly well trained, well positioned, or even well suited to diagnosing those; and scholars in English studies won’t be at the top of anyone’s list. Indeed, Jones very quickly appeals to "author William Gibson" for help, settling on the emergence of the digital humanities as a response to what Gibson called "the eversion of cyberspace," in its ostensibly post-panopticist colonization of the physical world.4 It makes for a rather inarticulate and self-deflating statement of argument, in which on its first appearance eversion, ambiguously, appears to denote the response as much as its condition or object:

    My thesis is simple: I think that the cultural response to changes in technology, the eversion, provides an essential context for understanding the emergence of DH as a new field of study in the new millennium.5

    Jones offers weak support for the grandiose claim that "we can roughly date the watershed moment when the preponderant collective perception changed to 2004–2008" (21). Second Life "peaked," we are told, while World of Warcraft "was taking off"; Nintendo introduced the Wii; then Facebook "came into its own," and was joined by Twitter and Foursquare, then Apple’s iPhone. Even then (and setting aside the question of whether such benchmarking is acceptable evidence), for the most part Jones’s argument, such as it is, is that something is happening because we are talking about something happening.

    But who are we? Jones’s is the typical deference of the scholar to the creative artist, unwilling to challenge the latter’s utter dependence on meme engineering, at least where someone like Gibson is concerned; and Jones’s subsequent turn to the work of a scholar like N. Katherine Hayles on the history of cybernetics comes too late to amend the impression that the order of things here is marked first by gadgets, memes, and conversations about gadgets and memes, and only subsequently by ideas and arguments about ideas. The generally unflattering company among whom Hayles is placed (Clay Shirky, Nathan Jurgenson) does little to move us out of the shallows, and Jones’s profoundly limited range of literary reference, even within a profoundly narrowed frame — it’s Gibson, Gibson, Gibson all the time, with the usual cameos by Bruce Sterling and Neal Stephenson — doesn’t help either.

    Jones does have one problem with the digital humanities: it ignores games. “My own interest in games met with resistance from some anonymous peer reviewers for the program for the DH 2013 conference, for example,” he tells us (33). “[T]he digital humanities, at least in some quarters, has been somewhat slow to embrace the study of games” (59). “The digital humanities could do worse than look to games” (36). And so on: there is genuine resentment here.

    But nobody wants to give a hater a slice of the pie, and a Roman peace mandates that such resentment be sublated if it is to be, as we say, taken seriously. And so in a magical resolution of that tension, the digital humanities turns out to be constituted by what it accidentally ignores or actively rejects, in this case — a solution that sweeps antagonism under the rug as we do in any other proper family. “[C]omputer-based video games embody procedures and structures that speak to the fundamental concerns of the digital humanities” (33). “Contemporary video games offer vital examples of digital humanities in practice” (59). If gaming “sounds like what I’ve been describing as the agenda of the digital humanities, it’s no accident” (144).

    Some will applaud Jones’s niceness on this count. It may strike others as desperately friendly, a lingering under a big tent as provisional as any other tent, someday to be replaced by a building, if not by nothing. Few of us will deny recognition to Second Life, World of Warcraft, Wii, Facebook, Twitter, etc. as cultural presences, at least for now. But Jones’s book is also marked by slighter and less sensibly chosen benchmarks, less sensibly chosen because Jones’s treatment of them, in a book whose ambition is to preach to the choir, simply imputes their cultural presence. Such brute force argument drives the pathos that Jones surely feels, as a scholar — in the recognition that among modern institutions, it is only scholarship and the law that preserve any memory at all — into a kind of melancholic unconscious, from whence his objects return to embarrass him. “[A]s I write this,” we read, “QR codes show no signs yet of fading away” (41). Quod erat demonstrandum.

    And it is just there, in such a melancholic unconscious, that the triumphalism of the book’s title, and the “emergence of the digital humanities” that it purports to mark, claim, or force into recognition, straightforwardly gives itself away. For the digital humanities will pass away, and rather than being absorbed into the current order of things, as digital humanities enthusiasts like to believe happened to “high theory” (it didn’t happen), the digital humanities seems more likely, at this point, to end as a blank anachronism, overwritten by the next conjuncture in line with its own critical mass of prognostications.

    2

    To be sure, who could deny the fact of significant “changes in the culture” since 2000, in the United States at least, and at regular intervals: 2001, 2008, 2013…? Warfare — military in character, but when that won’t do, economic; of any interval, but especially when prolonged and deliberately open-ended; of any intensity, but especially when flagrantly extrajudicial and opportunistically, indeed sadistically asymmetrical — will do that to you. No one who sets out to historicize the historical present can afford to ignore the facts of present history, at the very least — but the fact is that Jones finds such facts unworthy of comment, and in that sense, for all its pretense to worldliness, The Emergence of the Digital Humanities is an entirely typical product of the so-called ivory tower, wherein arcane and plain speech alike are crafted to euphemize and thus redirect and defuse the conflicts of the university with other social institutions, especially those other institutions who command the university to do this or do that. To take the ambiguity of Jones’s thesis statement (as quoted above) at its word: what if the cultural response that Jones asks us to imagine, here, is indeed and itself the “eversion” of the digital humanities, in one of the metaphorical senses he doesn’t quite consider: an autotomy or self-amputation that, as McLuhan so enjoyed suggesting in so many different ways, serves to deflect the fact of the world as a whole?

    There are few moments of outright ignorance in The Emergence of the Digital Humanities — how could there be, in the security of such a narrow channel?6 Still, pace Jones’s basic assumption here (it is not quite an argument), we might understand the emergence of the digital humanities as the emergence of a conversation that is not about something — cultural change, etc. — as much as it is an attempt to avoid conversing about something: to avoid discussing such cultural change in its most salient and obvious flesh-and-concrete manifestations. “DH is, of course, a socially constructed phenomenon,” Jones tells us (7) — yet “the social,” here, is limited to what Jones himself selects, and selectively indeed. “This is not a question of technological determinism,” he insists. “It’s a matter of recognizing that DH emerged, not in isolation, but as part of larger changes in the culture at large and that culture’s technological infrastructure” (8). Yet the largeness of those larger changes is smaller than any truly reasonable reader, reading any history of the past decade, might have reason to expect. How pleasant that such historical change was “intertwined with culture, creativity, and commerce” (8) — not brutality, bootlicking, and bank fraud. Not even the modest and rather opportunistic gloom of Gibson’s 2010 New York Times op-ed entitled “Google’s Earth” finds its way into Jones’s discourse, despite the extended treatment that Gibson’s “eversion” gets here.

    From our most ostensibly traditional scholarly colleagues, toiling away in their genuine and genuinely book-dusty modesty, we don’t expect much respect for the present moment (which is why they often surprise us). But The Emergence of the Digital Humanities is, at least in ambition, a book about cultural change over the last decade. And such historiographic elision is substantive — enough so to warrant impatient response. While one might not want to say that nothing good can have emerged from the cultural change of the period in question, it would be infantile to deny that conditions have been unpropitious in the extreme, possibly as unpropitious as they have ever been, in U.S. postwar history — and that claims for the value of what emerges into institutionality and institutionalization, under such conditions, deserve extra care and, indeed, defense in advance, if one wants not to invite a reasonably caustic skepticism.

    When Jones does engage in such defense, it is weakly argued. To construe the emergence of the digital humanities as non-meaninglessly concurrent with the emergence of yet another wave of mass educational automation (in the MOOC hype that crested in 2013), for example, is wrong not because Jones can demonstrate that their concurrence is the concurrence of two entirely segregated genealogies — one rooted in Silicon Valley ideology and product marketing, say, and one utterly and completely uncaused and untouched by it — but because to observe their concurrence is “particularly galling” to many self-identified DH practitioners (11). Well, excuse me for galling you! “DH practitioners I know,” Jones informs us, “are well aware of [the] complications and complicities” of emergence in an age of precarious labor, “and they’re often busy answering, complicating, and resisting such opportunistic and simplistic views” (10). Argumentative non sequitur aside, that sounds like a lot of work undertaken in self-defense — more than anyone really ought to have to do, if they’re near to the right side of history. Finally, “those outside DH,” Jones opines in an attempt at counter-critique, “often underestimate the theoretical sophistication of many in computing,” who “know better than many of their humanist critics that their science is provisional and contingent” (10): a statement that will only earn Jones super-demerits from those of such humanist critics — they are more numerous than the likes of Jones ever seem to suspect — who came to the humanities with scientific and/or technical aptitudes, sometimes with extensive educational and/or professional training and experience, and whose “sometimes world-weary and condescending skepticism” (10) is sometimes very well-informed and well-justified indeed, and certain to outlive Jones’s winded jabs at it.

    Jones is especially clumsy in confronting the charge that the digital humanities is marked by a forgetting or evasion of the commitment to cultural criticism foregrounded by other, older and now explicitly competing formations, like so-called new media studies. Citing the suggestion by “media scholar Nick Montfort” that “work in the digital humanities is usually considered to be the digitization and analysis of pre-digital cultural artifacts, not the investigation of contemporary computational media,” Jones remarks that “Montfort’s own work […] seems to me to belie the distinction,”7 as if Montfort — or anyone making such a statement — were simply deluded about his own work, or about his experience of a social economy of intellectual attention under identifiably specific social and historical conditions, or else merely expressing pain at being excluded from a social space to which he desired admission, rather than objecting on principle to a secessionist act of imagination.8

    3

    Jones tells us that he doesn’t “mean to gloss over the uneven distribution of [network] technologies around the world, or the serious social and political problems associated with manufacturing and discarding the devices and maintaining the server farms and cell towers on which the network depends” — but he goes ahead and does it anyway, and without apology or evident regret. “[I]t’s not my topic in this book,” we are told, “and I’ve deliberately restricted my focus to the already-networked world” (3). The message is clear: this is a book for readers who will accept such circumscription, in what they read and contemplate. Perhaps this is what marks the emergence of the digital humanities, in the re-emergence of license for restrictive intellectual ambition and a generally restrictive purview: a bracketing of the world that was increasingly discredited, and discredited with increasing ferocity, just by the way, in the academic humanities in the course of the three decades preceding the first Silicon Valley bubble. Jones suggests that “it can be too easy to assume a qualitative hierarchical difference in the impact of networked technology, too easy to extend the deeper biases of privilege into binary theories of the global ‘digital divide’” (4), and one wonders what authority to grant to such a pronouncement when articulated by someone who admits he is not interested, at least in this book, in thinking about how an — how any — other half lives. It’s the latter, not the former, that is the easy choice here. (Against a single, entirely inconsequential squib in Computer Business Review entitled “Report: Global Digital Divide Getting Worse,” an almost obnoxiously perfunctory footnote pits “a United Nations Telecoms Agency report” from 2012. This is not scholarship.)

    Thus it is that, read closely, the demand for finitude in the one capacity in which we are non-mortal — in thought and intellectual ambition — and the more or less cheerful imagination of an implied reader satisfied by such finitude, become passive microaggressions aimed at another mode of the production of knowledge, whose expansive focus on a theoretical totality of social antagonism (what Jones calls “hierarchical difference”) and justice (what he calls “binary theories”) makes the author of The Emergence of the Digital Humanities uncomfortable, at least on its pages.

    That’s fine, of course. No: no, it’s not. What I mean to say is that it’s unfair to write as if the author of The Emergence of the Digital Humanities alone bears responsibility for this particular, certainly overdetermined state of affairs. He doesn’t — how could he? But he’s getting no help, either, from most of those who will be more or less pleased by the title of his book, and by its argument, such as it is: because they want to believe they have “emerged” along with it, and with that tension resolved, its discomforts relieved. Jones’s book doesn’t seriously challenge that desire, its (few) hedges and provisos notwithstanding. If that desire is more anxious now than ever, as digital humanities enthusiasts find themselves scrutinized from all sides, it is with good reason.
    _____

    Brian Lennon is Associate Professor of English and Comparative Literature at Pennsylvania State University and the author of In Babel’s Shadow: Multilingual Literatures, Monolingual States (University of Minnesota Press, 2010).
    _____

    notes:
    1. Jones, 1.
    Back to the essay

    2. Jones, 4. “Interest” is presumed to be affirmative, here, marking one elision of the range of humanistic critical and scholarly attitudes toward Google generally and the Google Books project in particular. And of the unequivocally less affirmative “interest” of creative writers as represented by the Authors Guild, just for example, Jones has nothing to say: another elision.
    Back to the essay

    3. Jones, 13.
    Back to the essay

    4. See Gibson.
    Back to the essay

    5. Jones, 5.
    Back to the essay

    6. As eager as any other digital humanities enthusiast to accept Franco Moretti’s legitimation of DH, but apparently incurious about the intellectual formation, career and body of work that led such a big fish to such a small pond, Jones opines that Moretti’s “call for a distant reading” stands “opposed to the close reading that has been central to literary studies since the late nineteenth century” (Jones, 62). “Late nineteenth century” when exactly, and where (and how, and why)? one wonders. But to judge by what Jones sees fit to say by way of explanation — that is, nothing at all — this is mere hearsay.
    Back to the essay

    7. Jones, 5. See also Montfort.
    Back to the essay

    8. As further evidence that Montfort’s statement is a mischaracterization or expresses a misunderstanding, Jones suggests the fact that “[t]he Electronic Literature Organization itself, an important center of gravity for the study of computational media in which Montfort has been instrumental, was for a time housed at the Maryland Institute for Technology in the Humanities (MITH), a preeminent DH center where Matthew Kirschenbaum served as faculty advisor” (Jones, 5–6). The non sequiturs continue: “digital humanities” includes the study of computing and media because “self-identified practitioners doing DH” study computing and media (Jones, 6); the study of computing and media is also “digital humanities” because the study of computing and digital media might be performed at institutions like MITH or George Mason University’s Roy Rosenzweig Center for History and New Media, which are “digital humanities centers” (although the phrase “digital humanities” appears nowhere in their names); “digital humanities” also adequately describes work in “media archaeology” or “media history,” because such work has “continued to influence DH” (Jones, 6); new media studies is a component of the digital humanities because some scholars suggest it is so, and others cannot be heard to object, at least after one has placed one’s fingers in one’s ears; and so on.
    Back to the essay

    (feature image: “Bandeau – Manifeste des Digital Humanities,” uncredited; originally posted on flickr.)

  • The Lenses of Failure

    The Lenses of Failure

    The Art of Failure

    by Nathan Altice

    On From Software’s Dark Souls II and Jesper Juul’s The Art of Failure

    ~

    I am speaking to a cat named Sweet Shalquoir. She lounges on a desk in a diminutive house near the center of Majula, a coastal settlement that harbors a small band of itinerant merchants, tradespeople, and mystics. Among Shalquoir’s wares is the Silvercat ring, whose circlet resembles a leaping, blue-eyed cat.

    ‘You’ve seen that gaping hole over there? Well, there’s nasty little vermin down there,’ Shalquoir says, observing my window shopping. ‘Although who you seek is even further below.’ She laughs. She knows her costly ring grants its wearer a cat-like affinity for lengthy drops. I check my inventory. Having just arrived in Majula, I have few souls on hand.

    I turn from Shalquoir and exit the house ringless. True to her word, a yawning chasm opens before me, its perimeter edged in slabbed stonework and crumbling statues but otherwise unmarked and unguarded. One could easily fall in while sprinting from house to house in search of Majula’s residents. Wary of an accidental fall, I nudge toward its edge.

    The pit has a mossy patina, as if it was once a well for giants that now lies parched after drinking centuries of Majula’s sun. Its surface is smooth save for a few distant torches sawing at the dark and several crossbeams that bisect its diameter at uneven intervals. Their configuration forms a makeshift spiral ladder. Corpses are slung across the beams like macabre dolls, warning wanderers fool enough to chase after nasty little vermin. But atop the first corpse gleams a pinprick of ethereal light, both a beacon to guide the first lengthy drop and a promise of immediate reward if one survives.

    Silvercat ring be damned, I think I can make it.

    I position myself parallel to the first crossbeam, eyes fixed on that glimmering point. I jump.

    The Jump

    [Dark Souls II screenshots source: ItsBlueLizardJello via YouTube]

    For a breathless second, I plunge toward the beam. My aim is true—but my body is weak. I collapse, sprawled atop the lashed wooden planks, inches from my coveted jewel. I evaporate into a green vapor as two words appear in the screen’s lower half: ‘YOU DIED.’

    Decisions such as these abound in Dark Souls II, the latest entry in developer From Software’s cult-to-crossover-hit series of games bearing the Souls moniker. The first, Demon’s Souls, debuted on the PlayStation 3 in 2009, attracting players with its understated lore, intricate level design, and relentless difficulty. Spiritual successor Dark Souls followed in 2011 and its direct sequel Dark Souls II released earlier this year.

    Each game adheres to standard medieval fantasy tropes: there are spellcasters, armor-clad knights, parapet-trimmed castles, and a variety of fire-spewing dragons. You select one out of several archetypal character classes (e.g., Cleric, Sorcerer, Swordsman), customize a few appearance options, then explore and fight through a series of interconnected, yet typically non-linear, locations populated by creatures of escalating difficulty. What distinguishes these games from the hundreds of other fantasy games those initial conditions could describe are their melancholy tone and their general disregard for player hand-holding. Your hero begins as little more than a voiceless, fragile husk with minimal direction and fewer resources. Merely surviving takes precedence over rescuing princesses or looting dungeons. The Souls games similarly reveal little about their settings or systems, driving some players to declare them among the worst games ever made while catalyzing others to revisit the game’s environs for hundreds of hours. Vibrant communities have emerged around the Souls series, partly in an effort to document the mechanics From Software purposefully obscures and partly to construct a coherent logic and lore from the scraps and minutiae the game provides.

    Dark Souls II Settings

    Unlike most action games, every encounter in Dark Souls II is potentially deadly, from the lowliest grunts to the largest boss creatures. To further raise the stakes, death has consequences. Slaying foes grants souls, the titular items that fuel both trade and character progression. Spending souls increases your survivability, whether you invest them directly in your character stats (e.g. Vitality) or a more powerful shield. However, dying forfeits any souls you are currently carrying and resets your progress to the last bonfire (i.e., checkpoint) you rested beside. The catch is that dying or resting resets any creatures you have previously slain, giving your quest a moribund, Sisyphean repetition that grinds impatient players to a halt. And once slain, you have one chance to recover your lost souls. A glowing green aura marks the site of your previous bereavement. Touch that mark before you die again and you regain your cache; fail to do so and you lose it forever. You will often fail to do so.

    What many Souls reviewers find refreshing about the game’s difficulty is actually a more forgiving variation of the death mechanics found in early ASCII-based games like Rogue (1980), Hack (1985), and NetHack (1987), wherein ‘permadeath’—i.e., death meant starting the game anew—was a central conceit. And those games were almost direct ‘ports’ of tabletop roleplaying progenitors like Dungeons & Dragons, whose early versions were skewed more toward the gritty realism of pulp literature than the godlike power fantasies of modern roleplaying games. A successful career in D&D meant accumulating enough treasure to eventually retire from dungeon-delving, so one could hire other hapless retainers to loot on your behalf. Death was frequent and expected because dungeons were dangerous places. And unless one’s Dungeon Master was particularly lenient, death was final. A fatal mistake meant re-rolling your character. In this sense, the Souls games stand apart from their videogame peers because of the conservatism of their design. Though countless games ape D&D’s generic fantasy setting and stat-based progress model, few adopt the existential dread of its early forms.

    Dark Souls II’s adherence to opaque systems and traditional difficulty has alienated players unaccustomed to the demands of earlier gaming models. For those repeatedly stymied by the game’s frustrations, several questions arise: Why put forth the effort in a game that feels so antagonistic toward its players? Is there any reward worth the frequent, unforgiving failure? Aren’t games supposed to be fun—and is failing fun?

    YOU DIED

    Games scholar Jesper Juul raises similar questions in The Art of Failure, the second book in MIT’s new Playful Thinking series. His central thesis is that games present players a ‘paradox of failure’: we do not like to fail, yet games perpetually make us do so; weirder still, we seek out games voluntarily, even though the only victory they offer is over a failure that they themselves create. Despite games’ reputation as frivolous fun, they can humiliate and infuriate us. Real emotions are at stake. And, as Juul argues, ‘the paradox of failure is unique in that when you fail in a game, it really means that you were in some way inadequate’ (7). So when my character plunges down the pit in Majula, the developers do not tell me ‘Your character died,’ even though I have named that character. Instead the games remind us, ‘YOU DIED.’ YOU, the player, the one holding the Xbox 360 controller.

    The strength of Juul’s argument is that he does not rely on a single discipline but instead approaches failure via four related ‘lenses’: philosophy, psychology, game design, and fiction (30). Each lens has its own brief chapter and accompanying game examples, and throughout Juul interjects anecdotes from his personal play experience alongside lessons he’s learned co-designing a number of experimental video games. The breadth of examples is wide, ranging from big-budget games like Uncharted 2, Meteos, and Skate 2 to more obscure works like Flywrench, September 12, and Super Real Tennis.

    Juul’s first lens (chapter 2) links up his paradox of failure to a longstanding philosophical quandary known as the ‘paradox of painful art.’ Like video games, art tends to elicit painful emotions from viewers, whether a tragic stage play or a disturbing novel, yet contrary to the notion that we seek to avoid pain, people regularly pursue such art—even enjoy it. Juul provides a summary of positions philosophers have offered to explain this behavior, categorized as follows: deflationary arguments skirt the paradox by claiming that art doesn’t actually cause us pain in the first place; compensatory arguments acknowledge the pain, but claim that the sum of painful vs. pleasant reactions to art yield a net positive; and a-hedonistic arguments deny that humans are solely pleasure-seekers—some of us pursue pain.

    Juul’s commonsense response is that we should not limit human motivation to narrow, atemporal explanations. Instead, a synthesis of categories is possible, because we can successfully manage multiple contradictory desires based on immediate and long-term (i.e., aesthetic) time frames. He writes, ‘Our moment-to-moment desire to avoid unpleasant experiences is at odds with a longer-term aesthetic desire in which we understand failure, tragedy, and general unpleasantness to be necessary for our experience’ (115). In Dark Souls II, I faced a particularly challenging section early on when my character, a sorcerer, was under-powered and under-equipped to face a strong, agile boss known as The Pursuer. I spent close to four hours running the same path to the boss, dying dozens of times, with no net progress.

    Facing the Pursuer

    For Juul, my continued persistence did not betray a masochistic personality flaw (not that I didn’t consider it), nor would he trivialize my frustration (which I certainly felt), nor would he argue that I was eking out more pleasure than pain during my repeated trials (I certainly wasn’t). Instead, I was tolerating immediate failure in pursuit of a distant aesthetic goal, one that would not arrive during that game session—or many sessions to come. And indeed, this is why Juul calls games the ‘art of failure,’ because ‘games hurt us and then induce an urgency to repair our self-image’ (45). I could only overcome the Pursuer if I learned to play better. Juul writes, ‘Failure is integral to the enjoyment of game playing in a way that it is not integral to the enjoyment of learning in general. Games are a perspective on failure and learning as enjoyment, or satisfaction’ (45). Failure is part of what makes a game a game.

    Chapter 3 proceeds to the psychological lens, allowing Juul to review the myriad ways we experience failure emotionally. For many games, the impact can be significant: ‘To play a game is to take an emotional gamble. The higher the stakes, in terms of time investment, public acknowledgement, and personal importance, the higher are the potential losses and rewards’ (57). Failure doesn’t feel good, but again, paradoxically, we must first accept responsibility for our failures in order to then learn from them. ‘Once we accept responsibility,’ Juul writes, ‘failure also concretely pushes us to search for new strategies and learning opportunities in a game’ (116). But why can’t we learn without the painful consequences? Because most of us need prodding to be the best players we can be. In the absence of failure, players will cheese and cheat their way to favorable outcomes (59).

    Juul concludes that games help us grow—‘we come away from any skill-based game changed, wiser, and possessing new skills’ (59)—but his more interesting point is how we buffer the emotional toll of failure by diverting or transforming it. ‘Self-defeating’ players react to failure by lessening their efforts, a laissez-faire attitude that makes failure expected and thus less painful. ‘Spectacular’ failures, on the other hand, elevate negativity to an aesthetic focal point. When I laugh at the quivering pile of polygons clipped halfway through the floor geometry by the Pursuer’s blade, I’m no longer lamenting my own failure but celebrating the game’s.

    Chapter 4 provides a broad view of how games are designed to make us fail and counters much conventional wisdom about prevailing design trends. For instance, many players complain that contemporary games are too easy, that we don’t fail enough, but Juul argues that those players are confusing failure with punishment. Failure is now designed to be more frequent than in the past, but punishment is far less severe. Death in early arcade or console games often meant total failure, resetting your progress to the beginning of the game. Death in Dark Souls II merely forfeits your souls in-hand—any spent souls, found items, gained levels, or cached equipment are permanent. Punishment certainly feels severe when you lose tens of thousands of souls, but the consequences are far less jarring than losing your final life in Ghosts ’n Goblins.

    Juul outlines three different paths through which games lead us to success or failure—skill, chance, and labor—but notes that his categories are neither exhaustive nor mutually exclusive (75, 82). The first category is likely the most familiar for frequent game players: ‘When we fail in a game of skill, we are therefore marked as deficient in a straightforward way: as lacking the skills required to play the game’ (74). When our skills fail us, we only have ourselves to blame. Chance, however, ‘marks us in a different way…as being on poor terms with the gods, or as simply unlucky, which is still a personal trait that we would rather not have’ (75). With chance in play, failure gains a cosmic significance.

    Labor is one of the newer design paths, characterized by the low-skill, slow-grind style of play frequently maligned in Farmville and its clones, but also found in better-regarded titles like World of Warcraft (and RPGs in general). In these games, failure has its lowest stakes: ‘Lack of success in a game of labor therefore does not mark us as lacking in skill or luck, but at worst as someone lazy (or too busy). For those who are afraid of failure, this is close to an ideal state. For those who think of games as personal struggles for improvement, games of labor are anathema’ (79). Juul’s last point is an important lesson for critics quick to dismiss the ‘click-to-win’ genre outright. For players averse to personal or cosmic failure, games of labor are a welcome respite.

    Juul’s final lens (chapter 5) examines fictional failure. ‘Most video games,’ he writes, ‘represent our failures and successes by letting our performance be mirrored by a protagonist (or society, etc.) in the game’s fictional world. When we are unhappy to have failed, a fictional character is also unhappy’ (117). Beginning with this conventional case, Juul then discusses games that subvert or challenge the presumed alignment of player/character interests, asking whether games can be tragic or present situations where character failure might be the desired outcome. While Juul concedes that ‘the self-destruction of the protagonist remains awkward,’ complicity—a sense of player regret when facing a character’s repugnant actions—offers a ‘better variation’ of game tragedy (117). Juul argues that complicity is unique to games, an experience that is ‘more personal and stronger than simply witnessing a fictional character performing the same actions’ (113). When I nudge my character into Majula’s pit, I’m no longer a witness—I’m a participant.

    The Art of Failure’s final chapter focuses the prior lenses’ viewpoints on failure into a humanistic concluding point: ‘Failure forces us to reconsider what we are doing, to learn. Failure connects us personally to the events in the game; it proves that we matter, that the world does not simply continue regardless of our actions’ (122). For those who already accept games as a meaningful, expressive medium, Juul’s conclusion may be unsurprising. But this kind of thoughtful optimism is also part of the book’s strength. Juul’s writing is approachable and jargon-free, and the Playful Thinking series’ focus on depth, readability, and pocket-size volumes makes The Art of Failure an ideal book to pass along to friends and colleagues who might question your ‘frivolous’ videogame hobby—or, more importantly, justify why you often spend hours swearing at the screen while purportedly in pursuit of ‘fun.’

    The final chapter also offers a tantalizingly brief analysis of how Juul’s lenses might refract outward, beyond games, to culture at large. Specifically targeting the now-widespread corporate practice of gamification, wherein game design principles are applied as motivators and performance measures for non-leisure activities (usually work), Juul reminds us that the technique often fails because workplace performance goals ‘rarely measure what they are supposed to measure’ (120). Games are ideal for performance measurement because of their peculiar teleology: ‘The value system that the goal of a game creates is not an artificial measure of the value of the player’s performance; the goal is what creates the value in the first place by assigning values to the possible outcomes of a game’ (121). This kind of pushback against digital idealism is an important reminder that games ‘are not a pixie dust of motivation to be sprinkled on any subject’ (10), and Juul leaves a lot of room for further development of his thesis beyond the narrow scope of videogames.

    For the converted, The Art of Failure provides cross-disciplinary insights into many of our unexamined play habits. While playing Dark Souls II, I frequently thought of Juul’s triumvirate of design paths. Dark Souls II is an exemplary hybrid—though much of your success is skill-based, chance and labor play significant roles. The algorithmic systems that govern item drops or boss attacks can often sway one’s fortunes toward success or failure, as many speedrunners would attest. And for all the ink spilt about Dark Souls II being a ‘hardcore’ game with ‘old-school’ challenge, success can also be won through skill-less labor. Summoning high-level allies to clear difficult paths or simply investing hours grinding souls to level your character are both viable supplements for chance and skill.

    But what of games that do not fit these paths? How do they contend with failure? There is a rich tradition of experimental or independent artgames, notgames, game poems, and the like that are designed with no path to failure. Standout examples like Proteus, Dys4ia, and Your Lover Has Turned Into a Flock of Birds require no skills beyond operating a keyboard or mouse, do not rely on chance, and require little time investment. Unsurprisingly, games like these are often targeted as ‘non-games,’ and Juul’s analysis leaves little room for games that skirt these borderlines. There is a subtext in The Art of Failure that draws distinctions between ‘good’ and ‘bad’ design. Early on, Juul writes that ‘(good) games are designed such that they give us a fair chance’ (7) and ‘for something to be a good game, and a game at all, we expect resistance and the possibility of failure’ (12).

    There are essentialist, formalist assumptions guiding Juul’s thesis, leading him to privilege games’ ‘unique’ qualities at the risk of further marginalizing genres, creators, and hybrid play practices that already operate at the margins. To argue that complicity is unique to games or that games are the art of failure is to make an unwarranted leap into medium specificity and draw borderlines that need not be drawn. Certainly other media can draw us into complicity, a path well-trodden in cinema’s exploration of voyeurism (Rear Window, Blow-Up) and extreme horror (Saw, Hostel). Can’t games simply be particularly strong at complicity, rather than its sole purveyor?

    I’m similarly unconvinced that games are the quintessential art of failure. Critics often contend that video games are unique as a medium in that they require a certain skill threshold to complete. While it is true that finishing Super Mario Bros. is different than watching the entirety of The Godfather, we can use Juul’s own multi-path model to understand how we might fail at other media. The latter example certainly requires more labor—one can play dozens of Super Mario runs during The Godfather’s 175-minute runtime. Further, watching a film lauded as one of history’s greatest carries unique expectations that many viewers may fail to satisfy, from the societal pressure to agree on its quality to the comprehensive faculties necessary to follow its narrative. Different failures arise from different media—I’ve failed reading Infinite Jest more than I’ve failed completing Dark Souls II. And any visit to a museum will teach you that many people feel as though they fail at modern art. Tackling Dark Souls II’s Pursuer or Barnett Newman’s Onement, I can be equally daunting.

    When scholars ask, as Juul does, what games can do, they must be careful that by doing so they do not also police what games can be. Failure is a compelling lens through which to examine our relationship to play, but we needn’t valorize it as the sole criterion for what counts as a game.
    _____


    Nathan Altice is an instructor of sound and game design at Virginia Commonwealth University and author of the platform study of the NES/Famicom, I AM ERROR (MIT, 2015). He writes at metopal.com and burns bridges at @circuitlions.

  • Adventures in Reading the American Novel

    Adventures in Reading the American Novel


    by Sean J. Kelly

    on Reading the American Novel 1780-1865 by Shirley Samuels

    Shirley Samuels’s Reading the American Novel 1780-1865 (2012) is an installment of the Reading the Novel series edited by Daniel R. Schwarz, a series dedicated to “provid[ing] practical introductions to reading the novel in both the British and Irish, and the American traditions.” While the volume does offer a “practical introduction” to the American novel of the antebellum era—its major themes, cultural contexts, and modes of production—its primary focus is the expansion of the American literary canon, particularly with regard to nineteenth-century women writers. In this respect, Samuels’s book continues a strong tradition of feminist cultural and historicist criticism pioneered by such landmark studies as Jane Tompkins’s Sensational Designs: The Cultural Work of American Fiction 1790-1860 (1985) and Cathy N. Davidson’s Revolution and the Word: The Rise of the Novel in America (1986). Tompkins’s explicit goal was to challenge the view of American literary history codified by F.O. Matthiessen’s monumental work, American Renaissance: Art and Expression in the Age of Emerson and Whitman (1941). In particular, Tompkins was concerned with reevaluating what she wryly termed the “other American Renaissance,” namely the “entire body of work” 1 of popular female sentimental writers such as Harriet Beecher Stowe, Maria Cummins, and Susan Warner, whose narratives “offer powerful examples of the way a culture thinks about itself.” 2

    Recent decades have witnessed a growing scholarly interest in not only expanding the literary canon through the rediscovery of “lost” works by women writers such as Tabitha Gilman Tenney3 and P.D. Manvill4, to name a few, but also reassessing how the study of nineteenth-century sentimentalism and material culture might complicate, extend, and enrich our present understandings of the works of such canonical figures as Cooper, Hawthorne, and Melville. In this critical vein, Samuels asks, “what happens when a student starts to read Nathaniel Hawthorne’s The Scarlet Letter (1850), not simply in relation to its Puritan setting but also in relation to the novels that surround it?” (160). Reading the American Novel engages in both of these critical enterprises—rediscovery and reassessment of nineteenth-century American literature—by promoting what she describes as “not a sequential, but a layered reading” (153). In her “Afterword,” Samuels explains:

    Such a reading produces a form of pleasure layered into alternatives and identities where metaphors of confinement or escape are often the most significant. What produces the emergence of spatial or visual relations often lies within the historical attention to geography, architecture, or music as elements in this fiction that might re-orient the reader. With such knowledge, the reader can ask the fiction to perform different functions. What happens here? The spatial imagining of towns and landscapes corresponds to the minute landscape of particular bodies in time. Through close attention to the movements of these bodies, the critic discovers not only new literatures, but also new histories (153).

    It is this “richly textured” (2) type of reading—a set of hermeneutic techniques to be deployed tactically across textual surfaces (including primary texts, marginalia, geographical locations, and “particular bodies in time” [153])—that leads, eventually, to Samuels’s, and the reader’s, greatest discoveries. The reader may find Samuels’s approach to be a bit disorienting initially. This is because Reading the American Novel does not trace the evolution of a central concept in the way that Elizabeth Barnes, in States of Sympathy: Seduction and Democracy in the American Novel (1997), follows the development of seduction from the late eighteenth century to the domestic fiction of the 1860s. Rather, Samuels introduces a constellation of loosely related motifs or what she later calls “possibilities for reading” (152)—“reading by waterways, by configurations of home, by blood and contract” (152)—that will provide the anchoring points for the set of disparate and innovative readings that follow.

    Samuels’s introductory chapter, “Introduction to the American Novel: From Charles Brockden Brown’s Gothic Novels to Caroline Kirkland’s Wilderness,” considers the development of the novel from the standpoint of cultural production and consumption, arguing that a nineteenth-century audience would have “assumed that the novel must act in the world” (4). In addition, Samuels briefly introduces the various motifs, themes, and sites of conflict (e.g. “Violence and the Novel,” “Nationalism,” “Landscapes and Houses,” “Crossing Borders,” “Water”) that will provide the conceptual frameworks for her layers of reading in the subsequent chapters. If her categories at first appear arbitrary, this is because, as Samuels points out, “the novel in the United States does not follow set patterns” (20). The complex conceptual topography introduced in Chapter 1 reflects the need for what she calls a “fractal critical attention, the ability to follow patterns that fold ideas into one another while admiring designs that appear to arise organically, as if without volition” (20).

    The second chapter of the book, “Historical Codes in Literary Analysis: The Writing Projects of Nathaniel Hawthorne, Elizabeth Stoddard, and Hannah Crafts,” examines the value of archival research by considering the ways in which “historical codes . . . includ[ing] abstractions such as iconography as well as the minutiae derived from historical research . . . are there to be interpreted and deciphered as much as to be deployed” (28). Samuels’s reading of Hawthorne, for example, links the fragmentary status of the author’s late work, The Dolliver Romance (1863-1864), to the more general “ideological fragmentation” (28) apparent in Hawthorne’s emotional exchange of letters with his editor, James T. Fields, concerning the representation of President Lincoln and his “increasing material difficulty of holding a pen” (25).

    Samuels’s third chapter, “Women, Blood, and Contract: Land Claims in Lydia Maria Child, Catharine Sedgwick, and James Fenimore Cooper,” explores the prevalence of “contracts involving women and blood” (45) in three early nineteenth-century historical romances, Child’s Hobomok (1824), Cooper’s The Last of the Mohicans (1826), and Sedgwick’s Hope Leslie (1827). In these works, Samuels argues, the struggle over national citizenship and westward expansion is dramatized against the “powerfully absent immediate context” (45) of racial politics. She maintains that in such dramas “the gift of women’s blood” (62)—often represented in the guise of romantic desire and sacrifice— “both obscures and exposes the contract of land” (62).

    Chapter four, “Black Rivers, Red Letters, and White Whales: Mobility and Desire in Catharine Williams, Nathaniel Hawthorne, and Herman Melville,” extends Samuels’s meditation on the figure of women’s bodies in relation to “the promise or threat of reproduction” (68) in the narrative of national identity; however, in her readings of Williams’ Fall River (1834), Hawthorne’s The Scarlet Letter (1850), and Melville’s Moby Dick (1851), the focus shifts from issues of land and contracts to the representation of water as symbolic of “national dispossession” (68) and “anxieties about birth” (68).

    Samuels’s fifth chapter, “Promoting the Nation in James Fenimore Cooper and Harriet Beecher Stowe,” returns to the question of the historical romance, critically examining how Cooper’s 1841 novel, The Deerslayer, might be read as evidence of “ambivalent nationalism” (102), as it links “early American nationalism and capitalism to violence against women and children” (109). Samuels then considers the possibility of applying such ambivalence to Stowe’s abolitionist vision for the future of America limned in Uncle Tom’s Cabin (1852), a vision founded, in part, on Stowe’s conceptual remapping of the Puritan jeremiad onto the abolitionist discourse of divine retribution and national apocalypse (111-112). Because Stowe “set out to produce a history of the United States that would have become obsolete in the moment of its telling” (111), Samuels argues that we witness a break in the development of historical fiction caused by the Civil War, a “gap” during which “the purpose of nationalism with respect to the historical novel changes” (113).

    Chapter six, “Women’s Worlds in the Nineteenth-Century Novel: Susan B. Warner, Elizabeth Stuart Phelps, Fanny Fern, E.D.E.N. Southworth, Harriet Wilson, and Louisa May Alcott,” and the book’s Afterword—in my opinion, the strongest sections of the book—survey a wide variety of nineteenth-century American women writers, including Warner, Fern, Southworth, Wilson, Alcott, Caroline Kirkland, and Julia Ward Howe, among others. These discussions explore the ways in which writing functions as a type of labor which “gives the woman a face with which to face the world” (145). Samuels seeks to challenge the over-simplification of “separate spheres” ideology (153) by offering careful critical attention to the ways in which the labor of writing shapes identities in a multiplicity of distinct cultural locations. Hence, Samuels writes: “It is difficult to summarize motifs that appear in women’s writing in the nineteenth century. To speak of women’s worlds in the novel raises the matter of: what women?” (143).

    Admittedly, there are moments when Samuels’s layered readings necessitate extended swaths of summary; the works that become the primary focus of Samuels’s analyses, such as Catharine Williams’ Fall River and the novels of Elizabeth Stuart Phelps and E.D.E.N. Southworth, may be unfamiliar to many readers. At other instances, the very intricacy, novelty, and ambitiousness of Samuels’s reading performances begin to challenge the reader’s desire for linear consistency. Her interpretive strategies, which prioritize reading at the margins, the textual rendering of historical codes, and provocative juxtapositions, produce, at times, a kind of tunneling effect. The reader is swept breathlessly along, relieved when the author pauses to say: “But to return to my opening question” (82). Ultimately however, Samuels’s critical approaches throughout this book pose an important challenge to our conventional ways of assigning value and significance to nineteenth-century popular fiction. By reading canonical works such as Moby Dick and The Scarlet Letter with and against the popular crime novel Fall River, for example, she is able to map similarities between all three works in order to create “a more complete fiction” (83). All of these novels, she writes, “lure New Englanders to die. To read them together is to recover the bodies of laboring women and men from watery depths” (83). This type of creative reading, to invoke Ralph Waldo Emerson’s phrase, allows us potentially to tease out significant conflicts and tensions in well-known works that might have otherwise remained invisible in a conventional reading. “What happens,” she asks, “when we remember that Captain Ahab is a father?” (83). Because Samuels offers not only insightful interpretations of nineteenth-century American novels but also introduces new and creative ways to read—and ways to think about the meaning of reading as a critical practice—Reading the American Novel must be viewed as a valuable addition to American literary scholarship.

    _____

    Sean J. Kelly is Associate Professor of English at Wilkes University. His articles on nineteenth-century American literature and culture have recently appeared in PLL, The Edgar Allan Poe Review, and Short Story.

    _____

    notes:
    1. Tompkins, Jane. Sensational Designs: The Cultural Work of American Fiction 1790-1860. New York: Oxford UP, 1985. 147
    Back to the essay

    2. Ibid. xi
    Back to the essay

    3. Tenney, Tabitha Gilman. Female Quixotism: Exhibited in the Romantic Opinions and Extravagant Adventures of Dorcasina Sheldon. 1801. Intro. Cathy N. Davidson. New York: Oxford UP, 1992.
    Back to the essay

    4. Manvill, P.D. Lucinda; Or, the Mountain Mourner: Being Recent Facts, in a Series of Letters, from Mrs. Manvill, in the State of New York, to Her Sister in Pennsylvania. 1807. Intro. Mischelle B. Anthony. Syracuse: Syracuse UP, 2009.
    Back to the essay