b2o: boundary 2 online

Reviews and analysis of scholarly books about digital technology and culture, as well as of articles, legal proceedings, videos, social media, digital humanities projects, and other emerging digital forms, offered from a humanist perspective, in which our primary intellectual commitment is to the deeply embedded texts, figures, themes, and politics that constitute human culture, regardless of the medium in which they occur.

  • Who Big Data Thinks We Are (When It Thinks We're Not Looking)

    a review of Christian Rudder, Dataclysm: Who We Are (When We Think No One’s Looking) (Crown, 2014)
    by Cathy O’Neil
    ~
    Here’s what I’ve spent the last couple of days doing: alternately reading Christian Rudder’s new book Dataclysm and proofreading a report by AAPOR which discusses the benefits, dangers, and ethics of using big data, which is mostly “found” data originally meant for some other purpose, as a replacement for public surveys, with their carefully constructed data collection processes and informed consent. The AAPOR folk have asked me to provide tangible examples of the dangers of using big data to infer things about public opinion, and I am tempted to simply ask them all to read Dataclysm as exhibit A.

    Rudder is a co-founder of OKCupid, an online dating site. His book mainly pertains to how people search for love and sex online, and how they represent themselves in their profiles.

    Here’s something I’ll mention as context for his data explorations: Rudder likes to provoke crudely, as he displayed when he wrote this recent post explaining how OKCupid experiments on users. He enjoys playing the part of the somewhat creepy detective, peering into what OKCupid users thought was a somewhat private place to prepare themselves for the dating world. It’s the online equivalent of a video camera in a changing booth at a department store, which he defended not-so-subtly on a recent episode of the NPR show On The Media, and which was written up here.

    I won’t dwell on that aspect of the story because I think it’s a good and timely conversation, and I’m glad the public is finally waking up to what I’ve known for years is going on. I’m actually happy Rudder is so nonchalant about it because there’s no pretense.

    Even so, I’m less happy with his actual data work. Let me tell you why I say that with a few examples.

    Who Are OKCupid Users?

    I spent a lot of time with my students this summer saying that a standalone number wouldn’t be interesting, that you have to compare that number to some baseline that people can understand. So if I told you how many black kids have been stopped and frisked this year in NYC, I’d also need to tell you how many black kids live in NYC for you to get an idea of the scope of the issue. It’s a basic fact about data analysis and reporting.

    When you’re dealing with populations on dating sites and you want to conclude things about the larger culture, the relevant “baseline comparison” is how well the members of the dating site represent the population as a whole. Rudder doesn’t do this. Instead he just tells us there are lots of OKCupid users for the first few chapters, and then, after he’s made a few spectacularly broad statements, on page 104 he compares OKCupid users to internet users at large, but not to the general population.

    It’s an inappropriate baseline, made too late. I’m not sure about you, but I don’t have a keen sense of the population of internet users. I’m pretty sure very young kids and old people are not well represented, but that’s about it. My students would have known to compare a population to the census. It needs to happen.
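    To make the census point concrete, here is a toy version of the comparison Rudder never does (all numbers invented for illustration): a site’s age mix set against a census baseline, so you can see who is over- or under-represented.

```python
# Hypothetical user counts for a dating site, and hypothetical census
# shares of the adult population, by age bracket.
site_users = {"18-29": 6_000_000, "30-44": 3_000_000, "45-64": 900_000, "65+": 100_000}
census     = {"18-29": 0.21, "30-44": 0.25, "45-64": 0.33, "65+": 0.21}

total = sum(site_users.values())
for age, count in site_users.items():
    site_share = count / total
    ratio = site_share / census[age]  # > 1 means over-represented on the site
    print(f"{age}: site {site_share:.0%} vs census {census[age]:.0%} (ratio {ratio:.2f})")
```

    With numbers like these, any “people in their fifties struggle to get dates” claim would first have to be checked against the fact that people in their fifties barely use the site at all relative to their census share.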

    How Do You Collect Your Data?

    Let me back up to the very beginning of the book, where Rudder startles us by showing that the men women rate “most attractive” are about their own age, whereas the women men rate “most attractive” are consistently 20 years old, no matter how old the men are.

    Actually, I am projecting. Rudder never actually tells us what the rating is, how exactly it’s worded, and how the profiles are presented to the different groups. And that’s a problem, which he ignores completely until much later in the book, when he mentions that how survey questions are worded can have a profound effect on how people respond, but his target is someone else’s survey, not his OKCupid environment.

    Words matter, and they matter differently for men and women. For example, if there were a button for “eye candy,” we might expect women to choose young men more often. If my guess is correct, and the term in use is “most attractive,” then for men it might well trigger a sexual concept whereas for women it might trigger a different social construct; indeed I would assume it does.

    Since this is a dating site, not a porn site, we are not filtering for purely visual appeal; we are looking for relationships. We are thinking beyond what turns us on physically and asking ourselves: who would we want to spend time with? Who would our family like us to be with? Who would make us attractive to ourselves? Those are different questions and provoke different answers. And they are culturally interesting questions, which Rudder never explores. A lost opportunity.

    Next, how does the recommendation engine work? I can well imagine that, once you’ve rated Profile A highly, there is an algorithm that finds Profile B such that “people who liked Profile A also liked Profile B.” If so, then there’s yet another reason to worry that the results Rudder describes are produced in part by the feedback loop the recommendation engine engenders. But he doesn’t explain how his data is collected, how it is prompted, or the exact words that are used.

    Here’s a clue that Rudder is confused by his own facile interpretations: men and women both state that they are looking for relationships with people around their own age or slightly younger, and they end up messaging people slightly younger than they are, but not many years younger. So forty-year-old men do not message twenty-year-old women.

    Is this sad sexual frustration? Is this, in Rudder’s words, the difference between what they claim they want and what they really want behind closed doors? Not at all. It is more likely the difference between how we live our fantasies and how we realistically see our future.

    Need to Control for Population

    Here’s another frustrating bit from the book: Rudder talks about how hard it is for older people to get a date, but he doesn’t correct for population size. And since he never tells us how many OKCupid users are older, nor compares his users to the census, there’s no way to judge how big the effect really is.

    Here’s a graph from Rudder’s book showing the age of men who respond to women’s profiles of various ages:

    dataclysm chart 1

    We’re meant to be impressed with Rudder’s line, “for every 100 men interested in that twenty year old, there are only 9 looking for someone thirty years older.” But here’s the thing: maybe there are 20 times as many 20-year-olds as there are 50-year-olds on the site? In which case, yay for the 50-year-old chicks? After all, those histograms look pretty healthy in shape, and they might be differently sized simply because the population itself is drastically different at different ages.
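    Here’s a toy version of that objection (all counts invented for illustration): raw message totals can fall steeply with age even when per-capita interest is flat, simply because there are fewer older users on the site.

```python
# Hypothetical counts: raw interest seems to collapse with age, but so
# does the number of women of each age on the site.
messages_received = {20: 10_000, 30: 4_500, 40: 1_800, 50: 900}  # raw messages
women_on_site     = {20: 5_000, 30: 2_500, 40: 1_000, 50: 450}   # users per age

for age in messages_received:
    per_capita = messages_received[age] / women_on_site[age]
    print(f"age {age}: {messages_received[age]} messages, {per_capita:.1f} per woman")
```

    In this made-up example the 20-year-olds receive eleven times the raw messages of the 50-year-olds, yet per capita the two groups do equally well; that is exactly why the denominator has to be reported alongside the count.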

    Confounding

    One of the worst examples of statistical mistakes is his experiment in turning off pictures. Rudder ignores the concept of confounders altogether, though he is miraculously aware of it again in the next chapter, on race.

    To be more precise, Rudder talks about the experiment when OKCupid turned off pictures. Most people went away when this happened but certain people did not:

    dataclysm chart 2

    Some of the people who stayed on went on a “blind date.” Those people, whom Rudder called the “intrepid few,” had a good time with people no matter how unattractive those people were deemed to be by OKCupid’s system of rating attractiveness. His conclusion: people are preselecting for attractiveness, which is actually unimportant to them.

    But here’s the thing: that’s only true for people who were willing to go on blind dates. What he’s done is select for people who are not superficial about looks, and then collect data suggesting they are not superficial about looks. That doesn’t mean OKCupid users as a whole are not superficial about looks. The ones who are just got the hell out when the pictures went dark.
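    The selection effect is easy to simulate. In this toy sketch (synthetic data, invented cutoff), users get a “superficiality” score, and only the least superficial stay when pictures go dark; the stayers then look indifferent to looks purely by construction.

```python
import random

random.seed(0)
# Synthetic users: superficiality scores uniform in [0, 1]. Highly
# superficial users leave when pictures are turned off.
users = [random.random() for _ in range(10_000)]
stayers = [s for s in users if s < 0.2]  # only the non-superficial remain

avg_all = sum(users) / len(users)
avg_stayers = sum(stayers) / len(stayers)
print(f"avg superficiality, all users: {avg_all:.2f}")
print(f"avg superficiality, stayers:   {avg_stayers:.2f}")
# Concluding from the stayers that "users don't care about looks" is
# exactly the selection bias described above: the sample was filtered
# on the very trait being measured.
```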

    Race

    This brings me to the most interesting part of the book, where Rudder explores race. Again, it ends up being too blunt by far.

    Here’s the thing. Race is a big deal in this country, and racism is a heavy charge to fire at people, so you need to be careful with it, and that care matters because the subject is important. The way Rudder throws the term around is careless, and he risks rendering it meaningless by not having a careful discussion. The frustrating part is that I think he actually has the data to support a very good discussion; he just doesn’t make the case as the book is written.

    Rudder pulls together stats on how men of all races rate women of all races on an attractiveness scale of 1-5. It shows that non-black men find women of their own race attractive and find black women, in general, less attractive. Interesting, especially when you immediately follow that up with similar stats from other U.S. dating sites and – most importantly – with the fact that outside the U.S. we do not see this pattern. Unfortunately that crucial fact is buried at the end of the chapter, and instead we get this embarrassing quote right after the opening stats:

    And an unintentionally hilarious 84 percent of users answered this match question:

    Would you consider dating someone who has vocalized a strong negative bias toward a certain race of people?

    in the absolute negative (choosing “No” over “Yes” and “It depends”). In light of the previous data, that means 84 percent of people on OKCupid would not consider dating someone on OKCupid.

    Here Rudder just completely loses me. Am I “vocalizing” a strong negative bias towards black women if I am a white man who finds white women and Asian women hot?

    Especially if you consider that, as consumers of social platforms and sites like OKCupid, we are trained to rank all the products we come across in order to get better offerings, it is a step too far for the detective on the other side of the camera to turn around and point fingers at us for doing what we’re told. Indeed, this sentence plunges Rudder’s narrative deep into creepy and provocative territory, and he never fully returns, nor does he seem to want to. Rudder seems to confuse provocation with thoughtfulness.

    This is, again, a shame. The issues of what we are attracted to, what we can imagine doing, how we imagine that will look to our wider audience, and how our culture informs those imaginings are all in play here, and a careful conversation could have drawn them out in a non-accusatory and much more useful way.


    _____

    Cathy O’Neil is a data scientist and mathematician with experience in academia and the online ad and finance industries. She is one of the most prominent and outspoken women working in data science today, and was one of the guiding voices behind Occupy Finance, a book produced by the Occupy Wall Street Alt Banking group. She is the author of “On Being a Data Skeptic” (Amazon Kindle, 2013), and co-author with Rachel Schutt of Doing Data Science: Straight Talk from the Frontline (O’Reilly, 2013). Her Weapons of Math Destruction is forthcoming from Random House. She appears on the weekly Slate Money podcast hosted by Felix Salmon. She maintains the widely-read mathbabe blog, on which this review first appeared.

  • Frank Pasquale — Capital’s Offense: Law’s Entrenchment of Inequality (On Piketty, “Capital in the 21st Century”)

    a review of Thomas Piketty, Capital in the Twenty-First Century (Harvard University Press, 2014)

    by Frank Pasquale

    ~

    Thomas Piketty’s Capital in the Twenty-First Century has succeeded both commercially and as a work of scholarship. Capital’s empirical research is widely praised among economists—even by those who disagree with its policy prescriptions. It is also the best-selling book in the century-long history of Harvard University Press, and a rare work of scholarship to reach the top spot on Amazon sales rankings.[1]

    Capital’s main methodological contribution is to bring economic, sociological, and even literary perspectives to bear in a work of economics.[2] The book bridges positive and normative social science, offering strong policy recommendations for increased taxation of the wealthiest. It is also an exploration of historical trends.[3] In Capital, fifteen years of careful archival research culminate in a striking thesis: capitalism exacerbates inequality over time. There is no natural tendency for markets themselves, or even ordinary politics, to slow accumulation by top earners.[4]

    This review explains Piketty’s analysis and its relevance to law and social theory, drawing lessons for the re-emerging field of political economy. Piketty’s focus on long-term trends in inequality suggests that many problems traditionally explained as sector-specific (such as varied educational outcomes) are epiphenomenal with regard to increasingly unequal access to income and capital. Nor will a narrowing of purported “skills gaps” do much to improve economic security, since opportunity to earn money via labor matters far less in a world where capital is the key to enduring purchasing power. Policymakers and attorneys ignore Piketty at their peril, lest isolated projects of reform end up as little more than rearranging deck chairs amidst titanically unequal opportunities.

    Inequality, Opportunity, and the Rigged Game

    Capital weaves together description and prescription, facts and values, economics, politics, and history, with an assured and graceful touch. So clear is Piketty’s reasoning, and so compelling the enormous data apparatus he brings to bear, that few can doubt he has fundamentally altered our appreciation of the scope, duration, and intensity of inequality.[5]

    Piketty’s basic finding is that, absent extraordinary political interventions, the rate of return on capital (r) is greater than the rate of growth of the economy generally (g), which Piketty expresses via the now-famous formula r > g.[6] He finds that this relationship persists over time, and in the many countries with reliable data on wealth and income.[7] This simple inequality relationship has many troubling implications, especially in light of historical conflicts between capital and labor.
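    A back-of-the-envelope derivation (my sketch, not Piketty’s own presentation) shows why even a small, persistent gap between r and g compounds into the divergence he documents: if a stock of wealth W compounds at rate r while national income Y grows at rate g, the wealth-to-income ratio grows exponentially in their difference.

```latex
W_t = W_0\,e^{rt}, \qquad Y_t = Y_0\,e^{gt}
\quad\Longrightarrow\quad
\frac{W_t}{Y_t} = \frac{W_0}{Y_0}\,e^{(r-g)t}
```

    With illustrative values of r = 5% and g = 1.5%, the ratio doubles roughly every ln(2)/0.035 ≈ 20 years, so the gap need not be dramatic in any one year to reshape the distribution of wealth over a generation.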

    Most persons support themselves primarily by wages—that is, what they earn from their labor. As capital takes more of economic output (an implication of r > g persisting over time), less is left for labor. Thus if we are concerned about unequal incomes and living standards, we cannot simply hope for a rising tide of growth to lift the fortunes of those in the bottom quintiles of the income and wealth distribution.  As capital concentrates, its owners take an ever larger share of income—unless law intervenes and demands some form of redistribution.[8] As the chart below (by Bard economist Pavlina Tcherneva, based on Piketty’s data) shows, we have now reached the point where the US economy is not simply distributing the lion’s share of economic gains to top earners; it is actively redistributing extant income of lower decile earners upwards:

    chart of doom

    In 2011, 93% of the gains in income during the economic “recovery” went to the top 1%.  From 2009 to 2011, “income gains to the top 1% … were 121% of all income increases,” because “incomes to the bottom 99% fell by 0.4%.”[9] The trend continued through 2012.
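    The over-100-percent figure is simple accounting, not a typo: when the bottom 99%’s incomes shrink, the top 1%’s gain exceeds the total net gain. A quick check in round, arbitrary units:

```python
# If net income gains are G and the top 1% captured 1.21 * G, the
# bottom 99% must have lost 0.21 * G: hence "121% of all income increases."
net_gain = 100.0        # all income growth over the period (arbitrary units)
top_1_gain = 121.0      # gain captured by the top 1%
bottom_99_change = net_gain - top_1_gain

print(bottom_99_change)        # negative: the bottom 99% lost income
print(top_1_gain / net_gain)   # the "121%" share
```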

    Fractal inequality prevails up and down the income scale.[10] The top 15,000 tax returns in the US reported an average taxable income of $26 million in 2005—at least 400 times greater than the median return.[11] Moreover, Larry Bartels’s book, Unequal Democracy, graphs these trends over decades.[12] Bartels shows that, from 1945-2007, the 95th percentile did much better than those at lower percentiles.[13] He then shows how those at the 99.99th percentile did spectacularly better than those at the 99.9th, 99.5th, 99th, and 95th percentiles.[14] There is some evidence that even within that top 99.99th percentile, inequality reigned. In 2005, the “Fortunate 400”—the 400 households with the highest earnings in the U.S.—made on average $213.9 million apiece, and the cutoff for entry into this group was a $100 million income—about four times the average income of $26 million prevailing in the top 15,000 returns.[15] As Danny Dorling observed in a recent presentation at the RSA, for those at the bottom of the 1%, it can feel increasingly difficult to “keep up with the Joneses,” Adelsons, and Waltons. Runaway incomes at the very top leave those slightly below the “ultra-high net worth individual” (UHNWI) cut-off ill-inclined to spread their own wealth to the 99%.

    Thus inequality was well-documented in these, and many other works, by the time Piketty published Capital—indeed, other authors often relied on the interim reports released by Piketty and his team of fellow inequality researchers over the past two decades.[16] The great contribution of Capital is to vastly expand the scope of the inquiry, over space and time. The book examines records in France going back to the 19th century, and decades of data in Germany, Japan, Great Britain, Sweden, India, China, Portugal, Spain, Argentina, Switzerland, and the United States.[17]

    The results are strikingly similar. The concentration of capital (any asset that generates income or gains in monetary value) is a natural concomitant of economic growth under capitalism—and tends to intensify if growth slows or stops.[18] Inherited fortunes become more important than those earned via labor, since the “miracle of compound interest” overwhelms any particularly hard-working person or ingenious idea. Once fortunes grow large enough, their owners can simply live off the interest and dividends they generate, without ever drawing on the principal. At the “escape velocity” enjoyed by some foundations and ultra-rich individuals, annual expenses are far less than annual income, precipitating ever-greater principal. This is Warren Buffett’s classic “snowball” of wealth—and we should not underestimate its ability to purchase the political favors that help constitute Buffettian “moats” around the businesses favored by the likes of Berkshire Hathaway.[19] Dynasties form and entrench their power. If they can make capital pricey enough, even extraordinary innovations may primarily benefit their financiers.
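    The “snowball” is easy to illustrate with a toy compounding calculation (all figures invented): a fortune earning 5% whose owner spends only 2% of it each year keeps growing forever, while savings that merely track economic growth fall ever further behind.

```python
fortune, wages_saved = 100_000_000.0, 100_000.0
r, g, spend_rate = 0.05, 0.015, 0.02   # return on capital, growth, annual spending

for year in range(50):
    fortune *= 1 + r - spend_rate      # income exceeds expenses: principal grows
    wages_saved *= 1 + g               # labor savings only track economic growth

print(f"fortune after 50 years:      ${fortune:,.0f}")
print(f"wage savings after 50 years: ${wages_saved:,.0f}")
```

    Under these assumptions the fortune more than quadruples while the wage saver roughly doubles, and the absolute gap between them widens every single year: a numerical cartoon of r > g at work.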

    Deepening the Social Science of Political Economy

    Just as John Rawls’s Theory of Justice laid a foundation for decades of writing on social justice, Piketty’s work is so generative that one could envision whole social scientific fields revitalized by it.[20] Political economy is the most promising, a long tradition of (as Piketty puts it) studying the “ideal role of the state in the economic and social organization of a country.”[21] Integrating the long-divided fields of politics and economics, a renewal of modern political economy could unravel “wicked problems” neither states nor markets alone can address.[22]

    But the emphasis in Piketty’s definition of political economy on “a country,” versus countries, or the world, is in tension with the global solutions he recommends for the regulation of capital. The dream of neoliberal globalization was to unite the world via markets.[23] Anti-globalization activists have often advanced a rival vision of local self-determination, predicated on overlaps between political and economic boundaries. State-bound political economy could theorize those units. But the global economy is, at present, unforgiving of autarchy and unlikely to move towards it.

    Capital tends to slip the bonds of states, migrating to tax havens. In the rarefied world of the global super-rich, financial privacy is a purchasable commodity.  Certainly there are always risks of discovery, or being taken advantage of by a disreputable tax shelter broker or shady foreign bank.  But for many wealthy individuals, tax havenry has been a rite of passage on the way to membership in a shadowy global elite. Piketty’s proposed global wealth tax would need international enforcement—for even the Foreign Accounts Tax Compliance Act (FATCA) imposed via America’s fading hegemony (and praised by Piketty) has only begun to address the problem of hidden (or runaway) wealth (and income).[24]

    It will be very difficult to track down the world’s hidden fortunes and tax them properly. Had Piketty consulted more legal sources, he might have acknowledged the problem more adequately in Capital. He recommends “automatic information exchange” among tax authorities, which is an excellent principle to improve enforcement. But actually implementing this principle could require fine-grained regulation of IT systems, deployment of whole new types of surveillance, and even uniform coding (via, say, standard legal entity identifiers, or LEIs) globally. More frankly acknowledging the difficulty of shepherding such legislation globally could have led to a more convincing (and comprehensive) examination of the shortcomings of globalized capitalism.

    In several extended interviews on Capital (with CNN Money, Econtalk, The New York Times, Huffington Post, and the New Republic, among others), Piketty pledges fealty to markets, praising their power to promote production and innovation. Never using the term “industrial policy” in his book, Piketty hopes that law may make the bounty of extant economic arrangements accessible to all, rather than changing the nature of those arrangements. But we need to begin to ask whether our very process of creating goods and services itself impedes better distribution of them.

    Unfortunately, mainstream economics itself often occludes this fundamental question. When distributive concerns arise, policymakers can either intervene substantively to reshape the benefits and burdens of commerce (a strategy economists tend to derogate as dirigisme), or, post hoc, use taxes and transfer programs to redistribute income and wealth. For establishment economists, redistribution (happening after initial allocations by “the market”) is almost always considered more efficient than “distortion” of markets by regulation, public provision, or “predistribution.”[25]

    Tax law has historically been our primary way of arranging such redistribution, and Piketty makes it a focus of the concluding part of his book, called “Regulating Capital.” Piketty laments the current state of tax reporting and enforcement. Very wealthy individuals have developed complex webs of shell entities to hide their true wealth and earnings.[26] As one journalist observed, “Behind a New York City deed, there may be a Delaware LLC, which may be managed by a shell company in the British Virgin Islands, which may be owned by a trust in the Isle of Man, which may have a bank account in Liechtenstein managed by the private banker in Geneva. The true owner behind the structure might be known only to the banker.”[27] This is the dark side of globalization: the hidden structures that shield the unscrupulous from accountability.[28]

    The most fundamental tool of tax secrecy is separation: between persons and their money, between corporations and the persons who control them, between beneficial and nominal controllers of wealth. When money can pass between countries as easily as digital files, skilled lawyers and accountants can make it impossible for tax authorities to uncover the beneficial owners of assets (and the income streams generated by those assets).

    Piketty believes that one way to address inequality is strict enforcement of laws like America’s FATCA.[29] But the United States cannot accomplish much without pervasive global cooperation.  Thus the international challenge of inequality haunts Capital. As money concentrates in an ever smaller global “superclass” (to use David J. Rothkopf’s term), it’s easier for it to escape any ruling authority.[30] John Chung has characterized today’s extraordinary concentrations of wealth as a “death of reference” in our monetary system and its replacement with “a total relativity.”[31] He notes that “[i]n 2007, the average amount of annual compensation for the top twenty-five highest paid hedge fund managers was $892 million;” in the past few years, individual annual incomes in the group have reached two, three, or four billion dollars.  Today’s greatest hoards of wealth are digitized, as easily moved and hidden as digital files.

    We have no idea what taxes may be due from trillions of dollars in offshore wealth, or to what purposes it is directed.[32] In less-developed countries, dictators and oligarchs smuggle ill-gotten gains abroad.  Groups like Global Financial Integrity and the Tax Justice Network estimate that illicit financial flows out of poor countries (and into richer ones, often via tax havens) are ten times greater than the total sum of all development aid—nearly $1 trillion per year.  Given that the total elimination of extreme global poverty could cost about $175 billion per year for twenty years, this is not a trivial loss of funds—completely apart from what the developing world loses in the way of investment when its wealthiest residents opt to stash cash in secrecy jurisdictions.[33]

    An adviser to the Tax Justice Network once said that assessing money kept offshore is an “exercise in night vision,” like trying to measure “the economic equivalent of an astrophysical black hole.”[34] Shell corporations can hide connections between persons and their money, between corporations and the persons who control them, between beneficial and nominal owners. When enforcers in one country try to connect all these dots, there is usually another secrecy jurisdiction willing to take in the assets of the conniving. As the Tax Justice Network’s “TaxCast” exposes on an almost monthly basis, victories for tax enforcement in one developed country tend to be counterbalanced by a slide away from transparency elsewhere.

    Thus when Piketty recommends that “the only way to obtain tangible results is to impose automatic sanctions not only on banks but also on countries that refuse to require their financial institutions” to report on wealth and income to proper taxing authorities, one has to wonder: what super-institution will impose the penalties? Is this to be an ancillary function of the WTO?[35] Similarly, equating the imposition of a tax on capital with “the stroke of a pen” (568) underestimates the complexity of implementing such a tax, and the predictable forms of resistance that the wealth defense industry will engage in.[36] All manner of societal and cultural, public and private, institutions will need to entrench such a tax if it is to be a stable corrective to the juggernaut of r > g.[37]

    Given how much else the book accomplishes, this demand may strike some as a cavil—something better accomplished by Piketty’s next work, or by an altogether different set of allied social scientists. But if Capital itself is supposed to model (rather than merely call for) a new discipline of political economy, it needs to provide more detail about the path from here to its prescriptions. Philosophers like Thomas Pogge and Leif Wenar, and lawyers like Terry Fisher and Talha Syed, have been quite creative in thinking through the actual institutional arrangements that could lead to better distribution of health care, health research, and revenues from natural resources.[38] They are not cited in Capital, but their work could have enriched its institutional analysis greatly.

    An emerging approach to financial affairs, known as the Legal Theory of Finance (LTF), also offers illumination here, and should guide future policy interventions. Led by Columbia Law Professor Katharina Pistor, an interdisciplinary research team of social scientists and attorneys has documented the ways in which law is constitutive of so-called financial markets.[39] Revitalizing the tradition of legal realism, Pistor has demonstrated the critical role of law in generating modern finance. Though law to some extent shapes all markets, in finance its role is most pronounced. The “products” traded are very little more than legal recognitions of obligations to buy or sell, own or owe. Their value can change utterly based on tiny changes to the bankruptcy code, SEC regulations, or myriad other laws and regulations.

    The legal theory of finance changes the dialogue about regulation of wealth.  The debate can now move beyond stale dichotomies like “state vs. market,” or even “law vs. technology.” While deregulationists mock the ability of regulators to “keep up with” the computational capacities of global banking networks, it is the regulators who made the rules that made the instantaneous, hidden transfer of financial assets so valuable in the first place. Such rules are not set in stone.

    The legal theory of finance also enables a more substantive dialogue about the central role of law in political economy. Not just tax rules, but also patent, trade, and finance regulation need to be reformed to make the wealthy accountable for productively deploying the wealth they have either earned or taken. Legal scholars have a crucial role to play in this debate—not merely as technocrats adjusting tax rules, but as advisors on a broad range of structural reforms that could ensure the economy’s rewards better reflected the relative contributions of labor, capital, and the environment.[40] Lawyers had a much more prominent role in the Federal Reserve when it was more responsive to workers’ concerns.[41]

    Imagined Critics as Unacknowledged Legislators

    A book is often influenced by its author’s imagined critics. Piketty, decorous in his prose style and public appearances, strains to fit his explosive results into the narrow range of analytical tools and policy proposals that august economists won’t deem “off the wall.”[42] Rather than deeply considering the legal and institutional challenges to global tax coordination, Piketty focuses on explaining in great detail the strengths and limitations of the data he and a team of researchers have been collecting for over a decade. But a renewed social science of political economy depends on economists’ ability to expand their imagined audience of critics, to those employing qualitative methodologies, to attorneys and policy experts working inside and outside the academy, and to activists and journalists with direct knowledge of the phenomena addressed.  Unfortunately, time that could have been valuably directed to that endeavor—either in writing Capital, or constructively shaping the extraordinary publicity the book received—has instead been diverted to shoring up the book’s reputation as rigorous economics, against skeptics who fault its use of data.

    To his credit, Piketty has won these fights on the data mavens’ own terms. The book’s most notable critic, Chris Giles at the Financial Times, tried to undermine Capital’s conclusions by trumping up purported ambiguities in wealth measurement. His critique was rapidly dispatched by many, including Piketty himself.[43] Indeed, as Neil Irwin observed, “Giles’s results point to a world at odds not just with Mr. Piketty’s data, but also with that by other scholars and with the intuition of anyone who has seen what townhouses in the Mayfair neighborhood of London are selling for these days.”[44]

    One wonders whether Giles reads his own paper. On any given day, extreme inequality is on display from one of its pages to the next. For example, in a special report on “the fragile middle,” Javier Blas noted that no more than 12% of Africans earned over $10 per day in 2010—a figure that has improved little, if at all, since 1980.[45] Meanwhile, in the House & Home section on the same day, Jane Owen lovingly described the grounds of the estate of “His Grace Henry Fitzroy, the 12th Duke of Grafton.” The grounds cost £40,000 to £50,000 a year to maintain, and were never “expected to do anything other than provide pleasure.”[46] England’s revanchist aristocracy makes regular appearances in the Financial Times’s “How to Spend It” section as well, and no wonder: as Oxfam reported in March 2014, Britain’s five richest families have more wealth than its twelve million poorest people.[47]

    Force and Capital

    The persistence of such inequalities is as much a matter of law (and the force behind it to, say, disperse protests and selectively enforce tax regulations) as it is a natural outgrowth of the economic forces driving r and g. To his credit, Piketty does highlight some of the more grotesque deployments of force on behalf of capital. He begins Part I (“Income and Capital”) and ends Part IV (“Regulating Capital”) by evoking the tragic strike at the Lonmin Mine in South Africa in August 2012. In that confrontation, “thirty-four strikers were shot dead” for demanding pay of about $1,400 a month (they were making about $700).[48] Piketty deploys the story to dramatize conflict over the share of income going to capital versus labor. But it also illustrates dynamics of corruption. Margaret Kimberley of Black Agenda Report claims that the union involved was coopted thanks to the wealth of the man who once ran it.[49] The same dynamics shine through documentaries like Big Men (on Ghana), or the many nonfiction works on oil exploitation in Africa.[50]

    Piketty observes that “foreign companies and stockholders are at least as guilty as unscrupulous African elites” in promoting the “pillage” of the continent.[51] Consider the state of Equatorial Guinea, which struck oil in 1995. By 2006, Equatoguineans had the third highest per capita income in the world, higher than many prosperous European countries.[52] Yet the typical citizen remains very poor.[53] In the middle of the oil boom, an international observer noted that “I was unable to see any improvements in the living standards of ordinary people. In 2005, nearly half of all children under five were malnourished,” and “[e]ven major cities lack[ed] clean water and basic sanitation.”[54] The government has not demonstrated that things have improved much since then, despite ample opportunity to do so. Poorly paid soldiers routinely shake people down for bribes, and the country’s president, Teodoro Obiang, has paid Moroccan mercenaries for his own protection. A 2009 book noted that tensions in the country had reached a boiling point, as the “local Bubi people of Malabo” felt “invaded” by oil interests, other regions were “abandoned,” and self-determination movements decried environmental and human rights abuses.[55]

    So who did benefit from Equatorial Guinea’s oil boom? Multinational oil companies, to be sure, though we may never know exactly how much profit the country generated for them—their accounting was (and remains) opaque. Riggs Bank in Washington, D.C., gladly handled President Obiang’s accounts as he grew very wealthy. Though his salary was reported to be $60,000 a year, he had a net worth of roughly $600 million by 2011.[56] (Consider, too, that such a fortune would not even register on recent lists of the world’s 1,500 or so billionaires, and is barely more than 1/80th the wealth of a single Koch brother.) Most of the oil companies’ payments to him remain shrouded in secrecy, but a few came to light in the wake of US investigations. For example, a US Senate report blasted him for personally taking $96 million of his nation’s $130 million in oil revenue in 1998, when a majority of his subjects were malnourished.[57]

    Obiang’s sordid record has provided a rare glimpse into some of the darkest corners of the global economy.  But his story is only the tip of an iceberg of a much vaster shadow economy of illicit financial flows, secrecy jurisdictions, and tax evasion. Obiang could afford to be sloppy: as the head of a sovereign state whose oil reserves gave it some geopolitical significance, he knew that powerful patrons could shield him from the fate of an ordinary looter.  Other members of the hectomillionaire class (and plenty of billionaires) take greater precautions.  They diversify their holdings into dozens or hundreds of entities, avoiding public scrutiny with shell companies and pliant private bankers.  A hidden hoard of tens of trillions of dollars has accumulated, and likely throws off hundreds of billions of dollars yearly in untaxed interest, dividends, and other returns.[58] This drives a wedge between a closed-circuit economy of extreme wealth and the ordinary patterns of exchange of the world’s less fortunate.[59]

    The Chinese writer and Nobel Peace Prize winner Liu Xiaobo once observed that corruption in Beijing had led to an officialization of the criminal and the criminalization of the official.[60] Persisting even in a world of brutal want and austerity-induced suffering, tax havenry epitomizes that sinister merger, and Piketty might have sharpened his critique further by focusing on this merger of politics and economics, of private gain and public governance. Authorities promote activities that would have once been proscribed; those who stand in the way of such “progress” might be jailed (or worse).  In Obiang’s Equatorial Guinea, we see similar dynamics, as the country’s leader extracts wealth at a volume that could only be dreamed of by a band of thieves.

    Obiang’s curiously double position, as Equatorial Guinea’s chief law maker and law breaker, reflects a deep reality of the global shadow economy.  And just as “shadow banks” are rivalling more regulated banks in terms of size and influence, shadow economy tactics are starting to overtake old standards. Tax avoidance techniques that were once condemned are becoming increasingly acceptable.  Campaigners like UK Uncut and the Tax Justice Network try to shame corporations for opportunistically allocating profits to low-tax jurisdictions.[61] But CEOs still brag about their corporate tax unit as a profit center.

    When some of Republican presidential candidate Mitt Romney’s recherché tax strategies were revealed in 2012, Barack Obama needled him repeatedly. The charges scarcely stuck, as Romney’s core constituencies aimed to emulate rather than punish their standard-bearer.[62] Obama then appointed a Treasury Secretary, Jack Lew, who had himself utilized a Cayman Islands account. Lew was the second Obama Treasury Secretary to suffer tax troubles: Tim Geithner, his predecessor, was also accused of “forgetting” to pay certain taxes in a self-serving way. And Obama’s billionaire Commerce Secretary Penny Pritzker was no stranger to complex tax avoidance strategies.[63]

    Tax attorneys may characterize Pritzker, Lew, Geithner, and Romney as different in kind from Obiang.  But any such distinctions they make will likely need to be moral, rather than legal, in nature.  Sure, these American elites operated within American law—but Obiang is the law of Equatorial Guinea, and could easily arrange for an administrative agency to bless his past actions (even developed legal systems permit retroactive rulemaking) or ensure the legality of all future actions (via safe harbors).  The mere fact that a tax avoidance scheme is “legal” should not count for much morally—particularly as those who gain from prior US tax tweaks use their fortunes to support the political candidacies of those who would further push the law in their favor.

    Shadowy financial flows exemplify the porous boundary between state and market.  The book Tax Havens: How Globalization Really Works argues that the line between savvy tax avoidance and illegal tax evasion (or strategic money transfers and forbidden money laundering) is blurring.[64] Between our stereotypical mental images of dishonest tycoons sipping margaritas under the palm trees of a Caribbean tax haven, and a state governor luring a firm by granting it a temporary tax abatement, lie hundreds of subtler scenarios.  Dingy rows of Delaware, Nevada, and Wyoming file cabinets can often accomplish the same purpose as incorporating in Belize or Panama: hiding the real beneficiaries of economic activity.[65] And as one wag put it to journalist Nicholas Shaxson, “the most important tax haven in the world is an island”—”Manhattan.”[66]

    In a world where “tax competition” is a key to neoliberal globalization, it is hard to see how a global wealth tax (even if set at the very low levels Piketty proposes) supports (rather than directly attacks) the existing market order. Political elites are racing to reduce tax liability to curry favor with the wealthy companies and individuals they hope to lure, serve, and bill. The ultimate logic of that competition is a world made over in the image of Obiang’s Equatorial Guinea: crumbling infrastructure and impoverished citizenries coexisting with extreme luxury for a global extractive elite and its local enablers. Books like Third World America, Oligarchy, and Captive Audience have already started chronicling the failure of the US tax system to fund roads, bridges, universal broadband internet connectivity, and disaster preparation.[67] As tax-avoiding elites parlay their gains into lobbying for rules that make tax avoidance even easier, self-reinforcing inequality seems all but inevitable. Wealthy interests can simply fund campaigns to reduce their taxes, or to reduce the risk of enforcement to a nullity. As Ben Kunkel pointedly asks, “How are the executive committees of the ruling class in countries across the world to act in concert to impose Piketty’s tax on just this class?”[68]

    US history is instructive here. Congress passed a tax on the top 0.1% of earners in 1894, only to see the Supreme Court strike the tax down in a five to four decision.  After the 16th Amendment effectively repealed that Supreme Court decision, Congress steadily increased the tax on high income households.  From 1915 to 1918, the highest rate went from 7% to 77%, and over fifty-six tax brackets were set.  When high taxes were maintained for the wealthy after the war, tax evasion flourished.  At this point, as Jeffrey Winters writes, the government had to choose whether to “beef up law enforcement against oligarchs … , or abandon the effort and instead squeeze the same resources from citizens with far less material clout to fight back.”[69] Enforcement ebbed and flowed. But since then, what began by targeting the very wealthy has grown to include “a mass tax that burdens oligarchs at the same effective rate as their office staff and landscapers.”[70]

    The undertaxation of America’s wealthy has helped them capture key political processes, and in turn demand even less taxation.  The dynamic of circularity teaches us that there is no stable, static equilibrium to be achieved between regulators and regulated. The government is either pushing industry to realize some public values in its activities (say, by investing in sustainable growth), or industry is pushing its regulators to promote its own interests.[71] Piketty may worry that, if he too easily accepts this core tenet of politico-economic interdependence, he’ll be dismissed as a statist socialist. But until political economists do so, their work cannot do justice to the voices of those prematurely dead as a result of the relentless pursuit of profit—ranging from the Lonmin miners, to those crushed at Rana Plaza, to the spike of suicides provoked by European austerity and Indian microcredit gone wrong, to the thousands of Americans who will die early because they are stuck in states that refuse to expand Medicaid.[72] Contemporary political economy can only mature if capitalism’s ghosts constrain our theory and practice as pervasively as communism’s specter does.

    Renewing Political Economy

    Piketty has been compared to Alexis de Tocqueville: a French outsider capable of discerning truths about the United States that its own sages were too close to observe. The function social equality played in Tocqueville’s analysis is taken up by economic inequality in Piketty’s: a set of self-reinforcing trends fundamentally reshaping the social order.[73] I’ve written tens of thousands of words on this inequality, but words themselves may be outmatched by the numbers and force behind these trends.[74] As film director Alex Rivera puts it, in an interview with The New Inquiry:

    I don’t think we even have the vocabulary to talk about what we lose as contemporary virtualized capitalism produces these new disembodied labor relations. … The broad, hegemonic clarity is the knowledge that a capitalist enterprise has the right to seek out the cheapest wage and the right to configure itself globally to find it. … The next stage in this process…is for capital to configure itself to enable every single job to be put on the global market through the network.[75]

    Amazon’s “Mechanical Turk” has begun that process, supplying “turkers” to perform tasks at a penny each.[76] Uber, Lyft, TaskRabbit, and various “gig economy” imitators ensure that micro-labor is on the rise, leaving micro-wages in its wake.[77] Workers are shifting from paid vacation to stay-cation to “nano-cation” to “paid time off” to hoarding hours to cover the dry spells when work disappears.[78] These developments are all predictable consequences of a globalization premised on maximizing finance rents, top manager compensation, and returns to shareholders.

    Inequality is becoming more outrageous than even caricaturists used to dare. The richest woman in the world (Gina Rinehart) has advised fellow Australians to temper their wage demands, given that they are competing against Africans willing to work for two dollars a day.[79] Or consider the construct of Dogland, from Korzeniewicz and Moran’s 2009 book, Unveiling Inequality:

    The magnitude of global disparities can be illustrated by considering the life of dogs in the United States. According to a recent estimate … in 2007-2008 the average yearly expenses associated with owning a dog were $1425 … For sake of argument, let us pretend that these dogs in the US constitute their own nation, Dogland, with their average maintenance costs representing the average income of this nation of dogs.

    By such a standard, their income would place Dogland squarely as a middle-income nation, above countries such as Paraguay and Egypt. In fact, the income of Dogland would place its canine inhabitants above more than 40% of the world population. … And if we were to focus exclusively on health care expenditures, the gap becomes monumental: the average yearly expenditures in Dogland would be higher than health care expenditures in countries that account for over 80% of the world population.[80]

    Given disparities like these, wages cannot possibly reflect just desert: who can really argue that a basset hound, however adorable, has “earned” more than a Bangladeshi laborer? Cambridge economist Ha-Joon Chang asks us to compare the job and the pay of transport workers in Stockholm and Calcutta. “Skill” has little to do with it. The former, drivers on clean and well-kept roads, may easily be paid fifty times more than the latter, who may well be engaged in backbreaking, and very skilled, labor to negotiate passengers among teeming pedestrians, motorbikes, trucks, and cars.[81]

    Once “skill-biased technological change” is taken off the table, the classic economic rationale for such differentials focuses on the incentives necessary to induce labor. In Sweden, for example, the government ensures that a person is unlikely to starve, no matter how many hours a week he or she works. By contrast, in India, 42% of children under five years old are malnourished.[82] So while it takes $15 or $20 an hour just to get the Swedish worker to show up, the typical Indian can be motivated to labor for much less. But at this point the market rationale for the wage differential breaks down entirely, because the background expectation of earnings absent work is itself an epiphenomenon of state-guaranteed patterns of social insurance. The critical questions are: how did the Swedes generate adequate goods and services for their population, and the social commitment to redistribution necessary to assure that unemployment is not a death sentence? And how can such social arrangements create basic entitlements to food, housing, health care, and education around the world?

    Piketty’s proposals for regulating capital would be more compelling if they attempted to answer questions like those, rather than focusing on the dry, technocratic aim of tax-driven wealth redistribution. Moreover, even within the realm of tax law and policy, Piketty will need to grapple with several enforcement challenges if a global wealth tax is to succeed. But to its great credit, Capital adopts a methodology capacious enough to welcome the contributions of legal academics and a broad range of social scientists to the study (and remediation) of inequality.[83] It is now up to us to accept the invitation, realizing that if we refuse, accelerating inequality will undermine the relevance—and perhaps even the very existence—of independent legal authority.


    _____

    Frank Pasquale (@FrankPasquale) is a Professor of Law at the University of Maryland Carey School of Law. His forthcoming book, The Black Box Society: The Secret Algorithms that Control Money and Information (Harvard University Press, 2015), develops a social theory of reputation, search, and finance.  He blogs regularly at Concurring Opinions. He has received a commission from Triple Canopy to write and present on the political economy of automation. He is a member of the Council for Big Data, Ethics, and Society, and an Affiliate Fellow of Yale Law School’s Information Society Project.

    _____

    [1] Dennis Abrams, Piketty’s “Capital”: A Monster Hit for Harvard U Press, Publishing Perspectives, at http://publishingperspectives.com/2014/04/pilkettys-capital-a-monster-hit-for-harvard-u-press/ (Apr. 29, 2014).

    [2] Intriguingly, one leading economist who has done serious work on narrative in the field, Deirdre McCloskey, offers a radically different (and far more positive) perspective on the nature of economic growth under capitalism. Evan Thomas, Has Thomas Piketty Met His Match?, http://www.spectator.co.uk/features/9211721/unequal-battle/. But this is to be expected as richer methodologies inform economic analysis. Sometimes the best interpretive social science leads not to consensus, but to ever sharper disagreement about the nature of the phenomena it describes and evaluates. Rather than trying to bury normative differences in jargon or flatten them into commensurable cost-benefit calculations, it surfaces them.

    [3] As Thomas Jessen Adams argues, “to understand how inequality has been overcome in the past, we must understand it historically.” Adams, The Theater of Inequality, at http://nonsite.org/feature/the-theater-of-inequality. Adams critiques Piketty for failing to engage historical evidence properly. In this review, I celebrate the book’s bricolage of methodological approaches as the type of problem-driven research promoted by Ian Shapiro.

    [4] Thomas Piketty, Capital in the Twenty-First Century 17 (Arthur Goldhammer trans., 2014).

    [5] Doug Henwood, The Top of the World, Book Forum, Apr. 2014,  http://www.bookforum.com/inprint/021_01/12987; Suresh Naidu, Capital Eats the World, Jacobin (May 30, 2014), https://www.jacobinmag.com/2014/05/capital-eats-the-world/.

    [6] Thomas Piketty, Capital in the Twenty-First Century 25 (Arthur Goldhammer trans., 2014).

    [7] Id.

    [8] As Piketty observes, war and revolution can also serve this redistributive function. Piketty, supra note 4, at 20. Since I (and the vast majority of attorneys) do not consider violence a legitimate tool of social change, I do not include these options in my discussion of Piketty’s book.

    [9] Frank Pasquale, Access to Medicine in an Era of Fractal Inequality, 19 Annals of Health Law 269 (2010).

    [10] Charles R. Morris, The Two Trillion Dollar Meltdown: Easy Money, High Rollers, and the Great Credit Crash 139-40 (2009); see also Edward N. Wolff, Top Heavy: The Increasing Inequality of Wealth in America and What Can Be Done About It 36 (updated ed. 2002).

    [11] Yves Smith, Yes, Virginia, the Rich Continue to Get Richer: The Top 1% Get 121% of Income Gains Since 2009, Naked Capitalism (Feb. 13, 2013), http://www.nakedcapitalism.com/2013/02/yes-virginia-the-rich-continue-to-get-richer-the-1-got-121-of-income-gains-since-2009.html#XxsV2mERu5CyQaGE.99.

    [12] Larry M. Bartels, Unequal Democracy: The Political Economy of the New Gilded Age 8, 10 (2010).

    [13] Id. at 8.

    [14] Id. at 10.

    [15] Tom Herman, There’s Rich, and There’s the ‘Fortunate 400’, Wall St. J., Mar. 5, 2008, http://online.wsj.com/article/SB120468366051012473.html.

    [16] See Thomas Piketty & Emmanuel Saez, The Evolution of Top Incomes: A Historical and International Perspective, 96 Am. Econ. Rev. 200, 204 (2006). 

    [17] Piketty, supra note 4, at 17. Note that, given variations in the data, Piketty is careful to cabin the “geographical and historical boundaries of this study” (27), and must “focus primarily on the wealthy countries and proceed by extrapolation to poor and emerging countries” (28).

    [18] Id. at 46, 571 (“In this book, capital is defined as the sum total of nonhuman assets that can be owned and exchanged on some market. Capital includes all forms of real property (including residential real estate) as well as financial and professional capital (plants, infrastructure, machinery, patents, and so on) used by firms and government agencies.”).

    [19] Alice Schroeder, The Snowball: Warren Buffett and the Business of Life (Bantam-Dell, 2008); Adam Levine-Weinberg, Warren Buffett Loves a Good Moat, at http://www.fool.com/investing/general/2014/06/30/warren-buffett-loves-a-good-moat.aspx.

    [20] John Rawls, A Theory of Justice (1971).

    [21] Piketty, supra note 4, at 540.

    [22] Atul Gawande, Something Wicked This Way Comes, New Yorker (June 28, 2012), http://www.newyorker.com/news/daily-comment/something-wicked-this-way-comes.

    [23] Philip Mirowski, Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown (2013).

    [24] The Foreign Account Tax Compliance Act (FATCA) was passed in 2010 as part of the Hiring Incentives to Restore Employment Act, Pub. L. No. 111-147, 124 Stat. 71 (2010), codified in sections 1471 to 1474 of the Internal Revenue Code, 26 U.S.C. §§ 1471-1474.  The law is effective as of 2014. It requires foreign financial institutions (FFIs) to report financial information about accounts held by United States persons, or pay a withholding tax. Id.

    [25] Christopher William Sanchirico, Deconstructing the New Efficiency Rationale, 86 Cornell L. Rev. 1003, 1005 (2001).

    [26] Nicholas Shaxson, Treasure Islands: Uncovering the Damage of Offshore Banking and Tax Havens (2012); Jeanna Smialek, The 1% May be Richer than You Think, Bloomberg, Aug. 7, 2014, at http://www.bloomberg.com/news/2014-08-06/the-1-may-be-richer-than-you-think-research-shows.html (collecting economics research).

    [27] Andrew Rice, Stash Pad: The New York real-estate market is now the premier destination for wealthy foreigners with rubles, yuan, and dollars to hide, N.Y. Mag., June 29, 2014, at http://nymag.com/news/features/foreigners-hiding-money-new-york-real-estate-2014-6/#.

    [28] Ronen Palan, Richard Murphy, and Christian Chavagneux, Tax Havens: How Globalization Really Works 272 (2009) (“[m]ore than simple conduits for tax avoidance and evasion, tax havens actually belong to the broad world of finance, to the business of managing the monetary resources of individuals, organizations, and countries.  They have become among the most powerful instruments of globalization, one of the principal causes of global financial instability, and one of the large political issues of our times.”).

    [29] 26 U.S.C. § 1471-1474 (2012); Itai Grinberg, Beyond FATCA: An Evolutionary Moment for the International Tax System (Georgetown Law Faculty, Working Paper No. 160, 2012), available at http://scholarship.law.georgetown.edu/cgi/viewcontent.cgi?article=1162&context=fwps_papers.

    [30] David Rothkopf, Superclass: The Global Power Elite and the World They Are Making (2009).

    [31] John Chung, Money as Simulacrum: The Legal Nature and Reality of Money, 5 Hastings Bus. L.J. 109, 149 (2009).

    [32] James S. Henry, Tax Just. Network, The Price Of Offshore Revisited: New Estimates For “Missing” Global Private Wealth, Income, Inequality, And Lost Taxes 3 (2012), available at http://www.taxjustice.net/cms/upload/pdf/Price_of_Offshore_Revisited_120722.pdf; Scott Higham et al., Piercing the Secrecy of Offshore Tax Havens, Wash. Post (Apr. 6, 2013), http://www.washingtonpost.com/investigations/piercing-the-secrecy-of-offshore-tax-havens/2013/04/06/1551806c-7d50-11e2-a044-676856536b40_story.html.

    [33] Dev Kar & Devon Cartwright‐Smith, Center for Int’l Pol’y, Illicit Financial Flows from Developing Countries: 2002-2006 (2012); Jeffrey Sachs, The End of Poverty: Economic Possibilities for Our Time (2006); Ben Harack, How Much Would it Cost to End Extreme Poverty in the World?, Vision Earth, (Aug. 26, 2011), http://www.visionofearth.org/economics/ending-poverty/how-much-would-it-cost-to-end-extreme-poverty-in-the-world/.

    [34] Henry, supra note 32.

    [35] Piketty, supra note 4, at 523.

    [36] Jeffrey Winters coined the term “wealth defense industry” in his book, Oligarchy. See Frank Pasquale, Understanding Wealth Defense: Direct Action from the 0.1%, at http://www.concurringopinions.com/archives/2011/11/understanding-wealth-defense-direct-action-from-the-0-1.html.

    [37] For a similar argument, focusing on the historical specificity of the US parallel to the trente glorieuses, see  Thomas Jessen Adams, The Theater of Inequality, http://nonsite.org/feature/the-theater-of-inequality.

    [38] Thomas Pogge, The Health Impact Fund: Boosting Pharmaceutical Innovation Without Obstructing Free Access, 18 Cambridge Q. Healthcare Ethics 78 (2008) (proposing a global R&D fund); William W. Fisher III, Promises to Keep: Technology, Law, and the Future of Entertainment (2004); William W. Fisher & Talha Syed, Global Justice in Healthcare: Developing Drugs for the Developing World, 40 U.C. Davis L. Rev. 581 (2006).

    [39] Katharina Pistor, A Legal Theory of Finance, 41 J. Comp. Econ. 315 (2013); Law in Finance, 41 J. Comp. Econ (2013). Several other articles in the same journal issue discuss the implications of LTF for derivatives, foreign currency exchange, and central banking.

    [40] University of Chicago Law Professor Eric A. Posner and economist Glen Weyl recognize this in their review of Piketty, arguing that “the fundamental problem facing American capitalism is not the high rate of return on capital relative to economic growth that Piketty highlights, but the radical deviation from the just rewards of the marketplace that have crept into our society and increasingly drives talented students out of innovation and into finance.”  Posner & Weyl, Thomas Piketty Is Wrong: America Will Never Look Like a Jane Austen Novel, The New Republic, July 31, 2014, at http://www.newrepublic.com/article/118925/pikettys-capital-theory-misunderstands-inherited-wealth-today. See also Timothy A. Canova, The Federal Reserve We Need, 21 American Prospect 9 (October 2010), at http://prospect.org/article/federal-reserve-we-need.

    [41] Timothy Canova, The Federal Reserve We Need: It’s the Fed We Once Had, at http://prospect.org/article/federal-reserve-we-need; Justin Fox, How Economics PhDs Took Over the Federal Reserve, at http://blogs.hbr.org/2014/02/how-economics-phds-took-over-the-federal-reserve/.

    [42] Jack M. Balkin, From Off the Wall to On the Wall: How the Mandate Challenge Went Mainstream, Atlantic (June 4, 2012, 2:55 PM), http://www.theatlantic.com/national/archive/2012/06/from-off-the-wall-to-on-the-wall-how-the-mandate-challenge-went-mainstream/258040/ (Jack Balkin has described how certain arguments go from being ‘off the wall’ to respectable in constitutional thought; economists have yet to take up that deflationary nomenclature for the evolution of ideas in their own field’s intellectual history. That helps explain the rising power of economists vis-à-vis lawyers, since the latter field’s honesty about the vagaries of its development diminishes its authority as a ‘science.’). For more on the political consequences of the philosophy of social science, see Jamie Cohen-Cole, The Open Mind: Cold War Politics and the Sciences of Human Nature (2014), and Joel Isaac, Working Knowledge: Making the Human Sciences from Parsons to Kuhn (2012).

    [43] Chris Giles, Piketty Findings Undercut by Errors, Fin. Times (May 23, 2014, 7:00 PM), http://www.ft.com/intl/cms/s/2/e1f343ca-e281-11e3-89fd-00144feabdc0.html#axzz399nSmEKj; Thomas Piketty, Addendum: Response to FT, Thomas Piketty (May 28, 2014), http://piketty.pse.ens.fr/files/capital21c/en/Piketty2014TechnicalAppendixResponsetoFT.pdf; Felix Salmon, The Piketty Pessimist, Reuters (April 25, 2014), http://blogs.reuters.com/felix-salmon/2014/04/25/the-piketty-pessimist/.

    [44] Neil Irwin, Everything You Need to Know About Thomas Piketty vs. The Financial Times, N.Y. Times (May 30, 2014), http://www.nytimes.com/2014/05/31/upshot/everything-you-need-to-know-about-thomas-piketty-vs-the-financial-times.html.

    [45] Javier Blas, The Fragile Middle: Rising Inequality in Africa Weighs on New Consumers, Fin. Times (Apr. 18, 2014), http://www.ft.com/intl/cms/s/0/49812cde-c566-11e3-89a9-00144feabdc0.html#axzz399nSmEKj.

    [46] Jane Owen, Duke of Grafton Uses R&B to Restore Euston Hall’s Pleasure Grounds, Fin. Times (Apr. 18, 2014, 2:03 PM), http://www.ft.com/intl/cms/s/2/b49f6dd8-c3bc-11e3-870b-00144feabdc0.html#slide0.

    [47] Larry Elliott, Britain’s Five Richest Families Worth More Than Poorest 20%, Guardian, Mar. 16, 2014, http://www.theguardian.com/business/2014/mar/17/oxfam-report-scale-britain-growing-financial-inequality#101.

    [48] Piketty, supra note 4, at 570.

    [49] Margaret Kimberley, Freedom Rider: Miners Shot Down, Black Agenda Report (June 4, 2014), http://www.blackagendareport.com/content/freedom-rider-miners-shot-down.

    [50] Peter Maass, Crude World: The Violent Twilight of Oil (2009); Nicholas Shaxson, Poisoned Wells: The Dirty Politics of African Oil (2008).

    [51] Piketty, supra note 4, at 539.

    [52] Jad Mouawad, Oil Corruption in Equatorial Guinea, N.Y. Times Green Blog (July 9, 2009, 7:01 AM), http://green.blogs.nytimes.com/2009/07/09/oil-corruption-in-equatorial-guinea; Tina Aridas & Valentina Pasquali, Countries with the Highest GDP Average Growth, 2003–2013, Global Fin. (Mar. 7, 2013), http://www.gfmag.com/component/content/article/119-economic-data/12368-countries-highest-gdp-growth.html#axzz2W8zLMznX; CIA, The World Factbook 184 (2007).

    [53] Interview with President Teodoro Obiang of Equatorial Guinea, CNN’s Amanpour (CNN broadcast Oct. 5, 2012), transcript available at http://edition.cnn.com/TRANSCRIPTS/1210/05/ampr.01.html.

    [54] Peter Maass, A Touch of Crude, Mother Jones, Jan. 2005, http://www.motherjones.com/politics/2005/01/obiang-equatorial-guinea-oil-riggs.

    [55] Geraud Magrin & Geert van Vliet, The Use of Oil Revenues in Africa, in Governance of Oil in Africa: Unfinished Business 114 (Jacques Lesourne ed., 2009).

    [56] Interview with President Teodoro Obiang of Equatorial Guinea, supra note 89.

    [57] S. Minority Staff of Permanent Subcomm. on Investigations, Comm. on Gov’t Affairs, 108th Cong., Rep. on Money Laundering and Foreign Corruption: Enforcement and Effectiveness of the Patriot Act 39-40 (Subcomm. Print 2004).

    [58] Henry, supra note 68, at 6, 19-20.

    [59] Frank Pasquale, Closed Circuit Economics, New City Reader, Dec. 3, 2010, at 3, at http://neildonnelly.net/ncr/08_Business/NCR_Business_%5BF%5D_web.pdf.

    [60] Liu Xiaobo, No Enemies, No Hatred 102 (Perry Link, trans., 2012).

    [61] Jesse Drucker, Occupy Wall Street Stylists Pursue U.K. Tax Dodgers, Bloomberg News (June 11, 2013), http://www.businessweek.com/news/2013-06-11/occupy-wall-street-stylists-pursue-u-dot-k-dot-tax-dodgers.

    [62] Daniel J. Mitchell, Tax Havens Should Be Emulated, Not Prosecuted, CATO Inst. (Apr. 13, 2009, 12:36 PM), http://www.cato.org/blog/tax-havens-should-be-emulated-not-prosecuted.

    [63] Janet Novack, Pritzker Family Baggage: Tax Saving Offshore Trusts, Forbes (May 2, 2013, 8:20 PM), http://www.forbes.com/sites/janetnovack/2013/05/02/pritzker-family-baggage-tax-saving-offshore-trusts/.

    [64] Ronen Palan et al., Tax Havens: How Globalization Really Works (2013); see also Carolyn Nordstrom, Global Outlaws: Crime, Money, and Power in the Contemporary World (2007), and Loretta Napoleoni, Rogue Economics (2009).

    [65] Palan et al., supra note 100.

    [66] Shaxson, supra note 86, at 24.

    [67] Arianna Huffington, Third World America: How Our Politicians Are Abandoning the Middle Class and Betraying the American Dream (2011); Jeffrey A. Winters, Oligarchy (2011); Susan B. Crawford, Captive Audience: The Telecom Industry and Monopoly Power in the New Gilded Age (2014).

    [68] Benjamin Kunkel, Paupers and Richlings, 36 London Rev. Books 17 (2014) (reviewing Thomas Piketty, Capital in the Twenty-First Century).

    [69] Jeffrey A. Winters, Oligarchy and Democracy, Am. Interest, Sept. 28, 2011, http://www.the-american-interest.com/articles/2011/9/28/oligarchy-and-democracy/.

    [70] Id.

    [71] James K. Galbraith, The Predator State: How Conservatives Abandoned the Free Market and Why Liberals Should, Too (2009).

    [72] Alex Duval Smith, South Africa Lonmin Mine Massacre Puts Nationalism Back on Agenda, Guardian (Aug. 29, 2012), http://www.theguardian.com/global-development/poverty-matters/2012/aug/29/south-africa-lonmin-mine-massacre-nationalisation; Charlie Campbell, Dying for Some New Clothes: Bangladesh’s Rana Plaza Tragedy, Time (Apr. 26, 2013), http://world.time.com/2013/04/26/dying-for-some-new-clothes-the-tragedy-of-rana-plaza/; David Stuckler, The Body Economic: Why Austerity Kills xiv (2013); Soutik Biswas, India’s Micro-Finance Suicide Epidemic, BBC (Dec. 16, 2010), http://www.bbc.com/news/world-south-asia-11997571; Michael P. O’Donnell, Further Erosion of Our Moral Compass: Failure to Expand Medicaid to Low-Income People in All States, 28 Am. J. Health Promotion iv (2013); Sam Dickman et al., Opting Out of Medicaid Expansion; The Health and Financial Impacts, Health Affairs Blog (Jan. 30, 2014), http://healthaffairs.org/blog/2014/01/30/opting-out-of-medicaid-expansion-the-health-and-financial-impacts/.

    [73] It would be instructive to compare political theorists’ varying models of Tocqueville’s predictive efforts, with Piketty’s sweeping r > g.  See, e.g., Roger Boesche, Why Could Tocqueville Predict So Well?, 11 Political Theory 79 (1983) (“Democracy in America endeavors to demonstrate how language, literature, the relations of masters and servants, the status of women, the family,  property, politics, and so forth, must change and align themselves in a new, symbiotic configuration as a result of the historical thrust toward equality”); Jon Elster, Alexis de Tocqueville:  the First Social Scientist (2012).

    [74] See, e.g., Frank Pasquale, Access to Medicine in an Era of Fractal Inequality, 19 Annals of Health Law 269 (2010); Frank Pasquale, The Cost of Conscience: Quantifying our Charitable Burden in an Era of Globalization, at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=584741 (2004); Frank Pasquale, Diagnosing Finance’s Failures: From Economic Idealism to Lawyerly Realism, 6 India L. J. 2 (2012).

    [75] Malcolm Harris interview of Alex Rivera, Border Control, New Inquiry (July 2, 2012), http://thenewinquiry.com/features/border-control/.

    [76] Trebor Scholz, Digital Labor (Palgrave, forthcoming, 2015); Frank Pasquale, Banana Republic.com, Jotwell (Jan. 14, 2011), http://cyber.jotwell.com/banana-republic-com/.

    [77] The Rise of Micro-Labor, On Point with Tom Ashbrook (NPR Apr. 3, 2012, 10:00 AM), http://onpoint.wbur.org/2012/04/03/micro-labor-websites.

    [78] Vacation Time, On Point with Tom Ashbrook (NPR June 22, 2012, 10:00 AM), http://onpoint.wbur.org/2012/06/22/vacation-time.

    [79] Peter Ryan, Aussies Must Compete with $2 a Day Workers: Rinehart, ABC News (Sept. 25, 2012, 2:56 PM), http://www.abc.net.au/news/2012-09-05/rinehart-says-aussie-workers-overpaid-unproductive/4243866.

    [80] Roberto Patricio Korzeniewicz & Timothy Patrick Moran, Unveiling Inequality, at xv (2012).

    [81] Ha Joon Chang, 23 Things They Don’t Tell You About Capitalism 98 (2012).

    [82] Jason Burke, Over 40% of Indian Children Are Malnourished, Report Finds, Guardian (Jan. 10, 2012), http://www.theguardian.com/world/2012/jan/10/child-malnutrition-india-national-shame.

    [83] Paul Farmer observes that “an understanding of poverty must be linked to efforts to end it.” Farmer, In the Company of the Poor, at http://www.pih.org/blog/in-the-company-of-the-poor.  The same could be said of extreme inequality.

  • All Hitherto Existing Social Media

    All Hitherto Existing Social Media

    Social Media: A Critical Introduction (Sage, 2013)
    a review of Christian Fuchs, Social Media: A Critical Introduction (Sage, 2013)
    by Zachary Loeb
    ~
    Legion are the books and articles describing the social media that has come before. Yet the tracts focusing on Friendster, LiveJournal, or MySpace now appear as throwbacks, nostalgically immortalizing the internet that was and is now gone. On the cusp of the next great amoeba-like expansion of the internet (wearable technology and the “internet of things”), it is a challenging task to analyze social media as a concept while recognizing that the platforms being focused upon—regardless of how permanent they seem—may go the way of Friendster by the end of the month. Granted, social media (and the companies whose monikers act as convenient shorthand for it) is an important topic today. Those living in highly digitized societies can hardly avoid the tendrils of social media (even if a person does not use a particular platform, it may still be tracking them), but this does not mean that any of us fully understand these platforms, let alone have a critical conception of them. It is into this confused and confusing territory that Christian Fuchs steps with his Social Media: A Critical Introduction.

    It is a book ostensibly targeted at students, though when it comes to social media—as Fuchs makes clear—everybody has quite a bit to learn.

    By deploying an analysis couched in Marxist and Critical Theory, Fuchs aims not simply to describe social media as it appears today, but to consider its hidden functions and biases, and along the way to describe what social media could become. The goal of Fuchs’s book is to provide readers—the target audience is students, after all—with the critical tools and proper questions with which to approach social media. While Fuchs devotes much of the book to discussing specific platforms (Google, Facebook, Twitter, WikiLeaks, Wikipedia), these case studies are used to establish a larger theoretical framework which can be applied to social media beyond these examples. Affirming the continued usefulness of Marxist and Frankfurt School critiques, Fuchs defines the aim of his text as being “to engage with the different forms of sociality on the internet in the context of society” (6) and emphasizes that the “critical” questions to be asked are those that “are concerned with questions of power” (7).

    Thus a critical analysis of social media demands a careful accounting of the power structures involved not just in specific platforms, but in the larger society as a whole. So though Fuchs regularly returns to the examples of the Arab Spring and the Occupy Movement, he emphasizes that the narratives that dub these “Twitter revolutions” often come from a rather non-critical and generally pro-capitalist perspective that fails to adequately embed uses of digital technology in their larger contexts.

    Social media is portrayed as an example, like other media, of “techno-social systems” (37) wherein the online platforms may receive the most attention but where the oft-ignored layer of material technologies is equally important. Social media, in Fuchs’s estimation, developed and expanded with the growth of “Web 2.0” and functions as part of the rebranding effort that revitalized (made safe for investments) the internet after the initial dot-com bubble. As Fuchs puts it, “the talk about novelty was aimed at attracting novel capital investments” (33). What makes social media a topic of such interest—and invested with so much hope and dread—is the degree to which social media users are considered as active creators instead of simply consumers of this content (Fuchs follows much recent scholarship and industry marketing in using the term “prosumers” to describe this phenomenon; the term originates from the 1970s business-friendly futurology of Alvin Toffler’s The Third Wave). Social media, in Fuchs’s description, represents a shift in the way that value is generated through labor, and as a result an alteration in the way that large capitalist firms appropriate surplus value from workers. The social media user is not laboring in a factory, but with every tap of the button they are performing work from which value (and profit) is skimmed.

    Without disavowing the hope that social media (and by extension the internet) has liberating potential, Fuchs emphasizes that such hopes often function as a way of hiding profit motives and capitalist ideologies. It is not that social media cannot potentially lead to “participatory democracy” but that “participatory culture” does not necessarily have much to do with democracy. Indeed, as Fuchs humorously notes: “participatory culture is a rather harmless concept mainly created by white boys with toys who love their toys” (58). This “love their toys” sentiment is part of the ideology that undergirds much of the optimism around social media—which allows for complex political occurrences (such as the Arab Spring) to be reduced to events that can be credited to software platforms.

    What Fuchs demonstrates at multiple junctures is the importance of recognizing that the usage of a given communication tool by a social movement does not mean that this tool brought about the movement: intersecting social, political and economic factors are the causes of social movements. In seeking to provide a “critical introduction” to social media, Fuchs rejects arguments that he sees as not suitably critical (including those of Henry Jenkins and Manuel Castells), arguments that at best have been insufficient and at worst have been advertisements masquerading as scholarship.

    Though the time people spend on social media is often portrayed as “fun” or “creative,” Fuchs recasts these tasks as work in order to demonstrate how that time is exploited by the owners of social media platforms. By clicking on links, writing comments, performing web searches, sending tweets, uploading videos, and posting on Facebook, social media users are performing unpaid labor that generates a product (in the form of information about users) that can then be sold to advertisers and data aggregators; this sale generates profits for the platform owner which do not accrue back to the original user. Though social media users are granted “free” access to a service, it is their labor on that platform that makes the platform have any value—Facebook and Twitter would not have a commodity to sell to advertisers if they did not have millions of users working for them for free. As Fuchs describes it, “the outsourcing of work to consumers is a general tendency of contemporary capitalism” (111).

    screen shot of a Karl Marx Community Page on Facebook

    While miners of raw materials and workers in assembly plants are still brutally exploited—and this unseen exploitation forms a critical part of the economic base of computer technology—the exploitation of social media users is given a gloss of “fun” and “creativity.” Fuchs does not suggest that social media use is fully akin to working in a factory, but that users carry the factory with them at all times (a smart phone, for example) and are creating surplus value as long as they are interacting with social media. Instead of being a post-work utopia, Fuchs emphasizes that “the existence of the internet in its current dominant capitalist form is based on various forms of labour” (121) and the enrichment of internet firms is reliant upon the exploitation of those various forms of labor—central amongst these being the social media user.

    Fuchs considers five specific platforms in detail so as to illustrate not simply the current state of affairs but also to point towards possible alternatives. Fuchs analyzes Google, Facebook, Twitter, WikiLeaks and Wikipedia as case studies of trends to encourage and trends of which to take wary notice. In his analysis of the three corporate platforms (Google, Facebook and Twitter) Fuchs emphasizes the ways in which these social media companies (and the moguls who run them) have become wealthy and powerful by extracting value from the labor of users and by subjecting users to constant surveillance. The corporate platforms give Fuchs the opportunity to consider various social media issues in sharper relief: labor and monopolization in the case of Google, surveillance and privacy issues with Facebook, and the potential for an online public sphere with Twitter. Despite his criticisms, Fuchs does not dismiss the value and utility of what these platforms offer, as is captured in his claim that “Google is at the same time the best and the worst thing that has ever happened on the internet” (147). The corporate platforms’ successes are owed at least partly to their delivering desirable functions to users. The corrective for which Fuchs argues is increased democratic control of these platforms—for the labor to be compensated and for privacy to pertain to individual humans instead of to businesses’ proprietary methods of control. Indeed, one cannot get far with a “participatory culture” unless there is a similarly robust “participatory democracy,” and part of Fuchs’s goal is to show that these are not at all the same.

    WikiLeaks and Wikipedia both serve as real examples that demonstrate the potential of an “alternative” internet for Fuchs. Though these Wiki platforms are not ideal they contain within themselves the seeds for their own adaptive development (“WikiLeaks is its own alternative”—232), and serve for Fuchs as proof that the internet can move in a direction akin to a “commons.” As Fuchs puts it, “the primary political task for concerned citizens should therefore be to resist the commodification of everything and to strive for democratizing the economy and the internet” (248), a goal he sees as at least partly realized in Wikipedia.

    While the outlines of the internet’s future may seem to have been written already, Fuchs’s book is an argument in favor of the view that the code can still be altered. A different future relies upon confronting the reality of the online world as it currently is and recognizing that the battles waged for control of the internet are proxy battles in the conflict between capitalism and an alternative approach. In the conclusion of the book Fuchs eloquently condenses his view and the argument that follows from it in two simple sentences: “A just society is a classless society. A just internet is a classless internet” (257). It is a sentiment likely to spark an invigorating discussion, be it in a classroom, at a kitchen table, or in a café.

    * * *

    While Social Media: A Critical Introduction is clearly intended as a textbook (each chapter ends with a “recommended readings and exercises” section), it is written in an impassioned and engaging style that will appeal to anyone who would like to see a critical gaze turned towards social media. Fuchs structures his book so that his arguments will remain relevant even if some of the platforms about which he writes vanish. Even the chapters in which Fuchs focuses on a specific platform are filled with larger arguments that transcend that platform. Indeed, one of the primary strengths of Social Media is that Fuchs skillfully uses the familiar examples of social media platforms as a way of introducing the reader to complex theories and thinkers (from Marx to Habermas).

    Whereas Fuchs accuses some other scholars of subtly hiding their ideological agendas, no such argument can be made regarding Fuchs himself. Social Media is a Marxist critique of the major online platforms—not simply because Fuchs deploys Marx (and other Marxist theorists) to construct his arguments, but because of his assumption that the desirable alternative for the internet is part and parcel of a desirable alternative to capitalism. Such a sentiment can be found at several points throughout the book, but is made particularly evident by lines such as these from the book’s conclusion: “There seem to be only two options today: (a) continuance and intensification of the 200-year-old barbarity of capitalism or (b) socialism” (259)—it is a rather stark choice. It is precisely due to Fuchs’s willingness to stake out, and stick to, such political positions that this text is so effective.

    And yet, it is the very allegiance to such positions that also presents something of a problem. While much has been written of late—in the popular press as well as by scholars—regarding issues of privacy and surveillance, Fuchs’s arguments about the need to consider users as exploited workers will likely strike many readers as new, and thus worthwhile in their novelty if nothing else. Granted, to fully go along with Fuchs’s critique requires readers to already be in agreement with, or at least relatively sympathetic to, Fuchs’s political and ethical positions. This is particularly true as Fuchs excels at making an argument about media and technology, but devotes significantly fewer pages to ethical argumentation.

    The lines (quoted earlier) “A just society is a classless society. A just internet is a classless internet” (257) serve as much as a provocation as a conclusion. For those who subscribe to a similar notion of “a just society,” Fuchs’s book will likely function as an important guide to thinking about the internet; however, to those whose vision of “a just society” is fundamentally different from his, Fuchs’s book may be less than convincing. Social Media does not present a complete argument about how one defines a “just society.” Indeed, the danger may be that Fuchs’s statements in praise of a “classless society” may lead some to dismiss his arguments regarding the way in which the internet has replicated a “class society.” Likewise, it is easy to imagine a retort being offered that the new platforms of “the sharing economy” represent the birth of this “classless society” (though it is easy to imagine Fuchs pointing out, as have other critics from the left, that the “sharing economy” is simply more advertising lingo being used to hide the same old capitalist relations). This represents something of a peculiar challenge when it comes to Social Media, as the political commitment of the book is simultaneously what makes it so effective and that which threatens the book’s potential political efficacy.

    Thus Social Media presents something of a conundrum: how effective is a critical introduction if its conclusion offers a heads-or-tails choice between “barbarity of capitalism or…socialism”? Such a choice feels slightly as though Fuchs is begging the question. While it is curious that Fuchs does not draw upon critical theorists’ writings about the culture industry, the main issues with Social Media seem to be reflections of this black-and-white choice. It is thus something of a missed chance that Fuchs does not draw upon some of the more serious critics of technology (such as Ellul or Mumford)—whose hard-edged skepticism would nevertheless likely not accept Fuchs’s Marxist orientation. Such thinkers might provide a very different perspective on the choice between “capitalism” and “socialism”—arguing that “technique” or “the megamachine” can function quite effectively in either. Though Fuchs draws heavily upon thinkers in the Marxist tradition, it may be that another set of insights and critiques might have been gained by bringing in other critics of technology (Hans Jonas, Peter Kropotkin, Albert Borgmann)—especially as some of these thinkers had warned that Marxism may overvalue the technological as much as capitalism does. This is not to argue in favor of any of these particular theorists, but to suggest that Fuchs’s claims would have been strengthened by devoting more time to considering the views of those who were critical of technology, capitalism, and Marxism. Social Media does an excellent job of confronting the ideological forces on its right flank; it could have benefited from at least acknowledging the critics to its left.

    Two other areas that remain somewhat troubling are in regards to Fuchs’s treatment of Wiki platforms and of the materiality of technology. The optimism with which Fuchs approaches WikiLeaks and Wikipedia is understandable given the dourness with which he approaches the corporate platforms, and yet his hopes for them seem somewhat exaggerated. Fuchs claims “Wikipedians are prototypical contemporary communists” (243), partially to suggest that many people are already engaged in commons based online activities, and yet it is an argument that he simultaneously undermines by admitting (importantly) the fact that Wikipedia’s editor base is hardly representative of all of the platform’s users (it’s back to the “white boys with toys who love their toys”), and some have alleged that putatively structureless models of organization like Wikipedia’s actually encourage oligarchical forms of order. And that is to say nothing of the role that editing “bots” play on the platform or the degree to which Wikipedia is reliant upon corporate platforms (like Google) for promotion. Similarly, without ignoring its value, the example of WikiLeaks seems odd at a moment when the organization seems primarily engaged in a rearguard self-defense whilst the leaks that have generated the most interest of late have been made to journalists at traditional news sources (Edward Snowden’s leaks to Glenn Greenwald, who was writing for The Guardian when the leaks began).

    The further challenge—and this is one that Fuchs is not alone in contending with—is the trouble posed by the materiality of technology. An important aspect of Social Media is that Fuchs considers the often-unseen exploitation and repression upon which the internet relies: miners, laborers who build devices, those who recycle or live among toxic e-waste. Yet these workers seem to disappear from the arguments in the later part of the book, which in turn raises the following question: even if every social media platform were to be transformed into a non-profit commons-based platform that resists surveillance, manipulation, and the exploitation of its users, is such a platform genuinely just if to use it one must rely on devices whose minerals were mined in warzones, assembled in sweatshops, and which will eventually go to an early grave in a toxic dump? What good is a “classless (digital) society” without a “classless world”? Perhaps the question of a “capitalist internet” is itself a distraction from the fact that the “capitalist internet” is what one gets from capitalist technology. Granted, given Fuchs’s larger argument it may be fair to infer that he would portray “capitalist technology” as part of the problem. Yet, if the statement “a just society is a classless society” is to be genuinely meaningful, then this must extend not just to those who use a social media platform but to all of those involved, from the miner to the manufacturer to the programmer to the user to the recycler. To pose the matter as a question, can there be participatory (digital) democracy that relies on serious exploitation of labor and resources?

    Social Media: A Critical Introduction provides exactly what its title promises—a critical introduction. Fuchs has constructed an engaging and interesting text that shows the continuing validity of older theories and skillfully demonstrates the way in which the seeming newness of the internet is itself simply a new face on an old system. While Fuchs’s argument resolutely holds its position, it comes from a stance that one does not encounter often enough in debates around social media, and it will provide readers with a range of new questions with which to wrestle.

    It remains unclear in what ways social media will develop in the future, but Christian Fuchs’s book will be an important tool for interpreting these changes—even if what is in store is more “barbarity.”
    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian” Loeb writes at the blog librarianshipwreck. He previously reviewed The People’s Platform by Astra Taylor for boundary2.org.
    Back to the essay

  • From the Decision to the Digital

    From the Decision to the Digital

    Laruelle: Against the Digital

    a review of Alexander R. Galloway, Laruelle: Against the Digital

    by Andrew Culp

    ~
    Alexander R. Galloway’s forthcoming Laruelle: Against the Digital is a welcome and original entry in the discussion of French theorist François Laruelle’s thought. The book is at once both pedagogical and creative: it succinctly summarizes important aspects of Laruelle’s substantial oeuvre by placing his thought within the more familiar terrain of popular philosophies of difference (most notably the work of Gilles Deleuze and Alain Badiou) and creatively extends Laruelle’s work through a series of fourteen axioms.

    The book is a bridge between current Anglophone scholarship on Laruelle, which largely treats Laruelle’s non-standard philosophy through an extension of problematics common to contemporary continental philosophy (Mullarkey 2006, Mullarkey and Smith 2012, Smith 2013, Gangle 2013, Kolozova 2014), and such scholarship’s maturation, which blazes new territory because it takes thought to be “an exercise in perpetual innovation” (Brassier 2003, 25). As such, Laruelle: Against the Digital stands out from other scholarship in that it is not primarily a work of exposition or application of the axioms laid out by Laruelle. This approach is apparent from the beginning, where Galloway declares that he is not a foot soldier in Laruelle’s army and that he does not proceed by way of Laruelle’s “non-philosophical” method (a method so thoroughly abstract that Laruelle appears to be the inheritor of French rationalism, though in his terminology, philosophy should remain only as “raw material” to carry thinking beyond philosophy’s image of thought). The significance of Galloway’s Laruelle is that he instead produces his own axioms, which follow from non-philosophy but are of his own design, and takes aim at a different target: the digital.

    The Laruellian Kernel

    Are philosophers no better than creationists? Philosophers may claim to hate irrationalist leaps of faith, but Laruelle locates such leaps precisely in philosophers’ own narcissistic origin stories. This argument follows from Chapter One of Galloway’s Laruelle, which outlines how all philosophy begins with the world as ‘fact.’ For example: the atomists begin with change, Kant with empirical judgment, and Fichte with the principle of identity. And because facts do not speak for themselves, philosophy elects for itself a second task — after establishing what ‘is’ — inventing a form of thought to reflect on the world. Philosophy thus arises out of a brash entitlement: the world exists to be thought. Galloway reminds us of this through Gottfried Leibniz, who tells us that “everything in the world happens for a specific reason” (and it is the job of philosophers to identify it), and Alfred North Whitehead, who alternatively says, “no actual entity, then no reason” (so it is up to philosophers to find one).

    For Laruelle, various philosophies are but variations on a single approach that first begins by positing how the world presents itself, and second determines the mode of thought that is the appropriate response. Between the two halves, Laruelle finds a grand division: appearance/presence, essence/instance, Being/beings. Laruelle’s key claim is that philosophy cannot think the division itself. The consequence is that such a division is tantamount to cheating, as it wills thought into being through an original thoughtless act. This act of thoughtlessly splitting the world in half is what Laruelle calls “the philosophical decision.”

    Philosophy need not wait for Laruelle to be demoted, as it has already done this for itself; no longer the queen of the sciences, philosophy seems superfluous to the most harrowing realities of contemporary life. The recent focus on Laruelle did indeed come from a reinvigoration of philosophy that goes under the name ‘speculative realism.’ Certainly there are affinities between Laruelle and these philosophers — the early case was built by Ray Brassier, who emphasizes that Laruelle earnestly adopts an anti-correlationist position similar to the one suggested by Quentin Meillassoux and distances himself from postmodern constructivism as much as other realists, all by positing the One as the Real. It is on the issue of philosophy, however, that Laruelle is most at odds with the irascible thinkers of speculative realism, for non-philosophy is not a revolt against philosophy nor is it a patronizing correction of how others see reality. 1 Galloway argues that non-philosophy should be considered materialist. He attributes to Laruelle a mix of empiricism, realism, and materialism but qualifies non-philosophy’s approach to the real as not a matter of the givenness of empirical reality but of lived experience (vécu) (Galloway, Laruelle, 24-25). The point of non-philosophy is to withdraw from philosophy by short-circuiting the attempt to reflect on what supposedly exists. To be clear: such withdrawal is not an anti-philosophy. Non-philosophy suspends philosophy, but also raids it for its own rigorous pursuit: an axiomatic investigation of the generic. 2

    From Decision to Digital

    A sharp focus on the concept of “the digital” is Galloway’s main contribution — a concept not in the forefront of Laruelle’s work, but of great interest to all of us today. Drawing from non-philosophy’s basic insight, Galloway’s goal in Laruelle is to demonstrate the “special connection” shared by philosophy and the digital (15). Galloway asks his readers to consider a withdrawal from digitality that is parallel to the non-philosophical withdrawal from philosophy.

    Just as Laruelle discovered the original division to which philosophy must remain silent, Galloway finds that the digital is the “basic distinction that makes it possible to make any distinction at all” (Laruelle, 26). Certainly the digital-analog opposition survives this reworking, but not as one might assume. Gone are the usual notions of online-offline, new-old, stepwise-continuous variation, etc. To maintain these definitions presupposes the digital, or as Galloway defines it, “the capacity to divide things and make distinctions between them” (26). Non-philosophy’s analogy for the digital thus becomes the processes of distinction and decision themselves.

    The dialectic is where Galloway provocatively traces the history of digitality. This is because he argues that digitality is “not so much 0 and 1” but “1 and 2” (Galloway, Laruelle, 26). Drawing on Marxist definitions of the dialectical process, he defines the movement from one to two as analysis, while the movement from two to one is synthesis (26-27). In this way, Laruelle can say that “Hegel is dead, but he lives on inside the electric calculator” (Introduction aux sciences génériques, 28, qtd in Galloway, Laruelle, 32). Playing Badiou and Deleuze off of each other, as he does throughout the book, Galloway subsequently outlines the political stakes between them — with Badiou establishing clear reference points through the argument that analysis is for leftists and synthesis for reactionaries, and Deleuze as a progenitor of non-philosophy still too tied to the world of difference but shrewd enough to have a Spinozist distaste for both movements of the dialectic (Laruelle, 27-30). Galloway looks to Laruelle to get beyond Badiou’s analytic leftism and Deleuze’s “Spinozist grand compromise” (30). His proposal is a withdrawal in the name of indecision that demands abstention from digitality’s attempt to “encode and simulate anything whatsoever in the universe” (31).

    Insufficiency

    Insufficiency is the idea into which Galloway sharpens the stakes of non-philosophy. In doing so, he does to Laruelle what Deleuze does to Spinoza. While Deleuze refashions philosophy into the pursuit of adequate knowledge, the eminently practical task of understanding the conditions of chance encounters enough to gain the capacity to influence them, Galloway makes non-philosophy into the labor of inadequacy, a mode of thought that embraces the event of creation through a withdrawal from decision. If Deleuze turns Spinoza into a pragmatist, then Galloway turns Laruelle into a nihilist.

    There are echoes of Massimo Cacciari, Giorgio Agamben, and Afro-pessimism in Galloway’s Laruelle. This is because he uses nihilism’s marriage of withdrawal, opacity, and darkness as his orientation to politics, ethics, and aesthetics. From Cacciari, Galloway borrows a politics of non-compromise. But while the Italian Autonomist Marxist milieu of which Cacciari’s negative thought is characteristic emphasizes subjectivity, non-philosophy takes the subject to be one of philosophy’s dirty sins and makes no place for it. Yet Galloway is not shy about bringing up examples, such as Bartleby, Occupy, and other figures of non-action. Though as in Agamben, Galloway’s figures only gain significance in their insufficiency. “The more I am anonymous, the more I am present,” Galloway repeats from Tiqqun to axiomatically argue the centrality of opacity (233-236). There is also a strange affinity between Galloway and Afro-pessimists, who both oppose the integrationist tendencies of representational systems ultimately premised on the exclusion, exploitation, and elimination of blackness. In spite of potential differences, they both define blackness as absolute foreclosure to being, from which Galloway is determined to “channel that great saint of radical blackness, Toussaint Louverture,” in order to bring about a “cataclysm of human color” through the “blanket totality of black” that “renders color invalid” and brings about “a new uchromia, a new color utopia rooted in the generic black universe” (188-189). What remains an open question is: how does such a formulation of the generic depart from the philosophy of difference’s becoming-minor, whereby liberation must first pass through the figures of the woman, the fugitive, and the foreigner?

    Actually Existing Digitality

    One could read Laruelle not as urging thought to become more practical, but to become less so. Evidence for such a claim comes in his retreat to dense abstract writing and a strong insistence against providing examples. Each is an effect of non-philosophy’s approach, which is both rigorous and generic. Some object, perhaps justifiably, that Laruelle takes too many stylistic liberties with his prose; most considerations tend to make up for such flights of fancy by putting non-philosophy in communication with more familiar philosophies of difference (Mullarkey 2006; Kolozova 2014). Yet the strangeness of the non-philosophical method is not a stylistic choice intended to encourage reflection. Non-philosophy is quite explicitly not a philosophy of difference — Laruelle’s landmark Philosophies of Difference is an indictment of Hegel, Heidegger, Nietzsche, Derrida, and Deleuze. To this end, non-philosophy does not seek to promote thought through marginality, Otherness, or any other form of alterity.

    Readers who have heretofore been frustrated with non-philosophy’s impenetrability may be more attracted to the second part of Galloway’s Laruelle. In part two, Galloway addresses actually existing digitality, such as computers and capitalism. This part also includes a contribution to the ethical turn, which is premised on a geometrically neat set of axioms whereby ethics is the One and politics is the division of the One into two. He develops each chapter through numerous examples, many of them concrete, that help fold non-philosophical terms into discussions with long-established significance. For instance, Galloway makes his way through a chapter on art and utopia with the help of James Turrell’s light art, Laruelle’s Concept of Non-Photography, and August von Briesen’s automatic drawing (194-218). The book is over three hundred pages long, so most readers will probably appreciate the brevity of many of the chapters in part two. The chapters are short enough to be impressionistic while implying that treatments as fully rigorous as non-philosophy demands would run much longer.

    Questions

    While his diagrammatical thinking is very clear, during Galloway’s philosophical expositions I find it more difficult to determine whether he is embracing or criticizing a concept. The difficulty of such determinations is compounded by the ambivalence of the non-philosophical method, which adopts philosophy as its raw material while simultaneously declaring that philosophical concepts are insufficient. My second fear is that while Galloway is quite adept at wielding his reworked concept of ‘the digital,’ his own trademark rigor may be lost when taken up by less judicious scholars. In particular, his attack on digitality could form the footnote for a disingenuous defense of everything analog.

    There is also something deeper at stake: What if we are in the age of non-representation? From the modernists to Rancière and Occupy, we have copious examples of non-representational aesthetics and politics. But perhaps all previous philosophy has only gestured at non-representational thought, and non-philosophy is the first to realize this goal. If so, then a fundamental objection could be raised about both Galloway’s Laruelle and non-philosophy in general: is non-philosophy properly non-thinking or is it just plain not thinking? Galloway’s axiomatic approach is a refreshing counterpoint to Laruelle’s routine circumlocution. Yet a number of the key concepts that non-philosophy provides are still frustratingly elusive. Unlike the targets of Laruelle’s criticism, Derrida and Deleuze, non-philosophy strives to avoid the obscuring effects of aporia and paradox — so is its own use of opacity simply playing coy, or to be understood purely as a statement that the emperor has no clothes? While I am intrigued by anexact concepts such as ‘the prevent,’ and I understand the basic critique of the standard model of philosophy, I am still not sure what non-philosophy does. Perhaps that is an unfair question given the sterility of the One. But as Hardt and Negri remind us in the epigraph to Empire, “every tool is a weapon if you hold it right.” We now know that non-philosophy cuts — what remains to be seen is where and how deeply.
    _____

    Andrew Culp is a Visiting Assistant Professor of Rhetoric Studies at Whitman College. He specializes in cultural-communicative theories of power, the politics of emerging media, and gendered responses to urbanization. In his current project, Escape, he explores the apathy, distraction, and cultural exhaustion born from the 24/7 demands of an ‘always-on’ media-driven society. His work has appeared in Radical Philosophy, Angelaki, Affinities, and other venues.

    _____

    Notes

    1. There are two qualifications worth mentioning: first, Laruelle presents non-philosophy as a scientific enterprise. There is little proximity between non-philosophy’s scientific approach and other sciences, such as techno-science, big science, scientific modernity, modern rationality, or the scientific method. Perhaps it is closest to Althusser’s science, but some more detailed specification of this point would be welcome.
    Back to the essay

    2. Galloway lays out the non-philosophy of generic immanence, The One, in Chapter Two of Laruelle. Though important, a summation of Laruelle’s version of immanence is not Galloway’s main contribution, and it is thus not the focus of this review. Substantial summaries of this sort are already available, including Mullarkey 2006 and Smith 2013.
    Back to the essay

    Bibliography

    Brassier, Ray (2003) “Axiomatic Heresy: The Non-Philosophy of François Laruelle,” Radical Philosophy 121.
    Gangle, Rocco (2013) François Laruelle’s Philosophies of Difference (Edinburgh, UK: Edinburgh University Press).
    Hardt, Michael and Antonio Negri (2000) Empire (Cambridge, MA: Harvard University Press).
    Kolozova, Katerina (2014) Cut of the Real (New York, USA: Columbia University Press).
    Laruelle, François (2010/1986) Philosophies of Difference (London, UK and New York, USA: Continuum).
    Laruelle, François (2011) Concept of Non-Photography (Falmouth, UK: Urbanomic).
    Mullarkey, John (2006) Post-Continental Philosophy (London, UK: Continuum).
    Mullarkey, John and Anthony Paul Smith (eds) (2012) Laruelle and Non-Philosophy (Edinburgh, UK: Edinburgh University Press).
    Smith, Anthony Paul (2013) A Non-Philosophical Theory of Nature (New York, USA: Palgrave Macmillan).

  • The Eversion of the Digital Humanities

    The Eversion of the Digital Humanities

    by Brian Lennon

    on The Emergence of the Digital Humanities by Steven E. Jones

    1

    Steven E. Jones begins his Introduction to The Emergence of the Digital Humanities (Routledge, 2014) with an anecdote concerning a speaking engagement at the Illinois Institute of Technology in Chicago. “[M]y hosts from the Humanities department,” Jones tells us,

    had also arranged for me to drop in to see the fabrication and rapid-prototyping lab, the Idea Shop at the University Technology Park. In one empty room we looked into, with schematic drawings on the walls, a large tabletop machine jumped to life and began whirring, as an arm with a router moved into position. A minute later, a student emerged from an adjacent room and adjusted something on the keyboard and monitor attached by an extension arm to the frame for the router, then examined an intricately milled block of wood on the table. Next door, someone was demonstrating finely machined parts in various materials, but mostly plastic, wheels within bearings, for example, hot off the 3D printer….

    What exactly, again, was my interest as a humanist in taking this tour, one of my hosts politely asked?1

    It is left almost entirely to more or less clear implication, here, that Jones’s humanities department hosts had arranged the expedition at his request, and mainly or even only to oblige a visitor’s unusual curiosity, which we are encouraged to believe his hosts (if “politely”) found mystifying. Any reader of this book must ask herself, first, if she believes this can really have occurred as reported: and if the answer to that question is yes, if such a genuinely unlikely and unusual scenario — the presumably full-time, salaried employees of an Institute of Technology left baffled by a visitor’s remarkable curiosity about their employer’s very raison d’être — warrants any generalization at all. For that is how Jones proceeds: by generalization, first of all from a strained and improbably dramatic attempt at defamiliarization, in the apparent confidence that this anecdote illuminating the spirit of the digital humanities will charm — whom, exactly?

    It must be said that Jones’s history of “digital humanities” is refreshingly direct and initially, at least, free of obfuscation, linking the emergence of what it denotes to events in roughly the decade preceding the book’s publication, though his reading of those events is tendentious. It was the “chastened” retrenchment after the dot-com bubble in 2000, Jones suggests (rather, just for example, than the bubble’s continued inflation by other means) that produced the modesty of companies like our beloved Facebook and Twitter, along with their modest social networking platform-products, as well as the profound modesty of Google Inc. initiatives like Google Books (“a development of particular interest to humanists,” we are told2) and Google Maps. Jones is clearer-headed when it comes to the disciplinary history of “digital humanities” as a rebaptism of humanities computing and thus — though he doesn’t put it this way — a catachrestic asseveration of traditional (imperial-nationalist) philology like its predecessor:

    It’s my premise that what sets DH apart from other forms of media studies, say, or other approaches to the cultural theory of computing, ultimately comes through its roots in (often text-based) humanities computing, which always had a kind of mixed-reality focus on physical artifacts and archives.3

    Jones is also clear-headed on the usage history of “digital humanities” as a phrase in the English language, linking it to moments of consolidation marked by Blackwell’s Companion to Digital Humanities, the establishment of the National Endowment for the Humanities Office for the Digital Humanities, and higher-education journalism covering the annual Modern Language Association of America conventions. It is perhaps this sensitivity to “digital humanities” as a phrase whose roots lie not in original scholarship or cultural criticism itself (as was still the case with “deconstruction” or “postmodernism,” even at their most shopworn) but in the dependent, even parasitic domains of reference publishing, grant-making, and journalism that leads Jones to declare “digital humanities” a “fork” of humanities computing, rather than a Kuhnian paradigm shift marking otherwise insoluble structural conflict in an intellectual discipline.

    At least at first. Having suggested it, Jones then discards the metaphor drawn from the tree structures of software version control, turning to “another set of metaphors” describing the digital humanities as having emerged not “out of the primordial soup” but “into the spotlight” (Jones, 5). We are left to guess at the provenance of this second metaphor, but its purpose is clear: to construe the digital humanities, both phenomenally and phenomenologically, as the product of a “shift in focus, driven […] by a new set of contexts, generating attention to a range of new activities” (5).

    Change; shift; new, new, new. Not a branch or a fork, not even a trunk: we’re now in the ecoverse of history and historical time, in its collision with the present. The appearance and circulation of the English-language phrase “digital humanities” can be documented — that is one of the things that professors of English like Jones do especially well, when they care to. But “changes in the culture,” much more broadly, within only the last ten years or so? No scholar in any discipline is particularly well trained, well positioned, or even well suited to diagnosing those; and scholars in English studies won’t be at the top of anyone’s list. Indeed, Jones very quickly appeals to “author William Gibson” for help, settling on the emergence of the digital humanities as a response to what Gibson called “the eversion of cyberspace,” in its ostensibly post-panopticist colonization of the physical world.4 It makes for a rather inarticulate and self-deflating statement of argument, in which on its first appearance eversion, ambiguously, appears to denote the response as much as its condition or object:

    My thesis is simple: I think that the cultural response to changes in technology, the eversion, provides an essential context for understanding the emergence of DH as a new field of study in the new millennium.5

    Jones offers weak support for the grandiose claim that “we can roughly date the watershed moment when the preponderant collective perception changed to 2004–2008″ (21). Second Life “peaked,” we are told, while World of Warcraft “was taking off”; Nintendo introduced the Wii; then Facebook “came into its own,” and was joined by Twitter and Foursquare, then Apple’s iPhone. Even then (and setting aside the question of whether such benchmarking is acceptable evidence), for the most part Jones’s argument, such as it is, is that something is happening because we are talking about something happening.

    But who are we? Jones’s is the typical deference of the scholar to the creative artist, unwilling to challenge the latter’s utter dependence on meme engineering, at least where someone like Gibson is concerned; and Jones’s subsequent turn to the work of a scholar like N. Katherine Hayles on the history of cybernetics comes too late to amend the impression that the order of things here is marked first by gadgets, memes, and conversations about gadgets and memes, and only subsequently by ideas and arguments about ideas. The generally unflattering company among whom Hayles is placed (Clay Shirky, Nathan Jurgenson) does little to move us out of the shallows, and Jones’s profoundly limited range of literary reference, even within a profoundly narrowed frame — it’s Gibson, Gibson, Gibson all the time, with the usual cameos by Bruce Sterling and Neal Stephenson — doesn’t help either.

    Jones does have one problem with the digital humanities: it ignores games. “My own interest in games met with resistance from some anonymous peer reviewers for the program for the DH 2013 conference, for example,” he tells us (33). “[T]he digital humanities, at least in some quarters, has been somewhat slow to embrace the study of games” (59). “The digital humanities could do worse than look to games” (36). And so on: there is genuine resentment here.

    But nobody wants to give a hater a slice of the pie, and a Roman peace mandates that such resentment be sublated if it is to be, as we say, taken seriously. And so in a magical resolution of that tension, the digital humanities turns out to be constituted by what it accidentally ignores or actively rejects, in this case — a solution that sweeps antagonism under the rug as we do in any other proper family. “[C]omputer-based video games embody procedures and structures that speak to the fundamental concerns of the digital humanities” (33). “Contemporary video games offer vital examples of digital humanities in practice” (59). If gaming “sounds like what I’ve been describing as the agenda of the digital humanities, it’s no accident” (144).

    Some will applaud Jones’s niceness on this count. It may strike others as desperately friendly, a lingering under a big tent as provisional as any other tent, someday to be replaced by a building, if not by nothing. Few of us will deny recognition to Second Life, World of Warcraft, Wii, Facebook, Twitter, etc. as cultural presences, at least for now. But Jones’s book is also marked by slighter and less sensibly chosen benchmarks, less sensibly chosen because Jones’s treatment of them, in a book whose ambition is to preach to the choir, simply imputes their cultural presence. Such brute force argument drives the pathos that Jones surely feels, as a scholar — in the recognition that among modern institutions, it is only scholarship and the law that preserve any memory at all — into a kind of melancholic unconscious, from whence his objects return to embarrass him. “[A]s I write this,” we read, “QR codes show no signs yet of fading away” (41). Quod erat demonstrandum.

    And it is just there, in such a melancholic unconscious, that the triumphalism of the book’s title, and the “emergence of the digital humanities” that it purports to mark, claim, or force into recognition, straightforwardly gives itself away. For the digital humanities will pass away, and rather than being absorbed into the current order of things, as digital humanities enthusiasts like to believe happened to “high theory” (it didn’t happen), the digital humanities seems more likely, at this point, to end as a blank anachronism, overwritten by the next conjuncture in line with its own critical mass of prognostications.

    2

    To be sure, who could deny the fact of significant “changes in the culture” since 2000, in the United States at least, and at regular intervals: 2001, 2008, 2013…? Warfare — military in character, but when that won’t do, economic; of any interval, but especially when prolonged and deliberately open-ended; of any intensity, but especially when flagrantly extrajudicial and opportunistically, indeed sadistically asymmetrical — will do that to you. No one who sets out to historicize the historical present can afford to ignore the facts of present history, at the very least — but the fact is that Jones finds such facts unworthy of comment, and in that sense, for all its pretense to worldliness, The Emergence of the Digital Humanities is an entirely typical product of the so-called ivory tower, wherein arcane and plain speech alike are crafted to euphemize and thus redirect and defuse the conflicts of the university with other social institutions, especially those other institutions who command the university to do this or do that. To take the ambiguity of Jones’s thesis statement (as quoted above) at its word: what if the cultural response that Jones asks us to imagine, here, is indeed and itself the “eversion” of the digital humanities, in one of the metaphorical senses he doesn’t quite consider: an autotomy or self-amputation that, as McLuhan so enjoyed suggesting in so many different ways, serves to deflect the fact of the world as a whole?

    There are few moments of outright ignorance in The Emergence of the Digital Humanities — how could there be, in the security of such a narrow channel?6 Still, pace Jones’s basic assumption here (it is not quite an argument), we might understand the emergence of the digital humanities as the emergence of a conversation that is not about something — cultural change, etc. — as much as it is an attempt to avoid conversing about something: to avoid discussing such cultural change in its most salient and obvious flesh-and-concrete manifestations. “DH is, of course, a socially constructed phenomenon,” Jones tells us (7) — yet “the social,” here, is limited to what Jones himself selects, and selectively indeed. “This is not a question of technological determinism,” he insists. “It’s a matter of recognizing that DH emerged, not in isolation, but as part of larger changes in the culture at large and that culture’s technological infrastructure” (8). Yet the largeness of those larger changes is smaller than any truly reasonable reader, reading any history of the past decade, might have reason to expect. How pleasant that such historical change was “intertwined with culture, creativity, and commerce” (8) — not brutality, bootlicking, and bank fraud. Not even the modest and rather opportunistic gloom of Gibson’s 2010 New York Times op-ed entitled “Google’s Earth” finds its way into Jones’s discourse, despite the extended treatment that Gibson’s “eversion” gets here.

    From our most ostensibly traditional scholarly colleagues, toiling away in their genuine and genuinely book-dusty modesty, we don’t expect much respect for the present moment (which is why they often surprise us). But The Emergence of the Digital Humanities is, at least in ambition, a book about cultural change over the last decade. And such historiographic elision is substantive — enough so to warrant impatient response. While one might not want to say that nothing good can have emerged from the cultural change of the period in question, it would be infantile to deny that conditions have been unpropitious in the extreme, possibly as unpropitious as they have ever been, in U.S. postwar history — and that claims for the value of what emerges into institutionality and institutionalization, under such conditions, deserve extra care and, indeed, defense in advance, if one wants not to invite a reasonably caustic skepticism.

    When Jones does engage in such defense, it is weakly argued. To construe the emergence of the digital humanities as non-meaninglessly concurrent with the emergence of yet another wave of mass educational automation (in the MOOC hype that crested in 2013), for example, is wrong not because Jones can demonstrate that their concurrence is the concurrence of two entirely segregated genealogies — one rooted in Silicon Valley ideology and product marketing, say, and one utterly and completely uncaused and untouched by it — but because to observe their concurrence is “particularly galling” to many self-identified DH practitioners (11). Well, excuse me for galling you! “DH practitioners I know,” Jones informs us, “are well aware of [the] complications and complicities” of emergence in an age of precarious labor, “and they’re often busy answering, complicating, and resisting such opportunistic and simplistic views” (10). Argumentative non sequitur aside, that sounds like a lot of work undertaken in self-defense — more than anyone really ought to have to do, if they’re near to the right side of history. Finally, “those outside DH,” Jones opines in an attempt at counter-critique, “often underestimate the theoretical sophistication of many in computing,” who “know better than many of their humanist critics that their science is provisional and contingent” (10): a statement that will only earn Jones super-demerits from those of such humanist critics — they are more numerous than the likes of Jones ever seem to suspect — who came to the humanities with scientific and/or technical aptitudes, sometimes with extensive educational and/or professional training and experience, and whose “sometimes world-weary and condescending skepticism” (10) is sometimes very well-informed and well-justified indeed, and certain to outlive Jones’s winded jabs at it.

    Jones is especially clumsy in confronting the charge that the digital humanities is marked by a forgetting or evasion of the commitment to cultural criticism foregrounded by other, older and now explicitly competing formations, like so-called new media studies. Citing the suggestion by “media scholar Nick Montfort” that “work in the digital humanities is usually considered to be the digitization and analysis of pre-digital cultural artifacts, not the investigation of contemporary computational media,” Jones remarks that “Montfort’s own work […] seems to me to belie the distinction,”7 as if Montfort — or anyone making such a statement — were simply deluded about his own work, or about his experience of a social economy of intellectual attention under identifiably specific social and historical conditions, or else merely expressing pain at being excluded from a social space to which he desired admission, rather than objecting on principle to a secessionist act of imagination.8

    3

    Jones tells us that he doesn’t “mean to gloss over the uneven distribution of [network] technologies around the world, or the serious social and political problems associated with manufacturing and discarding the devices and maintaining the server farms and cell towers on which the network depends” — but he goes ahead and does it anyway, and without apology or evident regret. “[I]t’s not my topic in this book,” we are told, “and I’ve deliberately restricted my focus to the already-networked world” (3). The message is clear: this is a book for readers who will accept such circumscription, in what they read and contemplate. Perhaps this is what marks the emergence of the digital humanities, in the re-emergence of license for restrictive intellectual ambition and a generally restrictive purview: a bracketing of the world that was increasingly discredited, and discredited with increasing ferocity, just by the way, in the academic humanities in the course of the three decades preceding the first Silicon Valley bubble. Jones suggests that “it can be too easy to assume a qualitative hierarchical difference in the impact of networked technology, too easy to extend the deeper biases of privilege into binary theories of the global ‘digital divide’” (4), and one wonders what authority to grant to such a pronouncement when articulated by someone who admits he is not interested, at least in this book, in thinking about how an — how any — other half lives. It’s the latter, not the former, that is the easy choice here. (Against a single, entirely inconsequential squib in Computer Business Review entitled “Report: Global Digital Divide Getting Worse,” an almost obnoxiously perfunctory footnote pits “a United Nations Telecoms Agency report” from 2012. This is not scholarship.)

    Thus it is that, read closely, the demand for finitude in the one capacity in which we are non-mortal — in thought and intellectual ambition — and the more or less cheerful imagination of an implied reader satisfied by such finitude, become passive microaggressions aimed at another mode of the production of knowledge, whose expansive focus on a theoretical totality of social antagonism (what Jones calls “hierarchical difference”) and justice (what he calls “binary theories”) makes the author of The Emergence of the Digital Humanities uncomfortable, at least on its pages.

    That’s fine, of course. No: no, it’s not. What I mean to say is that it’s unfair to write as if the author of The Emergence of the Digital Humanities alone bears responsibility for this particular, certainly overdetermined state of affairs. He doesn’t — how could he? But he’s getting no help, either, from most of those who will be more or less pleased by the title of his book, and by its argument, such as it is: because they want to believe they have “emerged” along with it, and with that tension resolved, its discomforts relieved. Jones’s book doesn’t seriously challenge that desire, its (few) hedges and provisos notwithstanding. If that desire is more anxious now than ever, as digital humanities enthusiasts find themselves scrutinized from all sides, it is with good reason.
    _____

    Brian Lennon is Associate Professor of English and Comparative Literature at Pennsylvania State University and the author of In Babel’s Shadow: Multilingual Literatures, Monolingual States (University of Minnesota Press, 2010).
    _____

    Notes
    1. Jones, 1.
    Back to the essay

    2. Jones, 4. “Interest” is presumed to be affirmative, here, marking one elision of the range of humanistic critical and scholarly attitudes toward Google generally and the Google Books project in particular. And of the unequivocally less affirmative “interest” of creative writers as represented by the Authors Guild, just for example, Jones has nothing to say: another elision.
    Back to the essay

    3. Jones, 13.
    Back to the essay

    4. See Gibson.
    Back to the essay

    5. Jones, 5.
    Back to the essay

    6. As eager as any other digital humanities enthusiast to accept Franco Moretti’s legitimation of DH, but apparently incurious about the intellectual formation, career and body of work that led such a big fish to such a small pond, Jones opines that Moretti’s “call for a distant reading” stands “opposed to the close reading that has been central to literary studies since the late nineteenth century” (Jones, 62). “Late nineteenth century” when exactly, and where (and how, and why)? one wonders. But to judge by what Jones sees fit to say by way of explanation — that is, nothing at all — this is mere hearsay.
    Back to the essay

    7. Jones, 5. See also Montfort.
    Back to the essay

    8. As further evidence that Montfort’s statement is a mischaracterization or expresses a misunderstanding, Jones suggests the fact that “[t]he Electronic Literature Organization itself, an important center of gravity for the study of computational media in which Montfort has been instrumental, was for a time housed at the Maryland Institute for Technology in the Humanities (MITH), a preeminent DH center where Matthew Kirschenbaum served as faculty advisor” (Jones, 5–6). The non sequiturs continue: “digital humanities” includes the study of computing and media because “self-identified practitioners doing DH” study computing and media (Jones, 6); the study of computing and media is also “digital humanities” because the study of computing and digital media might be performed at institutions like MITH or George Mason University’s Roy Rosenzweig Center for History and New Media, which are “digital humanities centers” (although the phrase “digital humanities” appears nowhere in their names); “digital humanities” also adequately describes work in “media archaeology” or “media history,” because such work has “continued to influence DH” (Jones, 6); new media studies is a component of the digital humanities because some scholars suggest it is so, and others cannot be heard to object, at least after one has placed one’s fingers in one’s ears; and so on.
    Back to the essay

    (feature image: “Bandeau – Manifeste des Digital Humanities,” uncredited; originally posted on flickr.)

  • The Lenses of Failure

    The Lenses of Failure

    The Art of Failure

    by Nathan Altice

On From Software’s Dark Souls II and Jesper Juul’s The Art of Failure

    ~

    I am speaking to a cat named Sweet Shalquoir. She lounges on a desk in a diminutive house near the center of Majula, a coastal settlement that harbors a small band of itinerant merchants, tradespeople, and mystics. Among Shalquoir’s wares is the Silvercat ring, whose circlet resembles a leaping, blue-eyed cat.

    ‘You’ve seen that gaping hole over there? Well, there’s nasty little vermin down there,’ Shalquoir says, observing my window shopping. ‘Although who you seek is even further below.’ She laughs. She knows her costly ring grants its wearer a cat-like affinity for lengthy drops. I check my inventory. Having just arrived in Majula, I have few souls on hand.

    I turn from Shalquoir and exit the house ringless. True to her word, a yawning chasm opens before me, its perimeter edged in slabbed stonework and crumbling statues but otherwise unmarked and unguarded. One could easily fall in while sprinting from house to house in search of Majula’s residents. Wary of an accidental fall, I nudge toward its edge.

    The pit has a mossy patina, as if it was once a well for giants that now lies parched after drinking centuries of Majula’s sun. Its surface is smooth save for a few distant torches sawing at the dark and several crossbeams that bisect its diameter at uneven intervals. Their configuration forms a makeshift spiral ladder. Corpses are slung across the beams like macabre dolls, warning wanderers fool enough to chase after nasty little vermin. But atop the first corpse gleams a pinprick of ethereal light, both a beacon to guide the first lengthy drop and a promise of immediate reward if one survives.

    Silvercat ring be damned, I think I can make it.

    I position myself parallel to the first crossbeam, eyes fixed on that glimmering point. I jump.

    The Jump

    [Dark Souls II screenshots source: ItsBlueLizardJello via YouTube]

    For a breathless second, I plunge toward the beam. My aim is true—but my body is weak. I collapse, sprawled atop the lashed wooden planks, inches from my coveted jewel. I evaporate into a green vapor as two words appear in the screen’s lower half: ‘YOU DIED.’

Decisions such as these abound in Dark Souls II, the latest entry in developer From Software’s cult-to-crossover-hit series of games bearing the Souls moniker. The first, Demon’s Souls, debuted on the PlayStation 3 in 2009, attracting players with its understated lore, intricate level design, and relentless difficulty. Spiritual successor Dark Souls followed in 2011, and its direct sequel, Dark Souls II, was released earlier this year.

    Each game adheres to standard medieval fantasy tropes: there are spellcasters, armor-clad knights, parapet-trimmed castles, and a variety of fire-spewing dragons. You select one out of several archetypal character classes (e.g., Cleric, Sorcerer, Swordsman), customize a few appearance options, then explore and fight through a series of interconnected, yet typically non-linear, locations populated by creatures of escalating difficulty. What distinguishes these games from the hundreds of other fantasy games those initial conditions could describe are their melancholy tone and their general disregard for player hand-holding. Your hero begins as little more than a voiceless, fragile husk with minimal direction and fewer resources. Merely surviving takes precedence over rescuing princesses or looting dungeons. The Souls games similarly reveal little about their settings or systems, driving some players to declare them among the worst games ever made while catalyzing others to revisit the game’s environs for hundreds of hours. Vibrant communities have emerged around the Souls series, partly in an effort to document the mechanics From Software purposefully obscures and partly to construct a coherent logic and lore from the scraps and minutiae the game provides.

    Dark Souls II Settings

    Unlike most action games, every encounter in Dark Souls II is potentially deadly, from the lowliest grunts to the largest boss creatures. To further raise the stakes, death has consequences. Slaying foes grants souls, the titular items that fuel both trade and character progression. Spending souls increases your survivability, whether you invest them directly in your character stats (e.g. Vitality) or a more powerful shield. However, dying forfeits any souls you are currently carrying and resets your progress to the last bonfire (i.e., checkpoint) you rested beside. The catch is that dying or resting resets any creatures you have previously slain, giving your quest a moribund, Sisyphean repetition that grinds impatient players to a halt. And once slain, you have one chance to recover your lost souls. A glowing green aura marks the site of your previous bereavement. Touch that mark before you die again and you regain your cache; fail to do so and you lose it forever. You will often fail to do so.

What many Souls reviewers find refreshing about the game’s difficulty is actually a more forgiving variation of the death mechanics found in early ASCII-based games like Rogue (1980), Hack (1985), and NetHack (1987), wherein ‘permadeath’—i.e., death meant starting the game anew—was a central conceit. And those games were almost direct ‘ports’ of tabletop roleplaying progenitors like Dungeons & Dragons, whose early versions were skewed more toward the gritty realism of pulp literature than the godlike power fantasies of modern roleplaying games. A successful career in D&D meant accumulating enough treasure to eventually retire from dungeon-delving, so one could hire other hapless retainers to loot on one’s behalf. Death was frequent and expected because dungeons were dangerous places. And unless one’s Dungeon Master was particularly lenient, death was final. A fatal mistake meant re-rolling your character. In this sense, the Souls games stand apart from their videogame peers because of the conservatism of their design. Though countless games ape D&D’s generic fantasy setting and stat-based progress model, few adopt the existential dread of its early forms.

    Dark Souls II’s adherence to opaque systems and traditional difficulty has alienated players unaccustomed to the demands of earlier gaming models. For those repeatedly stymied by the game’s frustrations, several questions arise: Why put forth the effort in a game that feels so antagonistic toward its players? Is there any reward worth the frequent, unforgiving failure? Aren’t games supposed to be fun—and is failing fun?

    YOU DIED

    Games scholar Jesper Juul raises similar questions in The Art of Failure, the second book in MIT’s new Playful Thinking series. His central thesis is that games present players a ‘paradox of failure’: we do not like to fail, yet games perpetually make us do so; weirder still, we seek out games voluntarily, even though the only victory they offer is over a failure that they themselves create. Despite games’ reputation as frivolous fun, they can humiliate and infuriate us. Real emotions are at stake. And, as Juul argues, ‘the paradox of failure is unique in that when you fail in a game, it really means that you were in some way inadequate’ (7). So when my character plunges down the pit in Majula, the developers do not tell me ‘Your character died,’ even though I have named that character. Instead the games remind us, ‘YOU DIED.’ YOU, the player, the one holding the Xbox 360 controller.

    The strength of Juul’s argument is that he does not rely on a single discipline but instead approaches failure via four related ‘lenses’: philosophy, psychology, game design, and fiction (30). Each lens has its own brief chapter and accompanying game examples, and throughout Juul interjects anecdotes from his personal play experience alongside lessons he’s learned co-designing a number of experimental video games. The breadth of examples is wide, ranging from big-budget games like Uncharted 2, Meteos, and Skate 2 to more obscure works like Flywrench, September 12, and Super Real Tennis.

    Juul’s first lens (chapter 2) links up his paradox of failure to a longstanding philosophical quandary known as the ‘paradox of painful art.’ Like video games, art tends to elicit painful emotions from viewers, whether a tragic stage play or a disturbing novel, yet contrary to the notion that we seek to avoid pain, people regularly pursue such art—even enjoy it. Juul provides a summary of positions philosophers have offered to explain this behavior, categorized as follows: deflationary arguments skirt the paradox by claiming that art doesn’t actually cause us pain in the first place; compensatory arguments acknowledge the pain, but claim that the sum of painful vs. pleasant reactions to art yield a net positive; and a-hedonistic arguments deny that humans are solely pleasure-seekers—some of us pursue pain.

    Juul’s commonsense response is that we should not limit human motivation to narrow, atemporal explanations. Instead, a synthesis of categories is possible, because we can successfully manage multiple contradictory desires based on immediate and long-term (i.e., aesthetic) time frames. He writes, ‘Our moment-to-moment desire to avoid unpleasant experiences is at odds with a longer-term aesthetic desire in which we understand failure, tragedy, and general unpleasantness to be necessary for our experience’ (115). In Dark Souls II, I faced a particularly challenging section early on when my character, a sorcerer, was under-powered and under-equipped to face a strong, agile boss known as The Pursuer. I spent close to four hours running the same path to the boss, dying dozens of times, with no net progress.

    Facing the Pursuer

    For Juul, my continued persistence did not betray a masochistic personality flaw (not that I didn’t consider it), nor would he trivialize my frustration (which I certainly felt), nor would he argue that I was eking out more pleasure than pain during my repeated trials (I certainly wasn’t). Instead, I was tolerating immediate failure in pursuit of a distant aesthetic goal, one that would not arrive during that game session—or many sessions to come. And indeed, this is why Juul calls games the ‘art of failure,’ because ‘games hurt us and then induce an urgency to repair our self-image’ (45). I could only overcome the Pursuer if I learned to play better. Juul writes, ‘Failure is integral to the enjoyment of game playing in a way that it is not integral to the enjoyment of learning in general. Games are a perspective on failure and learning as enjoyment, or satisfaction’ (45). Failure is part of what makes a game a game.

    Chapter 3 proceeds to the psychological lens, allowing Juul to review the myriad ways we experience failure emotionally. For many games, the impact can be significant: ‘To play a game is to take an emotional gamble. The higher the stakes, in terms of time investment, public acknowledgement, and personal importance, the higher are the potential losses and rewards’ (57). Failure doesn’t feel good, but again, paradoxically, we must first accept responsibility for our failures in order to then learn from them. ‘Once we accept responsibility,’ Juul writes, ‘failure also concretely pushes us to search for new strategies and learning opportunities in a game’ (116). But why can’t we learn without the painful consequences? Because most of us need prodding to be the best players we can be. In the absence of failure, players will cheese and cheat their way to favorable outcomes (59).

    Juul concludes that games help us grow—‘we come away from any skill-based game changed, wiser, and possessing new skills’ (59)—but his more interesting point is how we buffer the emotional toll of failure by diverting or transforming it. ‘Self-defeating’ players react to failure by lessening their efforts, a laissez-faire attitude that makes failure expected and thus less painful. ‘Spectacular’ failures, on the other hand, elevate negativity to an aesthetic focal point. When I laugh at the quivering pile of polygons clipped halfway through the floor geometry by the Pursuer’s blade, I’m no longer lamenting my own failure but celebrating the game’s.

Chapter 4 provides a broad view of how games are designed to make us fail and counters much conventional wisdom about prevailing design trends. For instance, many players complain that contemporary games are too easy, that we don’t fail enough, but Juul argues that those players are confusing failure with punishment. Failure is now designed to be more frequent than in the past, but punishment is far less severe. Death in early arcade or console games often meant total failure, resetting your progress to the beginning of the game. Death in Dark Souls II merely forfeits your souls in-hand—any spent souls, found items, gained levels, or cached equipment are permanent. Punishment certainly feels severe when you lose tens of thousands of souls, but the consequences are far less jarring than losing your final life in Ghosts ’n Goblins.

    Juul outlines three different paths through which games lead us to success or failure—skill, chance, and labor—but notes that his categories are neither exhaustive nor mutually exclusive (75, 82). The first category is likely the most familiar for frequent game players: ‘When we fail in a game of skill, we are therefore marked as deficient in a straightforward way: as lacking the skills required to play the game’ (74). When our skills fail us, we only have ourselves to blame. Chance, however, ‘marks us in a different way…as being on poor terms with the gods, or as simply unlucky, which is still a personal trait that we would rather not have’ (75). With chance in play, failure gains a cosmic significance.

    Labor is one of the newer design paths, characterized by the low-skill, slow-grind style of play frequently maligned in Farmville and its clones, but also found in better-regarded titles like World of Warcraft (and RPGs in general). In these games, failure has its lowest stakes: ‘Lack of success in a game of labor therefore does not mark us as lacking in skill or luck, but at worst as someone lazy (or too busy). For those who are afraid of failure, this is close to an ideal state. For those who think of games as personal struggles for improvement, games of labor are anathema’ (79). Juul’s last point is an important lesson for critics quick to dismiss the ‘click-to-win’ genre outright. For players averse to personal or cosmic failure, games of labor are a welcome respite.

    Juul’s final lens (chapter 5) examines fictional failure. ‘Most video games,’ he writes, ‘represent our failures and successes by letting our performance be mirrored by a protagonist (or society, etc.) in the game’s fictional world. When we are unhappy to have failed, a fictional character is also unhappy’ (117). Beginning with this conventional case, Juul then discusses games that subvert or challenge the presumed alignment of player/character interests, asking whether games can be tragic or present situations where character failure might be the desired outcome. While Juul concedes that ‘the self-destruction of the protagonist remains awkward,’ complicity—a sense of player regret when facing a character’s repugnant actions—offers a ‘better variation’ of game tragedy (117). Juul argues that complicity is unique to games, an experience that is ‘more personal and stronger than simply witnessing a fictional character performing the same actions’ (113). When I nudge my character into Majula’s pit, I’m no longer a witness—I’m a participant.

The Art of Failure’s final chapter focuses the prior lenses’ viewpoints on failure into a humanistic concluding point: ‘Failure forces us to reconsider what we are doing, to learn. Failure connects us personally to the events in the game; it proves that we matter, that the world does not simply continue regardless of our actions’ (122). For those who already accept games as a meaningful, expressive medium, Juul’s conclusion may be unsurprising. But this kind of thoughtful optimism is also part of the book’s strength. Juul’s writing is approachable and jargon-free, and the Playful Thinking series’ focus on depth, readability, and pocket-size volumes makes The Art of Failure an ideal book to pass along to friends and colleagues who might question your ‘frivolous’ videogame hobby—or, more importantly, justify why you often spend hours swearing at the screen while purportedly in pursuit of ‘fun.’

    The final chapter also offers a tantalizingly brief analysis of how Juul’s lenses might refract outward, beyond games, to culture at large. Specifically targeting the now-widespread corporate practice of gamification, wherein game design principles are applied as motivators and performance measures for non-leisure activities (usually work), Juul reminds us that the technique often fails because workplace performance goals ‘rarely measure what they are supposed to measure’ (120). Games are ideal for performance measurement because of their peculiar teleology: ‘The value system that the goal of a game creates is not an artificial measure of the value of the player’s performance; the goal is what creates the value in the first place by assigning values to the possible outcomes of a game’ (121). This kind of pushback against digital idealism is an important reminder that games ‘are not a pixie dust of motivation to be sprinkled on any subject’ (10), and Juul leaves a lot of room for further development of his thesis beyond the narrow scope of videogames.

For the converted, The Art of Failure provides cross-disciplinary insights into many of our unexamined play habits. While playing Dark Souls II, I frequently thought of Juul’s triumvirate of design paths. Dark Souls II is an exemplary hybrid—though much of your success is skill-based, chance and labor play significant roles. The algorithmic systems that govern item drops or boss attacks can often sway one’s fortunes toward success or failure, as many speedrunners would attest. And for all the ink spilt about Dark Souls II being a ‘hardcore’ game with ‘old-school’ challenge, success can also be won through skill-less labor. Summoning high-level allies to clear difficult paths or simply investing hours grinding souls to level your character are both viable supplements for chance and skill.

    But what of games that do not fit these paths? How do they contend with failure? There is a rich tradition of experimental or independent artgames, notgames, game poems, and the like that are designed with no path to failure. Standout examples like Proteus, Dys4ia, and Your Lover Has Turned Into a Flock of Birds require no skills beyond operating a keyboard or mouse, do not rely on chance, and require little time investment. Unsurprisingly, games like these are often targeted as ‘non-games,’ and Juul’s analysis leaves little room for games that skirt these borderlines. There is a subtext in The Art of Failure that draws distinctions between ‘good’ and ‘bad’ design. Early on, Juul writes that ‘(good) games are designed such that they give us a fair chance’ (7) and ‘for something to be a good game, and a game at all, we expect resistance and the possibility of failure’ (12).

    There are essentialist, formalist assumptions guiding Juul’s thesis, leading him to privilege games’ ‘unique’ qualities at the risk of further marginalizing genres, creators, and hybrid play practices that already operate at the margins. To argue that complicity is unique to games or that games are the art of failure is to make an unwarranted leap into medium specificity and draw borderlines that need not be drawn. Certainly other media can draw us into complicity, a path well-trodden in cinema’s exploration of voyeurism (Rear Window, Blow-Up) and extreme horror (Saw, Hostel). Can’t games simply be particularly strong at complicity, rather than its sole purveyor?

I’m similarly unconvinced that games are the quintessential art of failure. Critics often contend that video games are unique as a medium in that they require a certain skill threshold to complete. While it is true that finishing Super Mario Bros. is different than watching the entirety of The Godfather, we can use Juul’s own multi-path model to understand how we might fail at other media. The latter example certainly requires more labor—one can play dozens of Super Mario runs during The Godfather’s 175-minute runtime. Further, watching a film lauded as one of history’s greatest carries unique expectations that many viewers may fail to satisfy, from the societal pressure to agree on its quality to the comprehension necessary to follow its narrative. Different failures arise from different media—I’ve failed reading Infinite Jest more than I’ve failed completing Dark Souls II. And any visit to a museum will teach you that many people feel as though they fail at modern art. Tackling Dark Souls II’s Pursuer or Barnett Newman’s Onement I can be equally daunting.

When scholars ask, as Juul does, what games can do, they must be careful that by doing so they do not also police what games can be. Failure is a compelling lens through which to examine our relationship to play, but we needn’t valorize it as the sole criterion for what counts as a game.
    _____


    Nathan Altice is an instructor of sound and game design at Virginia Commonwealth University and author of the platform study of the NES/Famicom, I AM ERROR (MIT, 2015). He writes at metopal.com and burns bridges at @circuitlions.

  • The People’s Platform by Astra Taylor

    The People’s Platform by Astra Taylor


Or is it?: Astra Taylor’s The People’s Platform

    Review by Zachary Loeb

    ~

    Imagine not using the Internet for twenty-four hours.

    Really: no Internet from dawn to dawn.

Take a moment to think through the wide range of devices you would have to turn off and services you would have to avoid to succeed in such a challenge. While a single day without going online may not represent too outlandish an ordeal, such an endeavor would still require some social and economic gymnastics. From the way we communicate with friends to the way we order food to the way we turn in assignments for school or complete tasks in our jobs, our lives have become thoroughly entangled with the Internet. Whether its power and control are overt or subtle, the Internet has come to wield an impressive amount of influence over our lives.

All of which should serve to raise a discomforting question – who is in control of the Internet? Is the Internet a fantastically democratic space that puts the power back in the hands of people? Is the Internet a sly mechanism for vesting more power in the hands of the already powerful, whilst distracting people with a steady stream of kitschy content and discounted consumerism? Or is the Internet a space resting on layers of oft-unseen material infrastructure, with a range of positive and negative potentialities? These are the questions that Astra Taylor attempts to untangle in her book The People’s Platform: Taking Back Power and Culture in the Digital Age (Metropolitan Books, 2014). It is the rare example of a book where the title itself forms a thesis statement of sorts: the Internet was and can be a platform for the people, but this potential has been perverted, and thus there needs to be a “taking back” of power (and culture).

At the outset Taylor locates her critique in the space between the fawning of the “techno-optimists” and the grousing of the “techno-skeptics.” Far from trying to assume a “neutral” stance, Taylor couches her discussion of the “techno” by stepping back to consider the social, political, and economic forces that shape the “techno” reality that inspires optimism and skepticism. Taylor, therefore, does not build her argument upon a discussion of the Internet as such but builds her argument around a discussion of the Internet as it is and as it could be. Unfortunately, the “as it currently is” of this “new media” evinces that “Corporate power and the quest for profit are as fundamental to new media as old.” (8)

    Thus Taylor sets up the conundrum of the Internet – it is at once a media platform with a great deal of democratic potential, and yet this potential has been continually appropriated for bureaucratic, technocratic, and indeed plutocratic purposes.

Over the course of The People’s Platform Taylor moves from one aspect of the Internet (and its related material infrastructures) to another – touching upon a range of issues, from the Internet’s history, to copyright and the way it has undermined the ability of “cultural creators” to earn a living, to the ways the Internet persuades and controls, to journalism and e-waste, to the ways in which the Internet can replicate the misogyny and racism of the offline world.

With her background as a documentary filmmaker (she directed the film Examined Life [which is excellent]) Taylor is skilled in cutting deftly from one topic to the next, though this particular experience also gives her cause to dwell at length upon the matter of how culture is created and supported in the digital age. Indeed, as a maker of independent films Taylor is particularly attuned to the challenges of making culturally valuable content in a time when free copies spread rapidly online. Here too Taylor demonstrates the link to larger economic forces – there are still highly successful “stars” and occasional stories of “from nowhere” success, but the result is largely that those attempting to eke out a nominal subsistence find it increasingly challenging to do so.

As the Internet becomes the principal means of dissemination of material, “cultural creators” find themselves bound to a system wherein the ultimate remuneration rarely accrues back to them. Likewise, the rash of profit-driven mergers and shifting revenue streams has resulted in a steady erosion of the journalistic field. It is not – as Taylor argues – that there is a lack of committed “cultural creators” and journalists working today; it is that they are finding it increasingly difficult to sustain their efforts. The Internet, as Taylor describes it, is certainly making many people enormously wealthy, but those made wealthy are more likely to be platform owners (think Google or Facebook) than those who fill those platforms with the informational content that makes them valuable.

Though the Internet may have its roots in massive public investment, and though the value of the Internet is a result of the labor of Internet users (example: Facebook makes money by selling advertisements based on the work you put in on your profile), the Internet as it is now is often less of an alternative to society than it is a replication. The biases of the offline world are replicated in the digital realm, as Taylor puts it:

    “While the Internet offers marginalized groups powerful and potentially world-changing opportunities to meet and act together, new technologies also magnify inequality, reinforcing elements of the old order. Networks do not eradicate power: they distribute it in different ways, shuffling hierarchies and producing new mechanisms of exclusion.” (108)

    Thus, the Internet – often under the guise of promoting anonymity – can be a site for an explosion of misogyny, racism, classism, and an elitism blossoming from a “more-technologically-skilled-than-thou” position. There are certainly many “marginalized groups” and individuals trying to use the Internet to battle their historical silencing, but for every social justice minded video there is a comment section seething with the grunts of trolls. Meanwhile behind this all stand the same wealthy corporate interests that enjoyed privileged positions before the rise of the Internet. These corporate forces can wield the power they gain from the Internet to steer and persuade Internet users in such a way that the “curated experience” of the Internet is increasingly another way of saying, “what a major corporation thinks you (should) want.”


Breaking through the ethereal airs of the Internet, Taylor also grounds her argument in the material realities of the digital realm. While it is true that more and more people are increasingly online, Taylor emphasizes that there are still many without access and that the high-speed access enjoyed by some is not had by one and all. Furthermore, all of this access, all of these fanciful devices, all of these democratic dreams are reliant upon a physical infrastructure shot through with dangerous mining conditions, wretched laboring facilities, and toxic dumps where discarded devices eventually go to decay. Those who are able to enjoy the Internet as a positive feature in their day-to-day life are rarely the same people who worked in the mines, the assembly plants, or who will have to live on the land that has been blighted by e-waste.

    While Taylor refuses to ignore the many downsides associated with the Internet age she remains fixed on its positive potential. The book concludes without offering a simplistic list of solutions but nevertheless ends with a sense that those who care about the Internet’s non-corporate potential need to work to build a “sustainable digital future” (183). Though there are certainly powerful interests profiting from the current state of the Internet the fact remains that (in a historical sense) the Internet is rather young, and there is still time to challenge the shape it is taking. Considering what needs to be done, Taylor notes: “The solutions we need require collective, political action.” (218)

    It is a suggestion carrying the sentiment that people can band together to reassert control over the online commons that are steadily being enclosed by corporate interests. By considering the Internet as a public utility (a point being discussed at the moment in regard to Net Neutrality) and by focusing on democratic values instead of financial values, it may be possible for people to reverse (or at least slow) the corporate wave that is washing over the Internet.

    After all, the Internet is the result of massive public investment – so why has it been delivered into corporate hands? Ultimately, Taylor concludes (in a chapter titled “In Defense of the Commons: A Manifesto for Sustainable Culture”) that if people want the Internet to be a “people’s platform” they will have to organize and fight for it (“collective, political”). In a time when the Internet is an important feature of society, it makes a difference whether the Internet is an open “people’s platform” or a highly (if subtly) controlled corporate theme park. “The People’s Platform” requires people who care to raise their voices…such as the people who have read Astra Taylor’s book, perhaps.

    * * * * *

    With The People’s Platform Astra Taylor has made an effective and interesting contribution to the discussion around the nature of the Internet and its future. By emphasizing a political and economic critique she is able to pull the Internet away from a utopian fantasy in order to analyze it in terms of the competing forces that have shaped (and continue to shape) it. The perspective that Taylor brings, as a documentary filmmaker, allows her to drop the journalistic façade of objectivity in order to genuinely and forcefully engage with issues pertaining to the compensation of cultural creators in the age of digital dissemination. The sections on the misogyny one encounters online and on e-waste make this book particularly noteworthy. Though each chapter of The People’s Platform could likely be extended into an entire book, it is in their interconnections that Taylor is able to demonstrate the layers of issues that are making such a mess of the Internet today. For the problem facing the online realm is not just corporate control – it is a slew of issues that need to be recognized in total (and in their interconnected nature) if any type of response is to be mounted.

    Though The People’s Platform is ostensibly about a conflict regarding the future of the Internet, the book is itself a site of conflicting sentiments. Though Taylor – at the outset – aims to avoid aligning herself with the “cheerleaders of progress” or “the prophets of doom” (4), the book that emerges is one that sits in the stands of the “cheerleaders of progress” (even if with slight misgivings about being in those stands). The book’s title suggests that even with all of the problems associated with the Internet it still represents something promising, something worth fighting to “take back.” It is a point that is particularly troublesome to consider after Taylor’s description of labor conditions and e-waste. For one of the main questions that emerges towards the end of Taylor’s book – though it is not one she directly poses – makes the book’s title problematic: which “people” are being described in “the people’s platform”?


    It may be tempting to answer such a question with a simplistic “well, all of the people” yet such a response is inadequate in light of the way that Taylor’s book clearly discusses the layers of control and dominance one finds surrounding the Internet. Can the Internet be “the people’s platform” for writers, journalists, documentary filmmakers, and activists with access to digital tools? Sure. But what of those described in the e-waste chapter – people living in oppressive conditions and toiling in factories where building digital devices puts them at risk of cancer or disassembling such devices poisons them and their families? Those people count as well, but those upon whom “the people’s platform” is built seem to be crushed beneath it, not able to get on top of it – to stand on “the people’s platform” is to stand on the hunched shoulders of others. It is true that Taylor takes this into account in emphasizing that something needs to be done to recognize and rectify this matter – but insofar as the material tools “the people” use to reach the Internet are built upon the repression and oppression of other people, it sours the very notion of the Internet as “the people’s platform.”

    This in turn raises another question: what would a genuine “people’s platform” look like? In the conclusion to the book Taylor attempts to answer this question by arguing for political action and increased democratic control over the Internet; however, one can easily imagine classifying the Internet as a “public utility” without doing anything to change the laboring conditions of those who build devices. Indeed, the darkly amusing element of The People’s Platform is that Taylor answers this question brilliantly on the second page of her book and then spends the following two hundred and thirty pages ignoring this answer.

    Taylor begins The People’s Platform with an anecdote about her youth in the pre-Internet (or pre-high speed Internet) era, wherein she recalls working on a small personally assembled magazine (a “zine”) which she would then have printed and distribute to friends and a variety of local shops. Looking back upon her time making zines, Taylor writes:
    “Today any kid with a smartphone and a message has the potential to reach more people with the push of a button than I did during two years of self-publishing.” (2)

    These lines from Taylor come only a sentence after she considers how her access to easy photocopying (for her zine) made it easier for her than it had been for earlier would-be publishers. Indeed, Taylor recalls:

    “a veteran political organizer told me how he and his friends had to sell blood in order to raise the funds to buy a mimeograph machine so they could make a newsletter in the early sixties.” (2)

    There are a few subtle moments in the above lines (from the second page of Taylor’s book) that say far more about a “people’s platform” than they let on. It is true that a smartphone gives a person “the potential to reach more people” but as the rest of Taylor’s book makes clear – it is not necessarily the case that people really do “reach more people” online. There are certainly wild success stories, but for “any kid” their reach with their smartphone may not be much greater than the number of people reachable with a photocopied zine. Furthermore, the zine audience might have been more engaged and receptive than the idle scanner of Tweets or Facebook updates – the smartphone may deliver more potential but actually achieve less.

    Nevertheless, the key aspect is Taylor’s comment about the “veteran political organizer” – this organizer (“and his friends”) were able to “buy a mimeograph machine so they could make a newsletter.” Is this different from buying a laptop computer, Internet access, and a domain name? Actually? Yes. Yes, it is. For once those newsletter makers bought the mimeograph machine they were in control of it – they did not need to worry about its Terms of Service changing, about pop-up advertisements, about their movements being tracked through the device, about the NSA having installed a convenient backdoor – and frankly there’s a good chance that the mimeograph machine they purchased had a much longer life than any laptop they would purchase today. Again – they bought and were able to control the means for disseminating their message; one cannot truly buy all of the means necessary for disseminating an online message (once one includes cables, ISPs, etc.).

    The contrast between the mimeograph machine and the Internet raises the question of which technologies represent genuine people’s platforms and which merely offer potential “people’s platforms” (note the quotation marks). This is not to say that mimeograph machines are perfect (after all, somebody did build that machine), but when considering technology in a democratic sense it is important to puzzle over whether or not (to borrow Lewis Mumford’s terminology) the tool itself is “authoritarian” or “democratic.” The way the Internet appears in Taylor’s book – with its massive infrastructure, propensity for centralized control, and material reality built upon toxic materials – should at the very least make one question to what extent the Internet is genuinely a democratic “people’s” tool, or whether it is such a tool only for those who are able to enjoy the bulk of the benefits and a minimum of the downsides. Taylor clearly does not want to be accused of being a “prophet of doom” – or of being a prophet for profit – but the sad result is that she jumps over the genuine people’s platform she describes on the second page in favor of building an argument for a platform that, by book’s end, seems to hardly be one for “the people” in any but a narrow sense of “the people.”

    The People’s Platform: Taking Back Power and Culture in the Digital Age is a well-written, solidly researched, and effectively argued book that raises many valuable questions. The book offers no simplistic panaceas but instead forces the reader to think through the issues – oftentimes by forcing them to confront uncomfortable facts about digital technologies (such as e-waste). As Taylor uncovers and discusses issue after bias after challenge regarding the Internet, the question that haunts her text is whether or not the platform she is describing – the Internet – is really worthy of being called “The People’s Platform” and, if so, to which “people” it applies.

    The People’s Platform is well worth reading – but it is not the end of the conversation. It is the beginning of the conversation.

    And it is a conversation that is desperately needed.

    __

    The People’s Platform: Taking Back Power and Culture in the Digital Age
    by Astra Taylor
    Metropolitan Books, 2014

    __

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian” Loeb writes at the blog librarianshipwreck, which is where this review originally appeared.

  • The Digital Turn

    The Digital Turn


    David Golumbia and The b2 Review look to digital culture

    ~
    I am pleased and honored to have been asked by the editors of boundary 2 to inaugurate a new section on digital culture for The b2 Review.

    The editors asked me to write a couple of sentences for the print journal to indicate the direction the new section will take, which I’ve included here:

    In the new section of the b2 Review, we’ll be bringing the same level of critical intelligence and insight—and some of the same voices—to the study of digital culture that boundary 2 has long brought to other areas of literary and cultural studies. Our main focus will be on scholarly books about digital technology and culture, but we will also branch out to articles, legal proceedings, videos, social media, digital humanities projects, and other emerging digital forms.

    While some might think it late in the day for boundary 2 to be joining the game of digital cultural criticism, I take the time lag between the moment at which thoroughgoing digitization became an unavoidable reality (sometime during the 1990s) and the moment at which one of the major literary studies journals dedicated part of itself to digital culture as indicative of a welcome and necessary caution with regard to the breathless enthusiasm of digital utopianism. As humanists our primary intellectual commitment is to the deeply embedded texts, figures, and themes that constitute human culture, and precisely the intensity and thoroughgoing nature of the putative digital revolution must give somebody pause—and if not humanists, who?

    Today, the most overt mark of the digital in humanities scholarship goes by the name Digital Humanities, but it remains notable how little interaction there is between the rest of literary studies and that which comes under the DH rubric. That lack of interaction goes in both directions: DH scholars rarely cite or engage directly with the work the rest of us do, and the rest of literary studies rarely cites DH work, especially when DH is taken in its “narrow” or most heavily quantitative form. The enterprises seem, at times, to be entirely at odds, and the rhetoric of the digital enthusiasts who populate DH does little to forestall this impression. Indeed, my own membership in the field of DH has long been a vexed question, despite my having been one of the first English professors in the country hired to a position for which the primary specialization was explicitly indicated as Digital Humanities (at the University of Virginia in 2003), and despite my being a humanist whose primary area is “digital studies.” The inability of scholars “to be” or “not to be” members of a field in which they work is one of the several ways that DH does not resemble other developments in the always-changing world of literary studies.


    Earlier this month, along with my colleague Jennifer Rhee, I organized a symposium called Critical Approaches to Digital Humanities sponsored by the MATX PhD program at Virginia Commonwealth University, where Prof. Rhee and I teach in the English Department. One of the conference participants, Fiona Barnett of Duke and HASTAC, prepared a Storify version of the Twitter activity at the symposium that provides some sense of the proceedings. While it followed on the heels of, and was continuous with, panels such as the ‘Dark Side of the Digital Humanities’ at the 2013 MLA Annual Convention, and several at recent American Studies Association Conventions, among others, this was to our knowledge the first standalone DH event that resembled other humanities conferences as they are conducted today. Issues of race, class, gender, sexuality, and ability were primary; cultural representation and its relation to (or lack of relation to) identity politics was a central concern; close reading of texts both likely and unlikely figured prominently; and the presenters were diverse along several different axes. This arose not out of deliberate planning so much as organically from the speakers whose work spoke to the questions we wanted to raise.

    I mention the symposium to draw attention to what I think it represents, and what the launching of a digital culture section by boundary 2 also represents: the considered turning of the great ship of humanistic study toward the digital. For too long enthusiasts alone have been able to stake out this territory and claim special and even exclusive insight with regard to the digital, following typical “hacker” or cyberlibertarian assertions about the irrelevance of any work that does not proceed directly out of knowledge of the computer. That such claims could even be taken seriously has, I think, produced a kind of stunned silence on the part of many humanists, because they are both so confrontational and so antithetical to the remit of the literary humanities from comparative philology to the New Criticism to deconstruction, feminism, and queer theory. That the core of the literary humanities as represented by so august an institution as boundary 2 should turn its attention there validates digital enthusiasts’ sense of the medium’s importance, but it should also provoke them toward a responsibility to the project and history of the humanities that, so far, many of them have treated with a disregard that at times might be characterized as cavalier.

    -David Golumbia

    Browse All Digital Studies Reviews