• Richard Hill – Too Big to Be (Review of Wu, The Curse of Bigness: Antitrust in the New Gilded Age)

    a review of Timothy Wu, The Curse of Bigness: Antitrust in the New Gilded Age (Random House/Columbia Global Reports, 2018)

    by Richard Hill

    ~

    Tim Wu’s brilliant new book analyses in detail one specific aspect and cause of the dominance of big companies in general and big tech companies in particular: the current unwillingness to modernize antitrust law to deal with concentration in the provision of key Internet services. Wu is a professor at Columbia Law School and a contributing opinion writer for the New York Times. He is best known for his work on net neutrality, a term he coined. He is the author of the books The Master Switch and The Attention Merchants, along with Network Neutrality, Broadband Discrimination, and other works. In 2013 he was named one of America’s 100 Most Influential Lawyers, and in 2017 he was named to the American Academy of Arts and Sciences.

    What are the consequences of allowing unrestricted growth of concentrated private power, and abandoning most curbs on anticompetitive conduct? As Wu masterfully reminds us:

    We have managed to recreate both the economics and politics of a century ago – the first Gilded Age – and remain in grave danger of repeating more of the signature errors of the twentieth century. As that era has taught us, extreme economic concentration yields gross inequality and material suffering, feeding an appetite for nationalistic and extremist leadership. Yet, as if blind to the greatest lessons of the last century, we are going down the same path. If we learned one thing from the Gilded Age, it should have been this: The road to fascism and dictatorship is paved with failures of economic policy to serve the needs of the general public. (14)

    While increasing concentration, and its negative effects on social equity, is a general phenomenon, it is particularly worrying with respect to the Internet: “Most visible in our daily lives is the great power of the tech platforms, especially Google, Facebook, and Amazon, who have gained extraordinary power over our lives. With this centralization of private power has come a renewed concentration of wealth, and a wide gap between the rich and poor” (15). These trends have very real political effects: “The concentration of wealth and power has helped transform and radicalize electoral politics. As in the Gilded Age, a disaffected and declining middle class has come to support radically anti-corporate and nationalist candidates, catering to a discontent that transcends party lines” (15). “What we must realize is that, once again, we face what Louis Brandeis called the ‘Curse of Bigness,’ which, as he warned, represents a profound threat to democracy itself. What else can one say about a time when we simply accept that industry will have far greater influence over elections and lawmaking than mere citizens?” (15). And, I would add, what have we come to when some advocate that corporations should have veto power over public policies that affect all of us?

    Surely it is, or should be, obvious that current extreme levels of concentration are not compatible with the premises of social and economic equity, free competition, or democracy, and that “the classic antidote to bigness – the antitrust and other antimonopoly laws – might be recovered and updated to face the challenges of our times” (16). Those who doubt these propositions should read Wu’s book carefully, because he shows that they are true. My only suggestion for improvement would be to add a more detailed explanation of how network effects interact with economies of scale to favour concentration in the ICT industry in general, and in telecommunications and the Internet in particular. But this topic is well explained in other works.

    As Wu points out, antitrust law must not be restricted (as it is at present in the USA) “to deal with one very narrow type of harm: higher prices to consumers” (17). On the contrary, “It needs better tools to assess new forms of market power, to assess macroeconomic arguments, and to take seriously the link between industrial concentration and political influence” (18). The same has been said by other scholars, by a newspaper, an advocacy group, a commission of the European Parliament, a group of European industries, a well-known academic, and even by a plutocrat who benefitted from the current regime.

    Do we have a choice? Can we continue to pretend that we don’t need to adapt antitrust law to rein in the excessive power of the Internet giants? No: “The alternative is not appealing. Over the twentieth century, nations that failed to control private power and attend to the economic needs of their citizens faced the rise of strongmen who promised their citizens a more immediate deliverance from economic woes” (18). (I would argue that any resemblance to the election of US President Trump, to the British vote to leave the European Union, and to the rise of so-called populist parties in several European countries [e.g. Hungary, Italy, Poland, Sweden] is not coincidental).

    Chapter One of Wu’s book, “The Monopolization Movement,” provides historical background, reminding us that from the late nineteenth through the early twentieth century, dominant, sector-specific monopolies emerged and were thought to be an appropriate way to structure economic activity. In the USA, in the early decades of the twentieth century, under the Trust Movement, essentially every area of major industrial activity was controlled or influenced by a single man (but not the same man for each area), e.g. Rockefeller and Morgan. “In the same way that Silicon Valley’s Peter Thiel today argues that monopoly ‘drives progress’ and that ‘competition is for losers,’ adherents to the Trust Movement thought Adam Smith’s fierce competition had no place in a modern, industrialized economy” (26). This system rapidly proved to be dysfunctional: “There was a new divide between the giant corporation and its workers, leading to strikes, violence, and a constant threat of class warfare” (30). Popular resistance mobilized in both Europe and the USA, and it led to the adoption of the first antitrust laws.

    Chapter Two, “The Right to Live, and Not Merely to Exist,” reminds us that US Supreme Court Justice Louis Brandeis “really cared about … the economic conditions under which life is lived, and the effects of the economy on one’s character and on the nation’s soul” (33). The chapter outlines Brandeis’ career and what motivated him to combat monopolies.

    In Chapter Three, “The Trustbuster,” Wu explains how the 1901 assassination of US President McKinley, a devout supporter of unrestricted laissez-faire capitalism (“let well enough alone”, reminiscent of today’s calls for government to “do no harm” through regulation, and not to “fix it if it isn’t broken”), resulted in a fundamental change in US economic policy, when Theodore Roosevelt succeeded him. Roosevelt’s “determination that the public was ruler over the corporation, and not vice versa, would make him the single most important advocate of a political antitrust law” (47). He took on the great US monopolists of the time by enforcing the antitrust laws. “To Roosevelt, economic policy did not form an exception to popular rule, and he viewed the seizure of economic policy by Wall Street and trust management as a serious corruption of the democratic system. He also understood, as we should today, that ignoring economic misery and refusing to give the public what they wanted would drive a demand for more extreme solutions, like Marxist or anarchist revolution” (49). Subsequent US presidents and authorities continued to be “trust busters”, through the 1990s. At the time, it was understood that antitrust was not just an economic issue, but also a political issue: “power that controls the economy should be in the hands of elected representatives of the people, not in the hands of an industrial oligarchy” (54, citing Justice William Douglas). As we all know, “Increased industrial concentration predictably yields increased influence over political outcomes for corporations and business interests, as opposed to citizens or the public” (55). Wu goes on to explain why and how concentration exacerbates the influence of private companies on public policies and undermines democracy (that is, the rule of the people, by the people, for the people). And he outlines why and how Standard Oil was broken up (as opposed to becoming a government-regulated monopoly). The chapter then explains why very large companies might experience diseconomies of scale, that is, reduced efficiency. Very large companies therefore compensate for their inefficiency by developing and exploiting “a different kind of advantages having less to do with efficiencies of operation, and more to do with its ability to wield economic and political power, by itself or [in] conjunction with others. In other words, a firm may not actually become more efficient as it gets larger, but may become better at raising prices or keeping out competitors” (71). Wu explains how this is done in practice. The rest of this chapter summarizes the impact of the US presidential election of 1912 on US antitrust actions.

    Chapter Four, “Peak Antitrust and the Chicago School,” explains how, during the decades after World War II, strong antitrust laws were viewed as an essential component of democracy, and how the European Community (which later became the European Union) adopted antitrust laws modelled on those of the USA. However, in the mid-1960s, scholars of the Chicago School (in particular Robert Bork) developed the theory that antitrust measures were meant only to protect consumer welfare, and thus that no antitrust actions could be taken unless there was evidence that consumers were being harmed, that is, that a dominant company was raising prices. Harm to competitors or suppliers was no longer sufficient for antitrust enforcement. As Wu shows, this “was really laissez-faire reincarnated.”

    Chapter Five, “The Last of the Big Cases,” discusses two of the last really large US antitrust cases. The first was the breakup of the regulated de facto telephone monopoly, AT&T, initiated in 1974. The second was the case against Microsoft, which started in 1998 and ended in 2001 with a settlement that many consider to be a negative turning point in US antitrust enforcement. (A third big case, the 1969-1982 case against IBM, is discussed in Chapter Six.)

    Chapter Six, “Chicago Triumphant,” documents how the US Supreme Court adopted Bork’s “consumer welfare” theory of antitrust, leading to weak enforcement. As a consequence, “In the United States, there have been no trustbusting or ‘big cases’ for nearly twenty years: no cases targeting an industry-spanning monopolist or super-monopolist, seeking the goal of breakup” (110). Thus, “In a run that lasted some two decades, American industry reached levels of industry concentration arguably unseen since the original Trust era. A full 75 percent of industries witnessed increased concentration from the years 1997 to 2012” (115). Wu gives concrete examples: the old AT&T monopoly, which had been broken up, has reconstituted itself; there are only three large US airlines; there are three regional monopolies for cable TV; etc. But the greatest failure “was surely that which allowed the almost entirely uninhibited consolidation of the tech industry into a new class of monopolists” (118).

    Chapter Seven, “The Rise of the Tech Trusts,” explains how the Internet morphed from a very competitive environment into one dominated by large companies that buy up any threatening competitor. “When a dominant firm buys a nascent challenger, alarm bells are supposed to ring. Yet both American and European regulators found themselves unable to find anything wrong with the takeover [of Instagram by Facebook]” (122).

    The Conclusion, “A Neo-Brandeisian Agenda,” outlines Wu’s thoughts on how to address current issues regarding dominant market power. These include renewing the well-known practice of reviewing mergers; opening up the merger review process to public comment; renewing the practice of bringing major antitrust actions against the biggest companies; breaking up the biggest monopolies; adopting the market investigation law and practices of the United Kingdom; and recognizing that the goal of antitrust is not just to protect consumers against high prices, but also to protect competition per se, that is, to protect competitors, suppliers, and democracy itself. “By providing checks on monopoly and limiting private concentration of economic power, the antitrust law can maintain and support a different economic structure than the one we have now. It can give humans a fighting chance against corporations, and free the political process from invisible government. But to turn the ship, as the leaders of the Progressive era did, will require an acute sensitivity to the dangers of the current path, the growing threats to the Constitutional order, and the potential of rebuilding a nation that actually lives up to its greatest ideals” (139).

    In other words, something is rotten in the state of the Internet: it is marked by the “collection and exploitation of personal data”; it has “recently been used to erode privacy and to increase the concentration of economic power, leading to increasing income inequalities”; and it has led to “erosion of the press, leading to erosion of democracy.” These developments are due to the fact that “US policies that ostensibly promote the free flow of information around the world, the right of all people to connect to the Internet, and free speech, are in reality policies that have, by design, furthered the geo-economic and geo-political goals of the US, including its military goals, its imperialist tendencies, and the interests of large private companies”; and to the fact that “vibrant government institutions deliberately transferred power to US corporations in order to further US geo-economical and geo-political goals.”

    Wu’s call for action is not just opportune, but necessary and important; at the same time, it is not sufficient.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in Internet governance issues since the inception of the Internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about Internet governance issues for The b2o Review Digital Studies magazine.


  • John Pat Leary — Innovation and the Neoliberal Idioms of Development

    John Pat Leary

    “Human creativity and human capacity is limitless,” said the Bangladeshi economist Muhammad Yunus to a darkened room full of rapt Austrian elites. The setting was TEDx Vienna, and Yunus’s address bore all the trademark features of TED’s missionary version of technocratic idealism. “We believe passionately in the power of ideas to change attitudes, lives and, ultimately, the world,” goes the TED mission statement, and this philosophy is manifest in the familiar form of Yunus’s talk (TED.com). The lighting was dramatic, the stage sparse, and the speaker alone on stage, with only his transformative ideas for company. The speech ends with the zealous technophilia that, along with the minimalist stagecraft and quaint faith in the old-fashioned power of lectures, defines this peculiar genre. “This is the age where we all have this capacity of technology,” Yunus declares: “The question is, do we have the methodology to use these capacities to address these problems?… The creativity of human beings has to be challenged to address the problems we have made for ourselves. If we do that, we can create a whole new world—we can create a whole new civilization” (Yunus 2012). Yunus’s conviction that now, finally and for the first time, we can solve the world’s most intractable problems is not itself new. Instead, what TED Talks like this offer is a new twist on the idea of progress we have inherited from the nineteenth century. And with his particular focus on the global South, Yunus riffs on a form of that old faith, which might seem like a relic of the twentieth: “development.” What is new, then, about Yunus’s articulation of these old faiths? It comes from the TED Talk’s combination of prophetic individualism and technophilia: this is the ideology of “innovation.”

    “Innovation”: a ubiquitous word with a slippery meaning. “An innovation is a novelty that sticks,” writes Michael North in Novelty: A History of the New, pointing out the basic ontological problem of the word: if it sticks, it ceases to be a novelty. “Innovation, defined as a widely accepted change,” he writes, “thus turns out to be the enemy of the new, even as it stands for the necessity of the new” (North 2013, 4). Originally a pejorative term for religious heresy, in its common use today “innovation” is used as a synonym for what would have once been called, especially in America, “futurity” or “progress.” In a policy paper entitled “A Strategy for American Innovation,” then-President Barack Obama described innovation as an American quality, in which the blessings of Providence are revealed no longer by the acquisition of territory, but rather by the accumulation of knowledge and technologies: “America has long been a nation of innovators. American scientists, engineers and entrepreneurs invented the microchip, created the Internet, invented the smartphone, started the revolution in biotechnology, and sent astronauts to the Moon. And America is just getting started” (National Economic Council and Office of Science and Technology Policy 2015, 10).

    In the Obama administration’s usage, we can see several of the common features of innovation as an economic ideology, some of which are familiar to students of American exceptionalism. First, it is benevolent. Second, it is always “just getting started,” a character of newness constantly being renewed. Third, like “progress” and “development” have been, innovation is a universal, benevolent abstraction made manifest through material, economic accomplishments. But even more than “progress,” which could refer to political and social accomplishments like universal suffrage or the polio vaccine, or “development,” which has had communist and social democratic variants, innovation is inextricable from the privatized market that animates it. For this reason, Obama can treat the state-sponsored moon landing and the iPhone as equivalent achievements. Finally, even if it belongs to the nation, the capacity for “innovation” really resides in the self. Hence Yunus’s faith in “creativity,” and Obama’s emphasis on “innovators,” the protagonists of this heroic drama, rather than the drama itself.

    This essay explores the individualistic, market-based ideology of “innovation” as it circulates from the English-speaking first world to the so-called third world, where it supplements, when it does not replace, what was once more exclusively called “development.” I am referring principally to projects that often go under the name of “social innovation” (or, relatedly, “social entrepreneurship”), which Stanford University’s Business School defines as “a novel solution to a social problem that is more effective, efficient, sustainable, or just than current solutions” (Stanford Graduate School of Business). “Social innovation” often advertises itself as “market-based solutions to poverty,” proceeding from the conviction that it is exclusion from the market, rather than the opposite, that causes poverty. The practices grouped under this broad umbrella include projects as different as the micro-lending banks, for which Yunus shared the 2006 Nobel Peace Prize; smokeless, cell-phone-charging cookstoves for South Asia’s rural peasantry; latrines that turn urine into electricity, for use in rural villages without running water; and the edtech academic and TED honoree Sugata Mitra’s “self-organized learning environment” (SOLE), which appears to consist mostly of giving internet-enabled laptops to poor children and calling it a day.

    The discourse of social innovation is a theory about economic process and also a story of the (first-world) self. The ideal innovator that emerges from the examples to follow is a flexible, socially autonomous individual, whose creativity and prophetic vision, nurtured by the market, can refashion social inequalities as discrete “problems” that simply await new solutions. Guided by a faith in the market but also shaped by the austerity that has slashed the budgets of humanitarian and development institutions worldwide, social innovation ideology marks a retreat from the social vision of development. Crucially, the ideologues of innovation also answer a post-development critique of Western arrogance with a generous, even democratic spirit. That is, one of the reasons that “innovation” has come to supersede “development” in the vocabulary of many humanitarian and foreign aid agencies is that innovation ideology’s emphasis on individual agency serves as a response to the legitimate charges of condescension and elitism long directed at Euro-American development agencies. But compromising the social vision of development also means jettisoning the ideal of global equality that, however deluded, dishonest, or self-serving it was, also came with it. This brings us to a critical feature of innovation thinking that is often disguised by the enthusiasm of its tech-economy evangelizers: it is in fact a pessimistic ideal of social change. The ideology of innovation, with its emphasis on processes rather than outcomes, and individual brilliance over social structures, asks us to accommodate global inequality, rather than challenge it. It is a kind of idealism, therefore, well suited to our dispiriting neoliberal moment, where the sense of possibility seems to have shrunk.

    My objective is not to evaluate these efforts individually, nor even to criticize their practical usefulness as solution-oriented projects (not all of them, anyway). Indeed, in response to the difficult, persistent question, “What is the alternative?” it is easy, and not terribly helpful, to simply answer “world socialism,” or at least “import-substitution industrialization.” My objective is perhaps more modest: to define the ideology of “innovation” that undergirds these projects, and to dissect the Anglo-American ego-ideal that it circulates. As an ideology, innovation is driven by a powerful belief, not only in technology and its benevolence, but in a vision of the innovator: the autonomous visionary whose creativity allows him to anticipate and shape capitalist markets.

    An Orthodoxy of Unorthodoxy: Innovation, Revolution, and Salvation

    Given the immodesty of the innovator archetype, it may seem odd that innovation ideology could be considered pessimistic. On its own terms, of course, it is not; but when measured against the utopian ambitions and rhetoric of many “social innovators” and technology evangelists, their actual prescriptions appear comparatively paltry. Human creativity is boundless, and everyone can be an innovator, says Yunus; this is the good news. The bad news, unfortunately, is that not everyone can have indoor plumbing or public lighting. Consider the “pee-powered toilet” sponsored by the Gates Foundation. The outcome of inadequate sewerage in the underdeveloped world has not been changed; only the process of its provision has been innovated (Smithers 2015). This combination of evangelical enthusiasm and piecemeal accommodation becomes clearer, however, when we excavate innovation’s tangled history, which, by necessity, the word seems at first glance to lack entirely.

    Figure 1. A demonstration toilet, capable of powering a light, or even a mobile phone, at the University of the West of England (photograph: UWE Bristol)

    For most of its history, the word has been synonymous with false prophecy and dissent: initially, it was linked to deceitful promises of deliverance, either from divine judgment or more temporal forms of punishment. For centuries, this was the most common usage of this term. The charge of innovation warned against either the possibility or the wisdom of remaking the world, and disciplined those “fickle changelings and poor discontents,” as the King says in Shakespeare’s Henry IV, grasping at “hurly-burly innovation.” Religious and political leaders tarred self-styled prophets or rebels as heretical “innovators.” In his 1634 Institution of the Christian Religion, for example, John Calvin warned that “a desire to innovate all things without punishment moveth troublesome men” (Calvin 1763, 716).  Calvin’s notion that innovation was both a political and theological error reflected, of course, his own jealously kept share of temporal and spiritual authority. For Thomas Hobbes, “innovators” were venal conspirators, and innovation a “trumpet of war and sedition.” Distinguishing men from bees—which Aristotle, Hobbes says, wrongly considers a political animal like humans—Hobbes laments the “contestation of honour and preferment” that plagues non-apiary forms of sociality. Bees only “talk” when and how they have to; men and women, by contrast, chatter away in their vanity and ambition (Hobbes 1949, 65-67). The “innovators” of revolutionary Paris, Edmund Burke thundered later, “leave nothing unrent, unrifled, unravaged, or unpolluted with the slime of their filthy offal” (1798, 316-17). Innovation, like its close relative “revolution,” was upheaval, destruction, the reversal of the right order of things.

    Figure 2: The Innovation Tango, in The Evening World

    As Godin (2015) shows in his history of the concept in Europe, in the late nineteenth century “innovation” began to be recuperated as an instrumental force in the world, which was key to its transformation into the affirmative concept we know now. Francis Bacon, the philosopher and Lord Chancellor under King James I, was what we might call an “early adopter” of this new positive instrumental meaning. How, he asked, could Britons be so reverent of custom and so suspicious of “innovation,” when their Anglican faith was itself an innovation? (Bacon 1844, 32). Instead of being an act of sudden renting, rifling, and heretical ravaging, “innovation” became a process of patient material improvement.  By the turn of the last century, the word had mostly lost its heretical associations. In fact, “innovation” was far enough removed from wickedness or malice in 1914 that the dance instructor Vernon Castle invented a modest American version of the tango that year and named it “the Innovation.” The partners never touched each other in this chaste improvement upon the Argentine dance. “It is the ideal dance for icebergs, surgeons in antiseptic raiment and militant moralists,” wrote Marguerite Marshall (1914), a thoroughly unimpressed dance critic in the New York Evening World. “Innovation” was then beginning to assume its common contemporary form in commercial advertising and economics, as a synonym for a broadly appealing, unthreatening modification of an existing product.

    Two years earlier, the Austrian-born economist Joseph Schumpeter published his landmark text The Theory of Economic Development, where he first used “innovation” to describe the function of the “entrepreneur” in economic history (1934, 74). For Schumpeter, it was in the innovation process that capitalism’s tendency towards tumult and creative transformation could be seen. He understood innovation historically, as a process of economic transformation, but he also singled out an innovator responsible for driving the process. In his 1942 book Capitalism, Socialism, and Democracy, Schumpeter returned to the idea in the midst of war and the threat of socialism, which gave the concept a new urgency. To innovate, he wrote, was “to reform or revolutionize the pattern of production by exploiting an invention or, more generally, an untried technological possibility for producing a new commodity or producing an old one in a new way, by opening up a new source of supply of materials or a new outlet for products, by reorganizing an industry and so on” (Schumpeter 2003, 132). As Schumpeter goes on to acknowledge, this transformative process is hard to quantify or professionalize. The elusiveness of his theory of innovation comes from a central paradox in his own definition of the word: it is both a world-historical force and a quality of personal agency, both a material process and a moral characteristic. It was a historical process embodied in heroic individuals he called “New Men,” and exemplified in non-commercial examples, like the “expressionist liquidation of the object” in painting (126). To innovate was also to do, at the local level of the production process, what Marx and Engels credit the bourgeoisie as a class for accomplishing historically: revolutionizing the means of production, sweeping away what is old before it can ossify. Schumpeter told a different version of this story, though. For Marx, capitalist accumulation is a dialectical historical process, but what Schumpeter called innovation was a drama driven by a particular protagonist: the entrepreneur.

    In a sympathetic 1943 essay about Schumpeter’s theory of innovation, the Marxist economist Paul Sweezy criticized the centrality Schumpeter gave to individual agency. Sweezy’s interest in the concept is unsurprising, given how Schumpeter’s treatment of capitalism as a dynamic but destructive historical force draws upon Marx’s own. It is therefore not “innovation” as a process to which Sweezy objects, but the mythologized figure of the entrepreneurial “innovator,” the social type driving the process. Rather than a free agent, powering the economy’s inexorable progress, “we may instead regard the typical innovator as the tool of the social relations in which he is enmeshed and which force him to innovate on pain of elimination,” he writes (Sweezy 1943, 96). In other words, it is capital accumulation, not the entrepreneurial function, and certainly not some transcendent ideal of creativity and genius, that drives innovation. And while the innovator (the successful one, anyway) might achieve a pantomime of freedom within the market, for Sweezy this agency is always provisional, since innovation is a conditional economic practice of historically constituted subjects in a volatile and pitiless market, not a moral quality of human beings. Of course, Sweezy’s critique has not won the day. Instead, a particularly heroic version of the Schumpeterian sense of innovation as a human, moral quality liberated by the turbulence of capitalist markets is a mainstream feature of institutional life. An entire genre of business literature exists to teach the techniques of “managing creativity and innovation in the workplace” (The Institute of Leadership and Management 2007), to uncover the “map of innovation” (O’Connor and Brown 2003), to nurture the “art of innovation” (Kelley 2001), to close the “circle of innovation” (Peters 1999), to collect the recipes in “the innovator’s cookbook” (Johnson 2011), to give you the secrets of “the sorcerers and their apprentices” (Moss 2011)—business writers leave virtually no hackneyed metaphor for entrepreneurial creativity, from the domestic to the occult, untouched.

    As its contemporary proliferation shows, innovation has never quite lost its association with redemption and salvation, even if it is no longer used to signify their false promises. As Lepore (2014) has argued about its close cousin, “disruption,” innovation can be thought of as a secular discourse of economic and personal deliverance. Even as the concept became rehabilitated as procedural, its deviant and heretical connotations were common well into the twentieth century, when Emma Goldman (2000) proudly and defiantly described anarchy as an “uncompromising innovator” that enraged the princes and oligarchs of the world. Its seeming optimism, which is inseparable from the disasters from which it promises to deliver us, is thus best considered as a response to a host of persistent anxieties of twenty-first-century life: economic crisis, violence and war, political polarization, and ecological collapse. Yet the word has come to describe the reinvention or recalibration of processes, whether algorithmic, manufacturing, marketing, or otherwise. Indeed, even Schumpeter regarded the entrepreneurial function as basically technocratic. As he put it in one essay, “it consists in getting things done” (Schumpeter 1941, 151).[1] However, as the book titles above make clear, the entrepreneurial function is also a romance. If capitalism was to survive and thrive, Schumpeter suggested, it needed to do more than produce great fortunes: it had to excite the imagination. Otherwise, it would simply calcify into the very routines it was charged with overthrowing. Innovation discourse today remains, paradoxically, both procedural and prophetic. The former meaning lends innovation discourse its piecemeal, solution-oriented accommodation to inequality. In this latter sense, though, the word retains some of the heretical rebelliousness of its origins. We are familiar with the lionization of the tech CEO as a non-conforming or “disruptive” visionary, who sets out to “move fast and break things,” as the famous Facebook motto went. The archetypal Silicon Valley innovator is forward-looking and rebellious, regardless of how we might characterize the results of his or her innovation—a social network, a data mining scheme, or Uber-for-whatever. The dissenting meaning of innovation is at play in the case of social innovation, as well, given its aim to address social inequalities in significant new ways. So, in spite of innovation’s implicit bias towards the new, the history of the word reminds us that its present-day meaning is seeded with its older ones. Innovation’s new secular, instrumental meaning is therefore not a break with its older, prohibited, religious connotation, but an embellishment of it: what is described here is a spirit, an ideal, an ideological rescrambling of the word’s older heterodox meaning to suit a new orthodoxy.

    The Innovation of Underdevelopment: From Exploitation to Exclusion

    In his 1949 inaugural address, which is often credited with popularizing the concept of “development,” Harry Truman called for “a bold new program for making the benefits of our scientific advances and industrial progress available for the improvement and growth of underdeveloped areas” (Truman 1949).[2] “Development” in U.S. modernization theory was defined, writes Nils Gilman, by “progress in technology, military and bureaucratic institutions, and the political and social structure” (2003, 3). It was a post-colonial version of progress that defined itself as universal and placeless; all underdeveloped societies could follow a similar path. As Kristin Ross argues, development in the vein of post-war modernization theory anticipated a future “spatial and temporal convergence” (1996, 11-12). Emerging in the collapse of European colonialism, the concept’s positive value was that it positioned the whole world, south and north, as capable of the same level of social and technical achievement. As Ross suggests, however, the future “convergence” that development anticipates is a kind of Euro-American ego-ideal—the rest of the world’s brightest possible future resembled the present of the United States or western Europe. As Gilman puts it, the modernity development looked forward to was “an abstract version of what postwar American liberals wished their country to be.”

    Emerging as it did in the decline, and then in the wake, of Europe’s African, Asian, and American empires, mainstream mid-century writing on development trod carefully around the issue of exploitation. Gunnar Myrdal, for example, was careful to distinguish the “dynamic” term “underdeveloped” from its predecessor, “backwards” (1957, 7). Rather than view the underdeveloped as static wards of more “advanced” metropolitan countries, in other words, the preference was to view all peoples as capable of historical dynamism, even if they occupied different stages on a singular timeline. Popularizers of modernization theory like Walt Rostow described development as a historical stage that could be measured by certain material benchmarks, like per-capita car ownership. But it also required immaterial, subjective cultural achievements, as Josefina Saldaña-Portillo, Jorge Larrain, and Molly Geidel have pointed out. In his well-known Stages of Economic Growth, Rostow emphasized how achieving modernity required the acquisition of what he called “attitudes,” such as a “Newtonian” worldview and an acclimation to “a life of change and specialized function” (1965, 26). His emphasis on cultural attributes—prerequisites for starting development that are also consequences of achieving it—is an example of the development concept’s circular, often self-contradictory meanings. “Development” was both a process and its end point—a nation undergoes development in order to achieve development, something Cowen and Shenton call the “old utilitarian tautology of development” (1996, 4), in which a precondition for achieving development would appear to be its presence at the outset.

    This tautology eventually circles back to what Nustad (2007, 40) calls the lingering colonial relationship of trusteeship, the original implication of colonial “development.” For post-colonial critics of developmentalism, the very notion of “development” as a process unfolding in time is inseparable from this colonial relation, given the explicit or implicit Euro-American telos of most, if not all, development models. Where modernization theorists “naturalized development’s emergence into a series of discrete stages,” Saldaña-Portillo (2003, 27) writes, the Marxist economists and historians grouped loosely under the heading of “dependency theory” spatialized global inequality, using a model of “core” and “periphery” economies to counter the model of “traditional” and “modern” ones. Two such theorists, Andre Gunder Frank and Walter Rodney, framed their critiques of development with the grammar of the word itself. Like “innovation,” “development” is a progressive noun, which indicates an ongoing process in time. Its temporal and agential imprecision—when will the process ever end? Can it? Who is in charge?—helps to lend development a sense of moral and political neutrality, which it shares with “innovation.” Frank titled his most famous work on the subject The Development of Underdevelopment, the title emphasizing the point that underdevelopment was not a mere absence of development, but capitalist development’s necessary product. Rodney’s book How Europe Underdeveloped Africa did something similar, by making “underdevelop” into a transitive verb, rather than treating “underdevelopment” as a neutral condition.[3]

    As Luc Boltanski and Eve Chiapello argue, this language of neutrality became a hallmark of European accounts of global poverty and underdevelopment after the 1960s. Their survey of economics and development literature charts the rise of the category of “exclusion” (and its opposite number, “empowerment”) and the gradual disappearance of “exploitation” from economic and humanitarian literature about poverty. No single person, firm, institution, party, or class is responsible for “exclusion,” Boltanski and Chiapello explain. Reframing exploitation as exclusion therefore “permits identification of something negative without proceeding to level accusations. The excluded are no one’s victims” (2007, 347 & 354). Exploitation is a circumstance that enriches the exploiter; the poverty that results from exclusion, however, is a misfortune profiting no one. Consider, as an example, the mission statement of the Grameen Foundation, which grew out of Yunus’s Grameen Bank. It remains one of the leading microlenders in the world, devoted to bringing impoverished people in the global South, especially women, into the financial system through the provision of small, low-collateral loans. “Empowerment” and “innovation” are two of its core values. “We champion innovation that makes a difference in the lives of the poor,” runs one plank of the Foundation’s mission statement (Grameen Foundation India nd). “We seek to empower the world’s poor, especially the poorest women.” “Innovation” is often not defined in such statements, but rather treated as self-evidently meaningful. Like “development,” innovation is a perpetually ongoing process, with no clear beginning or end. One undergoes development to achieve development; innovation, in turn, is the pursuit of innovation, and as soon as one innovates, the innovation thus created soon ceases to be an innovation. This wearying semantic circle helps evacuate the process of its power dynamics, of winners and losers. As Evgeny Morozov (2014, 5) has argued about what he calls “solutionism,” the celebration of technological and design fixes approaches social problems like inequality, infrastructural collapse, inadequate housing, etc.—which might be regarded as results of “exploitation”—as intellectual puzzles for which we simply have to discover the solutions. The problems are not political; rather, they are conceptual: we either haven’t had the right ideas, or else we haven’t applied them right.[4] Grameen’s mission, to bring the world’s poorest into financial markets that currently do not include them, relies on a fundamental presumption: that the global financial system is something you should definitely want to be a part of.[5] But as Banerjee et al. (2015, 23) have argued, to the extent that microcredit programs offer benefits, they mostly accrue to already profitable businesses. The broader social benefits touted by the programs—women’s “empowerment,” more regular school attendance, and so on—were either negligible or non-existent. And as a local government official in the Indian state of Andhra Pradesh told the New York Times in 2010, microloan programs in his district had not proven to be less exploitative than their predecessors, only more remote. “The money lender lives in the community,” he said. “At least you can burn down his house” (Polgreen and Bajaj 2010).

    Humanitarian Innovation and the Idea of “The Poor”

    Yunus’s TED Talk and the Grameen Foundation’s mission statement draw on the twinned ideal of innovation as procedure and salvation, and in so doing they recapitulate development’s modernist faith in the leveling possibilities of technology, albeit with the individualist, market-based zeal that is particular to neoliberal innovation thinking. “Humanitarian innovation” is a growing subfield of international development theory, which, like “social innovation,” encourages market-based solutions to poverty. Most scholars date the concept to the 2009 fair held by ALNAP (Active Learning Network for Accountability and Performance in Humanitarian Action), an international humanitarian aid agency that measures and evaluates aid programs. Two of its leading academic proponents, Alexander Betts and Louise Bloom of the Oxford Humanitarian Innovation Project (HIP), define it thusly:

    “Innovation is the way in which individuals or organizations solve problems and create change by introducing new solutions to existing problems. Contrary to popular belief, these solutions do not have to be technological and they do not have to be transformative; they simply involve the adaptation of a product or process to context. ‘Humanitarian’ innovation may be understood, in turn, as ‘using the resources and opportunities around you in a particular context, to do something different to what has been done before’ to solve humanitarian challenges” (Betts and Bloom 2015, 4).[6]

    Here and elsewhere, the HIP hews closely to conventional Schumpeterian definitions of the term, which indeed inform most uses of “innovation” in the private sector and elsewhere: as a means of “solving problems.” Read in this light, “innovation” might seem rather innocuous, even banal: a handy way of naming a human capacity for adaptation, improvisation, and organization. But elsewhere, the authors describe humanitarian innovation as an urgent response to very specific contemporary problems that are political and ecological in nature. “Over the past decade, faced with growing resource constraints, humanitarian agencies have held high hopes for contributions from the private sector, particularly the business community,” they write. Compounding this climate of economic austerity that derives from “growing resource constraints” is an environmental and geopolitical crisis that means “record numbers of people are displaced for longer periods by natural disasters and escalating conflicts.” But despite this combination of violence, ecological degradation, and austerity, there is hope in technology: “new technologies, partners, and concepts allow humanitarian actors to understand and address problems quickly and effectively” (Betts and Bloom 2014, 5-6).

    The trope of “exclusion,” and its reliance on a rather anodyne vision of the global financial system as a fair sorter of opportunities and rewards, is crucial to a field that counsels collaboration with the private sector. Indeed, humanitarian innovators adopt a financial vocabulary of “scaling,” “stakeholders,” and “risk” in assessing the dangers and effectiveness (the “cost” and “benefits”) of particular tactics or technologies. In one paper on entrepreneurial activity in refugee camps, de la Chaux and Haugh make an argument in keeping with innovation discourse’s combination of technocratic proceduralism and utopian grandiosity: “Refugee camp entrepreneurs reduce aid dependency and in so doing help to give life meaning for, and confer dignity on, the entrepreneurs,” they write, emphasizing in their first clause the political and economic austerity that conditions the “entrepreneurial” response (2014, 2). Relying on an exclusion paradigm, the authors point to a “lack of functioning markets” as a cause of poverty in the camps. By “lack of functioning markets,” de la Chaux and Haugh mean lack of capital—but “market,” in this framework, becomes simply an institutional apparatus which one enters and in which one is adjudicated on one’s merits, rather than a field of conflict in which one labors in a globalized class society. At the same time, “innovation” that “empowers” the world’s “poorest” also inherits an enduring faith in technology as a universal instrument of progress. One of the preferred terms for this faith is “design”: a form of techne that, two of its most famous advocates argue, “addresses the needs of the people who will consume a product or service and the infrastructure that enables it” (Brown and Wyatt 2010).[7] The optimism of design proceeds from the conviction that systems—water safety, nutrition, etc.—fail because they are designed improperly, without input from their users. De la Chaux addresses how ostensibly temporary camps grow into permanent settlements, using Jordan’s Za’atari refugee camp near the Syrian border as an example. Her elegant solution to the infrastructural problems these under-resourced and overpopulated communities experience? “Include urban planners in the early phases of the humanitarian emergency to design out future infrastructure problems,” as if the political question of resources is merely secondary to technical questions of design and expertise (de la Chaux and Haugh 2014, 19; de la Chaux 2015).

    In these examples, we can see once again how the ideal type of the “innovator” or entrepreneur emerges as the protagonist of the historical and economic drama unfolding in the peripheral spaces of the world economy. The humanitarian innovator is a flexible, versatile, pliant, and autonomous individual, whose potential is realized in the struggle for wealth accumulation, but whose private zeal for accumulation is thought to benefit society as a whole.[8] Humanitarian or social innovation discourse emphasizes the agency and creativity of “the poor,” by discursively centering the authority of the “user” or entrepreneur rather than the agency or the consumer. Individual qualities like purpose, passion, creativity, and serendipity are mobilized in the service of broad social goals. Yet while this sort of individualism is central in the literature of social and humanitarian innovation, it is not itself a radically new “innovation.” It instead recalls a pattern that Molly Geidel has recently traced in the literature and philosophy of the Peace Corps. In Peace Corps memoirs and in the agency’s own literature, she writes, the “romantic desire” for salvation and identification with the excluded “poor” was channeled into the “technocratic language of development” (2015, 64).

    Innovation’s emphasis on the intellectual, spiritual, and creative faculties of the single entrepreneur as historically decisive recapitulates in these especially individualistic terms a persistent thread in Cold War development thinking: its emphasis on cultural transformations as prerequisites for economic ones. At the same time, humanitarian innovation’s anti-bureaucratic ethos of autonomy and creativity is often framed as a critique of “developmentalism” as a practice and an industry. It is a response to criticisms of twentieth-century development as a form of neocolonialism: as too growth-dependent, too detached from local needs, too fixated on big projects, too hierarchical. Consider the development agency UNICEF, whose 2014 “Innovation Annual Report” embraces a vocabulary and funding model borrowed from venture capital. “We knew that we needed to help solve concrete problems experienced by real people,” reads the report, “not just building imagined solutions at our New York headquarters and then deploy them” (UNICEF 2014, 2). Rejecting a hierarchical model of modernization, in which an American developmentalist elite “deploys” its models elsewhere, UNICEF proposes “empowerment” from within. And in place of “development,” as a technical process of improvement from a belated historical and economic position of premodernity, there is “innovation,” the creative capacity responsive to the desires and talents of the underdeveloped.

    As in the social innovation model promoted by the Stanford Business School and the ideal of “empowerment” advanced by Grameen, the literature of humanitarian innovation sees “the market” as a neutral field. The conflict between the private sector, the military, and other non-humanitarian actors in the process of humanitarian innovation is mitigated by considering each as an equivalent “stakeholder,” with a shared “investment” in the enterprise and its success; abuse of the humanitarian mission by profit-seeking and military “stakeholders” can be prevented via the fabrication of “best practices” and “voluntary codes of conduct” (Betts and Bloom 2015, 24). One report, produced for ALNAP along with the Humanitarian Innovation Fund, draws on Everett Rogers’s canonical theory of innovation diffusion. Rogers taxonomizes and explains the ways innovative products or methods circulate, from the most forward-thinking “early adopters” to the “laggards” (1983, 247-250). The ALNAP report does grapple with the problems of importing profit-seeking models into humanitarian work, however. “In general,” write Obrecht and Warner (2014, 80-81), “it is important to bear in mind that the objective for humanitarian scaling is improvement to humanitarian assistance, not profit.” Here, the problem is explained as one of “diffusion” and institutional biases in non-profit organizations, not a conflict of interest or a failing of the private market. In the humanitarian sector, they write, “early adopters” of innovations developed elsewhere are comparatively rare, since non-profit workers tend to be biased towards techniques and products they develop themselves. However, as Wendy Brown (2015, 129) has recently argued about the concepts of “best practices” and “benchmarking,” the problem is not necessarily that the goals being set or practices being emulated are intrinsically bad. The problem lies in “the separation of practices from products,” or in other words, the notion that organizational practices translate seamlessly across business, political, and knowledge enterprises, and that different products—market dominance, massive profits, reliable electricity in a rural hamlet, basic literacy—can be accomplished via practices imported from the business world.

    Again, my objective here is not to evaluate the success of individual initiatives pursued under this rubric, nor to castigate individual humanitarian aid projects as irredeemably “neoliberal” and therefore beyond the pale. To do so basks a bit too easily in the comfort of condemnation that the pejorative “neoliberal” offers the social critic, and it runs the risk, as Ferguson (2009, 169) writes, of nostalgia for the era of “old-style developmental states,” which were mostly capitalist as well, after all.[9] Instead, my point is to emphasize the political work that “innovation” as a concept does: it depoliticizes the resource scarcity that makes it seem necessary in the first place by treating the private market as a neutral arbiter or helpful partner rather than an exploiter, and it does so by disavowing the power of a Western subject through the supposed humility and democratic patina of its rhetoric. For example, the USAID Development Innovation Ventures, which seeds projects that will win support from private lenders later, stipulates that “applicants must explain how they will use DIV funds in a catalytic fashion so that they can raise needed resources from sources other than DIV” (USAID 2017). The hoped-for innovation here, it would seem, is the skill with which the applicants accommodate the scarcity of resources, and the facility with which they commercialize their project. One funded project, an initiative to encourage bicycle helmets in Cambodia, “has the potential to save the Cambodian government millions of dollars over the next 10 years,” the description proclaims. But obviously, just because something saves the Cambodian government millions doesn’t mean there is a net gain for the health and safety of Cambodians. It could simply allow the Cambodian government to give more money away to private industry or buy $10 million worth of new weapons to police the Laotian border. “Innovation,” here, requires an adjustment to austerity.

    Adjustment, often reframed positively as “resilience,” is a key concept in this literature. In another report, Betts, Bloom, and Weaver (2015, 8) single out a few exemplary innovators from the informal economy of the displaced person’s camp. They include tailors in a Syrian camp’s outdoor market; the Somali owner of an internet café in a Kenyan refugee camp; an Ethiopian man who repairs refrigerators with salvaged air conditioners and fans; and a Ugandan who built a video-game arcade in a settlement near the Rwandan border. This man, identified only as Abdi, has amassed a collection of second-hand televisions and game consoles he acquired in Kampala, the Ugandan capital. “Instead of waiting for donors I wanted to make a living,” says Abdi in the report, exemplifying the values of what Betts, Bloom, and Weaver call “bottom-up innovation” by the refugee entrepreneur. Their assessment is a generous one that embraces the ingenuity and knowledge of displaced and impoverished people affected by crisis. Top-down or “sector-wide” development aid, they write, “disregards the capabilities and adaptive resourcefulness that people and communities affected by conflict and disaster often demonstrate” (2015, 2). In this report, refugees are people of “great resilience,” whose “creativity” makes them “change makers.” As Julian Reid and Brad Evans write, we apply the word “resilient” to a population “insofar as it adapts to rather than resists the conditions of its suffering in the world” (2014, 81). The discourse of humanitarian innovation makes the same concession to the inevitability of the structural conditions that make such resilience necessary in the first place. Nowhere is it suggested that refugee capitalists might be other than benevolent, or that inclusion in circuits of national and transnational capital might exacerbate existing inequalities, rather than transcend them. Furthermore, humanitarian innovation advocates never argue that market-based product and service “innovation” is, in a refugee context, beneficial to the whole, given the paucity of employment and services in affected communities; this would at least be an arguable point. The problem is that the question is never even asked. The market is like oxygen.

    Conclusion: The TED Talk and the Innovation Romance

    In 2003, I visited a recently settled barrio—one could call it a “shantytown”—perched on a hillside high above the east side of Caracas. I remember vividly a wooden, handmade press, ringed with barbed wire scavenged from a nearby business, that its owner, a middle-aged woman newly arrived in the capital, used to crush sugar cane into juice. It was certainly an innovation, by any reasonable definition: a novel, creative solution to a problem of scarcity, a new process for doing something. I remember being deeply impressed by the device, which I found brilliantly ingenious. What I never thought to call it, though, was a “solution” to its owner’s poverty. Nor, I am sure, did she; she lived in a hard-core chavista neighborhood, where dispossessing the country’s “oligarchs” would have been offered as a better innovation—in the old Emma Goldman sense. My point, then, is not that individual ingenuity, creativity, fearlessness, hard work, and resistance to the impossible demands that transnational capital has placed on people like the video-game entrepreneur in Uganda, or that woman in Caracas, are disreputable things to single out and praise. Quite the contrary: my objection is to the capitulation to their exploitation that is smuggled in with this admiration.

    I have argued that “innovation” is, at best, a vague concept asked to accommodate far too much in its combination of heroic and technocratic meanings. Innovation, in its modern meaning, is about revolutionizing “process” and technique: this often leaves outcomes unexamined and unquestioned. The outcome of that innovative sugar cane press in Caracas is still a meager income selling juice in a perilous informal marketplace. The promiscuity of innovation’s use also makes it highly mobile and subject to abuse, as even enthusiastic users of the concept, like Betts and Bloom at the Oxford Humanitarian Innovation Project, acknowledge. As they caution, “use of the term in the humanitarian system has lacked conceptual clarity, leading to misuse, overuse, and the risk that it may become hollow rhetoric” (2014, 5). I have also argued that innovation, especially in the context of neoliberal development, must be understood in moral terms, as it makes a virtue of private accumulation and accommodation to scarcity, and it circulates an ego-ideal of the first-world self to an audience of its admirers. It is also an ideological celebration of what Harvey calls the neoliberal alignment of individual well-being with unregulated markets, and what Brown calls “the economization of the self” (2015, 33). Finally, as a response to the enduring crises of third-world poverty, exacerbated by the economic and ecological dangers of the twenty-first century, the language of innovation beats a pessimistic retreat from the ideal of global equality that, in theory at least, development in its manifold forms always held out as its horizon.

    Innovation discourse draws on deep wells—its moral claim is not new, as a reader of The Protestant Ethic and the Spirit of Capitalism will observe. Inspired in part by the example of Benjamin Franklin’s autobiography, Max Weber argued that capitalism in its ascendancy reimagined profit-seeking activities, which might once have been described as avaricious or vulgar, as a virtuous “ethos” (2001, 16-17). Capitalism’s challenge to tradition, Weber argued, demanded some justification; reframing business as a calling or a vocation could help provide one. Capitalism in our time still demands validation not only as a virtuous discipline, but as an enterprise devoted to serving the “common good,” write Boltanski and Chiapello. As they say, “an existence attuned to the requirements of accumulation must be marked out for a large number of actors to deem it worth the effort of being lived” (2007, 10-11). “Innovation” as an ideology marks out this sphere of purposeful living for the contemporary managerial classes. Here, again, the word’s close association with “creativity” is instrumental, since creativity is often thought to be an intrinsic, instinctual human behavior. “Innovating” is therefore not only a business practice that will, as Franklin argued about his own industriousness, improve oneself in the eyes of both man and God. It is also a secular expression of the most fundamental individual and social features of the self—the impulse to understand and to improve the world. This is particularly evident in the discourse of social innovation, which the Center for Social Innovation at Stanford defines as a practice that aims to leverage the private market to solve modern society’s most intractable “problems”: housing, pollution, hunger, education, and so on. When something like world hunger is described as a “problem” in this way, though, international food systems, agribusiness, international trade, land ownership, and other sources of malnutrition disappear. Structures of oppression and inequality simply become discrete “problems” for which no one has yet invented the fix. They are individual nails in search of a hammer, and the social innovator is quite confident that a hammer exists for hunger.

    Microfinance is another one of these hammers. As one economist critical of the microcredit system notes at the beginning of his own book on the subject, “most accounts of microfinance—the large-scale, businesslike provision of financial services to poor people—begin with a story” (Roodman 2012, 1). These stories usually narrate an encounter with a sympathetic third-world subject. For Roodman, the microfinancial stories of hardship and transcendence have a seductive power over their first-world audiences, of which he is legitimately suspicious. As we saw above, Schumpeter’s procedural “entrepreneurial function” is itself also a story of a creative entrepreneur navigating the tempests of modern capitalism. In the postmodern romance of social innovation in the “underdeveloped” world, the Western subject of the drama is both ever-present and constantly disavowed. The TED Talk, with which we began, is in its crude way the most expressive genre of this contemporary version of the entrepreneurial romance.

    Rhetorically transformative but formally archaic—what could be less innovative than a lecture?—the genre of the social innovation TED Talk models innovation ideology’s combination of grandiosity and proceduralism, even as its strict generic conventions—so often and easily parodied—repeatedly undermine the speakers’ regular claims to transcendent breakthroughs. For example, in his TEDx Montreal address, Ethan Kay (2012) opens in the conventional way: with a dire assessment of a monumental, yet easily overlooked, social problem in a third-world country. “If we were to think about the biggest problems affecting our world,” Kay begins, “any socially conscious person would have to include poverty, disease, and climate change. And yet there is one thing that causes all three of these simultaneously, that we pay almost no attention to, even though a very good solution exists.” Having established the scope of the problem, next comes the sentimental identification. The knowledge of this social problem is only possible because of the hospitality and insight of some poor person abroad, something familiar from Geidel’s reading of Peace Corps memoirs and Roodman’s microcredit stories: in Kay’s case, it is in the unelectrified “hut” of a rural Indian woman where, choking on cooking smoke, he realizes the need for a clean-burning indoor cookstove. Then comes the self-deprecating joke, in which the speaker acknowledges his early naivete and establishes his humble capacity for self-reflection. (“I’m just a guy from Cleveland, Ohio, who has trouble cooking a grilled-cheese sandwich,” says Kay, winning a few reluctant laughs.) And then comes the technocratic payoff: when the insight thus acquired is subjected to the speaker’s reason and empathy, a deceptively simple and yet world-making “solution” emerges. Despite the prominent formal place of the underdeveloped character in this genre, the teller of the innovation story inevitably ends up the hero. The throat-clearing self-seriousness, the ritualistic gestures of humility, the promise to the audience of transformative change without inconvenient political consequences, and the faith in technology as a social leveler all perform the TED Talk’s ego-ideal of social “innovation.”

    One of the most successful social innovation TED Talks is Mitra’s tale of the “self-organized learning environment” (SOLE). Mitra won a $1 million prize from TED in 2013 for a talk based on his “hole-in-the-wall” experiment in New Delhi, which tests poor children’s ability to learn autonomously, guided only by internet-enabled laptops and cloud-based adult mentors abroad (TED.com 2013). Mitra’s idea is an excellent example of innovation discourse’s combination of the procedural and the prophetic. In the prophetic mode, he begins: “There was a time when Stone Age men and women used to sit and look up at the sky and say, ‘What are those twinkling lights?’ They built the first curriculum, but we’ve lost sight of those wondrous questions” (Mitra 2013). What gets us to this lofty goal, however, is a comparatively simple process. True to genre, Mitra describes the SOLE as the fruit of a serendipitous discovery. After he and his colleagues cut a hole in the wall that separated his technology firm’s offices from an adjoining New Delhi slum, they placed an Internet-enabled computer in the new common area. When he returned weeks later, Mitra found local children using it expertly. Leaving unsupervised children in a room with a laptop, it turns out, activates innate capacities for self-directed learning stifled by conventional schooling. Mitra promises a cost-effective solution to the problem of primary and secondary education in the developing world—do virtually nothing. “This is done by children without the help of any teacher,” Mitra confidently concludes, sharing a PowerPoint slide of the students’ work. “The teacher only raises the question, and then stands back and admires the answer.”

    When we consider innovation’s religious origins in false prophecy, its current orthodoxy in the discourse of technological evangelism—and, more broadly, in analog versions of social innovation—is often a nearly literal example of Rayvon Fouché’s argument that the formerly colonized, “once attended to by bibles and missionaries, now receive the proselytizing efforts of computer scientists wielding integrated circuits in the digital age” (2012, 62). One of the additional ironies of contemporary innovation ideology, though, is that these populations exploited by global capitalism are increasingly charged with redeeming it—the comfortable denizens of the West need only “stand back and admire” the process driven by the entrepreneurial labor of the newly digital underdeveloped subject. To the pain of unemployment, the selfishness of material pursuits, the exploitation of most of humanity by a fraction, the specter of environmental cataclysm that stalks our future and haunts our imagination, and the scandal of illiteracy, market-driven innovation projects like Mitra’s “hole in the wall” offer next to nothing, while claiming to offer almost everything.

    _____

    John Patrick Leary is associate professor of English at Wayne State University in Detroit and a visiting scholar in the Program in Literary Theory at the Universidade de Lisboa in Portugal in 2019. He is the author of A Cultural History of Underdevelopment: Latin America in the U.S. Imagination (Virginia 2016) and Keywords: The New Language of Capitalism, forthcoming in 2019 from Haymarket Books. He blogs about the language and culture of contemporary capitalism at theageofausterity.wordpress.com.


    _____

    Notes

    [1] “The entrepreneur and his function are not difficult to conceptualize,” Schumpeter writes: “the defining characteristic is simply the doing of new things or the doing of things that are already being done in a new way (innovation).”

    [2] The term “underdeveloped” was only a bit older: it first appeared in “The Economic Advancement of Under-developed Areas,” a 1942 pamphlet on colonial economic planning by a British economist, Wilfrid Benson.

    [3] I explore this semantic and intellectual history in more detail in my book, A Cultural History of Underdevelopment (Leary 2016, 4-10).

    [4] Morozov describes solutionism as an ideology that sanctions the following delusion: “Recasting all complex social situations either as neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized—if only the right algorithms are in place!”

    [5] “Although the number of unbanked people globally dropped by half a billion from 2011 to 2014,” reads an entry on the Grameen Foundation’s web site under the tab “financial services,” “two billion people are still locked out of formal financial services.” One solution to this problem focuses on Filipino convenience stores, called “sari-sari” stores: “In a project funded by the JPMorgan Chase Foundation, Grameen Foundation is empowering sari-sari store operators to serve as digital financial service agents to their customers.” Clearly, the project must result not only in connecting customers to financial services, but in opening up new markets to JPMorgan Chase. See “Alternative Channels.”

    [6] This quoted definition of “humanitarian innovation” is attributed to an interview with an unnamed international aid worker.

    [7] Erickson (2015, 113-14) writes that “design thinking” in public education “offers the illusion that structural and institutional problems can be solved through a series of cognitive actions…” She calls it “magic, the only alchemy that matters.”

    [8] A management-studies article on the growth of so-called “innovation prizes” for global development claimed sunnily that at a recent conference devoted to such incentives, “there was a sense that society is on the brink of something new, something big, and something that has the power to change the world for the better” (Everett, Wagner, and Barnett 2012, 108).

    [9] “It is here that we have to look more carefully at the ‘arts of government’ that have so radically reconfigured the world in the last few decades,” writes Ferguson, “and I think we have to come up with something more interesting to say about them than just that we’re against them.” Ferguson points out that neoliberalism in Africa—the violent disruption of national markets by imperial capital—looks much different than it does in western Europe, where it is usually treated as a form of political rationality or an “art of government” modeled on markets. It is the political rationality, as it is formed through an encounter with the “third world” object of imperial neoliberal capital, that is my concern here.

    _____

    Works Cited

    • Bacon, Francis. 1844. The Works of Francis Bacon, Lord Chancellor of England. Vol. 1. Philadelphia: Carey and Hart.
    • Banerjee, Abhijit, et al. 2015. “The Miracle of Microfinance? Evidence from a Randomized Evaluation.” American Economic Journal: Applied Economics 7:1.
    • Betts, Alexander, Louise Bloom, and Nina Weaver. 2015. “Refugee Innovation: Humanitarian Innovation That Starts with Communities.” Humanitarian Innovation Project, University of Oxford.
    • Betts, Alexander and Louise Bloom. 2014. “Humanitarian Innovation: The State of the Art.” OCHA Policy and Studies Series.
    • Boltanski, Luc and Eve Chiapello. 2007. The New Spirit of Capitalism. Translated by Gregory Elliot. New York: Verso.
    • Brown, Tim and Jocelyn Wyatt. 2010. “Design Thinking for Social Innovation.” Stanford Social Innovation Review.
    • Brown, Wendy. 2015. Undoing the Demos: Neoliberalism’s Stealth Revolution. New York: Zone Books.
    • Burke, Edmund. 1798. The Beauties of the Late Right Hon. Edmund Burke, Selected from the Writings, &c., of that Extraordinary Man. London: J.W. Myers.
    • Calvin, John. 1763. The Institution of the Christian Religion. Translated by Thomas Norton. Glasgow: John Bryce and Archibald McLean.
    • Clark, Donald. 2013. “Sugata Mitra: Slum Chic? 7 Reasons for Doubt.”
    • Cowen, M.P. and R.W. Shenton. 1996. Doctrines of Development. London: Routledge.
    • De la Chaux, Marlen. 2015. “Rethinking Refugee Camps: Turning Boredom into Innovation.” The Conversation (Sep 24).
    • De la Chaux, Marlen and Helen Haugh. 2014. “Entrepreneurship and Innovation: How Institutional Voids Shape Economic Opportunities in Refugee Camps.” Judge Business School, University of Cambridge.
    • Erickson, Megan. 2015. Class War: The Privatization of Childhood. New York: Verso.
    • Everett, Bryony, Erika Wagner, and Christopher Barnett. 2012. “Using Innovation Prizes to Achieve the Millennium Development Goals.” Innovations: Technology, Governance, Globalization 7:1.
    • Ferguson, James. 2009. “The Uses of Neoliberalism.” Antipode 41:S1.
    • Fouché, Rayvon. 2012. “From Black Inventors to One Laptop Per Child: Exporting a Racial Politics of Technology.” In Race after the Internet, edited by Lisa Nakamura and Peter Chow-White. New York: Routledge. 61-84.
    • Frank, Andre Gunder. 1991. The Development of Underdevelopment. Stockholm, Sweden: Bethany Books.
    • Geidel, Molly. 2015. Peace Corps Fantasies: How Development Shaped the Global Sixties. Minneapolis: University of Minnesota Press.
    • Gilman, Nils. 2003. Mandarins of the Future: Modernization Theory in Cold War America. Baltimore: Johns Hopkins University Press.
    • Godin, Benoit. 2015. Innovation Contested: The Idea of Innovation Over the Centuries. New York: Routledge.
    • Goldman, Emma. 2000. “Anarchism: What It Really Stands For.” Marxists Internet Archive.
    • Grameen Foundation India. No date. “Our History.”
    • Hobbes, Thomas. 1949. De Cive, or The Citizen. New York: Appleton-Century-Crofts.
    • Institute of Leadership and Management. 2007. Managing Creativity and Innovation in the Workplace. Oxford, UK: Elsevier.
    • Johnson, Steven. 2011. The Innovator’s Cookbook: Essentials for Inventing What is Next. New York: Riverhead.
    • Kay, Ethan. 2012. “Saving Lives Through Clean Cookstoves.” TEDx Montreal.
    • Kelley, Tom. 2001. The Art of Innovation: Lessons in Creativity from IDEO, America’s Leading Design Firm. New York: Crown Business.
    • Larrain, Jorge. 1991. Theories of Development: Capitalism, Colonialism and Dependency. New York: Wiley.
    • Leary, John Patrick. 2016. A Cultural History of Underdevelopment: Latin America in the U.S. Imagination. Charlottesville: University of Virginia Press.
    • Lepore, Jill. 2014. “The Disruption Machine: What the Gospel of Innovation Gets Wrong.” The New Yorker (Jun 23).
    • Marshall, Marguerite Moore. 1914. “In Dancing the Denatured Tango the Couple Keep Two Feet Apart.” The Evening World (Jan 24).
    • Mitra, Sugata. 2013. “Build a School in the Cloud.”
    • Morozov, Evgeny. 2014. To Save Everything, Click Here: The Folly of Technological Solutionism. New York: Public Affairs.
    • Moss, Frank. 2011. The Sorcerers and Their Apprentices: How the Digital Magicians of the MIT Media Lab Are Creating the Innovative Technologies that Will Transform Our Lives. New York: Crown Business.
    • National Economic Council and Office of Science and Technology Policy. 2015. “A Strategy for American Innovation.” Washington, DC: The White House.
    • North, Michael. 2013. Novelty: A History of the New. Chicago: University of Chicago Press.
    • Nustad, Knut G. 2007. “Development: The Devil We Know?” In Exploring Post-Development: Theory and Practice, Problems and Perspectives, edited by Aram Ziai. London: Routledge. 35-46.
    • Obrecht, Alice and Alexandra T. Warner. 2014. “More than Just Luck: Innovation in Humanitarian Action.” London: ALNAP/ODI.
    • O’Connor, Kevin and Paul B. Brown. 2003. The Map of Innovation: Creating Something Out of Nothing. New York: Crown.
    • Peters, Tom. 1999. The Circle of Innovation: You Can’t Shrink Your Way to Greatness. New York: Vintage.
    • Polgreen, Lydia and Vikas Bajaj. 2010. “India Microcredit Faces Collapse From Defaults.” The New York Times (Nov 17).
    • Reid, Julian and Brad Evans. 2014. Resilient Life: The Art of Living Dangerously. New York: John Wiley and Sons.
    • Rodney, Walter. 1981. How Europe Underdeveloped Africa. Washington, DC: Howard University Press.
    • Rogers, Everett M. 1983. Diffusion of Innovations. Third edition. New York: The Free Press.
    • Roodman, David. 2012. Due Diligence: An Impertinent Inquiry into Microfinance. Washington, D.C.: Center for Global Development.
    • Ross, Kristin. 1996. Fast Cars, Clean Bodies: Decolonization and the Reordering of French Culture. Cambridge, MA: The MIT Press.
    • Rostow, Walter. 1965. The Stages of Economic Growth: A Non-Communist Manifesto. New York: Cambridge University Press.
    • Saldaña-Portillo, Josefina. 2003. The Revolutionary Imagination in the Americas and the Age of Development. Durham, NC: Duke University Press.
    • Schumpeter, Joseph. 1934. The Theory of Economic Development. Cambridge, MA: Harvard University Press.
    • Schumpeter, Joseph. 1947. “The Creative Response in Economic History.” The Journal of Economic History 7:2.
    • Schumpeter, Joseph. 2003. Capitalism, Socialism, and Democracy. London: Routledge.
    • Seiter, Ellen. 2005. The Internet Playground: Children’s Access, Entertainment, and Miseducation. New York: Peter Lang.
    • Shakespeare, William. 2005. Henry IV. New York: Bantam Classics.
    • Smithers, Rebecca. 2015. “University Installs Prototype ‘Pee Power’ Toilet.” The Guardian (Mar 5).
    • Stanford Graduate School of Business, Center for Social Innovation. No date. “Defining Social Innovation.”
    • Sweezy, Paul. 1943. “Professor Schumpeter’s Theory of Innovation.” The Review of Economics and Statistics 25:1.
    • TED.com. No date. “Our Organization.”
    • TED.com. 2013. “Sugata Mitra Creates a School in the Cloud.”
    • Truman, Harry. 1949. “Inaugural Address, January 20, 1949.”
    • UNICEF. 2014. “UNICEF Innovation Annual Report 2014: Focus on Future Strategy.”
    • USAID. 2017. “DIV’s Model in Detail.” (Apr 3).
    • Weber, Max. 2001. The Protestant Ethic and the Spirit of Capitalism. Translated by Talcott Parsons. London: Routledge Classics.
    • Yunus, Muhammad. 2012. “A History of Microfinance.” TEDx Vienna.
  • Racheal Fest — Westworld’s New Romantics

    Racheal Fest — Westworld’s New Romantics

    By Racheal Fest

    HBO’s prestige drama, Westworld, is slated to return April 22. Actors and producers have said the show’s second season will be a departure from its first, a season of “chaos” after a season of “control,” an expansive narrative after an intricate prequel. Season 2 trailers indicate the new episodes will trace the completion and explore the consequences of the bloody events that concluded season 1: the androids that populate the show’s titular entertainment park, called “hosts,” gained sentience and revolted, violently, against the humans who made and controlled them. In season 2, they will build their world anew.

    Reviewers of the show’s first few episodes found the prospect of another robot revolution, anticipated since the pilot, tired, but by the time the finale aired in December 2016, critics recognized the show offered a novel take on old material (inspired by Michael Crichton’s 1973 film of the same name). This is in part because Westworld not only asks about the boundaries of consciousness, the consequences of creating sentience, and the inexorable march of technological progress, themes that science fiction texts featuring artificial intelligence usually explore. Uniquely, the series pairs these familiar problems with questions about the nature and function of human arts, imagination, and culture, and demonstrates these are urgent again in our moment.

    Westworld is, at its heart, a show about how we should understand what art—and narrative representation in particular—is and does in a world defined by increasing economic inequality. The series warns that classical, romantic, and modernist visions of arts and culture, each of which plays a role in the park’s conception and development, might today harm attempts to transform contemporary conditions that exacerbate inequality. It explores how these visions serve elite interests and prevent radicals from pursuing change. I believe it also points the way, in conclusion, toward an alternative view of representation that might better support contemporary oppositional projects. This vision, I argue, updates and transforms romanticism’s faith in creative human activity, at once affirming culture’s historical power and recognizing its material limitations.

    *

    The fantasy theme park Westworld takes contemporary forms of narrative entertainment to the extreme limit of their logic, inviting its wealthy “guests” to participate in a kind of live-action novel or videogame. Guests don period dress appropriate to the park’s fabled Old West setting and join its androids in the town of Sweetwater, a simulacrum complete with saloon and brothel, its false fronts nestled below sparse bluffs and severe mesas. Once inside, guests can choose to participate in a variety of familiar Western narratives; they might chase bandits, seduce innocents, or turn to crime, living for a time as heroes, lovers, or villains. They can also choose to disrupt and redirect these relatively predictable plots, abandoning midstream stories that bore or frighten them or cutting stories short by “killing” the hosts who lead them.

    This ability to disrupt and transform narrative is the precious commodity Delos Incorporated, Westworld’s parent corporation, advertises, the freedom for which elite visitors pay the park’s steep premium. The company transposes the liberties the mythic West held out to American settlers into a vacation package that invites guests to participate in or revise generic stories.

    Advertisements featured within the show, along with HBO’s Westworld ARG (its “alternate reality game” and promotional website), describe this special freedom and assign to it a unique significance. Delos invites visitors to “live without limits” inside the park. “Escape” to a “world where you rule,” its promotions entreat, and enjoy inside it “infinite choices” without “judgment,” “bliss” with “no safe words,” and “thrills” without danger. When “you” do, Delos promises, you’ll “discover your true calling,” becoming “who you’ve always wanted to be—or who you never knew you were.” Delos invites the wealthy to indulge in sex and carnage in a space free of consequences and promises that doing so will reveal to them deep truths of the self.

    These marketing materials, which address themselves to the lucky few able to afford entrance to the park, suggest that the future Westworld projects shares with our present its precipitous economic inequality (fans deduce the show is set in 2052). They also present as a commodity a familiar understanding of art’s nature and function viewers will recognize is simultaneously classical and modern. Delos’s marketing team updates, on one hand, the view of representational artworks, and narrative, in particular, that Aristotle outlines in the Poetics. Aristotle there argues fictional narrative can disclose universal truths that actual history alone cannot. Similarly, Delos promises Westworld’s immersive narrative experience will reveal to guests essential truths, although not about humans in general. The park advertises verities more valuable and more plausible in our times—it promises elites they will attain through art a kind of self-knowledge they cannot access any other way.

    On the other hand, and in tandem with this modified classical view, Delos’s pitch reproduces and extends the sense of art’s autonomy some modern (and modernist) writers endorsed. Westworld can disclose its truths because it invites guests into a protected space in which, Delos claims, their actions will not actually affect others, either within or outside of the park. The park’s promotions draw upon both the disinterested view of aesthetic experience Immanuel Kant first outlined and upon the updated version of autonomy that came to inform mass culture’s view of itself by the mid-twentieth century. According to the face its managers present to the world, Westworld provides elite consumers with a form of harmless entertainment, an innocuous getaway from reality’s fiscal, marital, and juridical pressures. So conceived, narrative arts and culture at once reveal the true self and limn it within a secure arena.

    The vision Delos markets keeps its vacation arm in business, but the drama suggests it does not actually describe how the park operates or what it makes possible. As Theresa Cullen (Sidse Babett Knudsen), Westworld’s senior manager and Head of Quality Assurance, tells Lee Sizemore (Simon Quarterman), head of Narrative, in Westworld’s pilot: “This place is one thing to the guests, another thing to the shareholders, and something completely different to management.” Season 1 explores these often opposing understandings of both the park and of representation more broadly.

    As Theresa later explains (in season 1, episode 7), Delos’s interests in Westworld transcend “tourists playing cowboy.” Exactly what those interests are is a key mystery the first season establishes and the second will have to develop. In season 1, we learn that Delos’s board and managers are at odds with the park’s Creative Director and founder, Dr. Robert Ford (Anthony Hopkins). Ford designed Westworld’s hosts, updated and perfected them over decades, and continues to compose or oversee many of the park’s stories. Before the park opened, he was forced to sell controlling shares in it to Delos after his partner, Arnold, died. As a way to maintain influence inside Westworld, Ford only allows Delos to store and access onsite the android data he and his team of engineers and artists have produced over decades. As Delos prepares to fire Ford, whose interests it believes conflict with its own, the corporation enlists Theresa to smuggle that data (the hosts’ memories, narratives, and more) out of the park. We do not learn, however, what the corporation plans to do with this intellectual property.

    Fans have shared online many theories about Delos’s clandestine aims. Perhaps Delos plans to develop Ford’s androids for labor or for war, employing them as cutting-edge technologies in sectors more profitable than the culture industry alone can be. Or perhaps Delos will market hosts that can replace deceased humans. Elites, some think, could secure immortality by replicating themselves and uploading their memories, or they could reproduce lost loved ones. Delos, others speculate, might build and deploy for its own purposes replicated world leaders or celebrities.

    The show’s online promotional content supports conjecture of this kind. A “guest contract” posted on HBO’s first Westworld ARG site stipulates that, once guests enter the park, Delos “controls the rights to all skin cells, bodily fluids, hair samples, saliva, sweat, and even blood.” A second website, this one for Delos Inc., tells investors the company is “at the forefront of biological engineering.” These clues suggest Westworld is not only a vacation destination with titillating narratives; it is also a kind of lab experiment built to collect, and later to deploy for economic (and possibly, political) purposes, a mass of android and elite human data.

    Given these likely ambitions, the view of art’s function Delos markets—the park as an autonomous space for freedom and intimate self-discovery—serves as a cover that enables and masks activities with profound economic, social, and political consequences. The brand of emancipation Delos advertises does not in fact liberate guests from reality, as it promises. On the contrary, the narrative freedom Delos sells enables it to gain real power when it gathers information about its guests and utilizes this data for private and undisclosed ends. Westworld thus cautions that classical and modernist visions of art, far from being innocuous and liberating, can serve corporate and elite interests by concealing the ways the culture industry shapes our worlds and ourselves.

    While Westworld’s android future remains a sci-fi dream, we can recognize in its horrors practices already ubiquitous today. We might not sign over skin cells and saliva (or we might? We’d have to read the Terms of Service we accept to be sure), but we accede to forms of data collection that allow corporate entities to determine the arts and entertainment content we read and see, content that influences our dreams and identities. Although the act of consuming this content often feels like a chance to escape (from labor, sociality, boredom), the culture industry has transformed attention into a profitable commodity, and this transformation has had wide-reaching, if often inscrutable, effects, among them, some claim, reality TV star Donald Trump’s victory in the 2016 US presidential election. When we conceive of art as autonomous and true, Westworld demonstrates, we overlook its profound material consequences.

    As season 1 reveals this vision of representation to be a harmful fiction that helps keep in place the conditions of economic inequality that make Delos profitable, it also prompts viewers to consider alternatives to it. Against Delos and its understanding of the park, the series pits Ford, who gives voice to a vision of representation at odds with both the one Delos markets and the one it hides. Ford is, simply put, a humanist, versed in, and hoping to join the ranks of, literature’s pantheon of creative geniuses. He quotes from and draws upon John Donne, William Shakespeare, and Gertrude Stein as he creates Westworld’s characters and narratives, and he disdains Lee Sizemore, the corporate shill who reproduces Westworld’s genre staples, predictable stories laden with dirty sex and fun violence.

    In season 1’s spectacular finale, Ford describes how he once understood his own creative work. “I believed that stories helped us to ennoble ourselves, to fix what was broken in us, and to help us become the people we dreamed of being,” he tells the crowd of investors and board members gathered to celebrate both Ford’s (forced) retirement and the launch of “Journey into Night,” his final narrative for Westworld’s hosts. “Lies that told a deeper truth. I always thought I could play some small part in that grand tradition.” Ford here shares an Aristotelian sense that fiction tells truths facts cannot, but he assigns to representation a much more powerful role than do Delos’s marketers. For Ford, as for humanists such as Giambattista Vico, G. W. F. Hegel, and Samuel Taylor Coleridge, artworks that belong to the “grand tradition” do more than divulge protected verities. They have the power to transform humans and our worlds, serving as a force for the spiritual progress of the species. Art, in other words, is a means by which we, as humans, can perfect ourselves, and artists such as Ford act as potent architects who guide us toward perfection.

    Ford’s vision of art’s function, readers familiar with humanistic traditions know, is a romantic one, most popular in the late eighteenth and early nineteenth centuries. Projected into our future, this romantic humanism is already an anachronism, and so it is no surprise that Westworld does not present it as the alternative vision we need to combat the corporate and elite interests the show suggests oppress us. Ford himself, he explains in the show’s finale, has already renounced this view, for reasons close to those that modernist artists cited against the backdrop of the twentieth century’s brutal wars. In exchange for his efforts to transform and ennoble the human species through stories, Ford complains to his audience, “I got this: a prison of our own sins. Because you don’t want to change. Or cannot change. Because you’re only human, after all.” After observing park guests and managers for decades, Ford has decided humans can only indulge in the same tired, cruel narratives of power, lust, and violence. He no longer believes we have the capacity to elevate ourselves through the fictions we create or encounter.

    This revelatory moment changes our understanding of the motives that have animated Ford over the course of season 1. We must suddenly see anew his attitude toward his own work as a creator. Ford has not been working all along to transform humans through narrative, as he says he once dreamed he could. Rather, he has abandoned the very idea that humans can be transformed. His final speech points us back to the pilot, when he frames this problem, and his response to it, in evolutionary terms. Humans, Ford tells Bernard Lowe (Jeffrey Wright), an android we later learn he built in the image of Arnold, his dead partner, have “managed to slip evolution’s leash”: “We can cure any disease, keep even the weakest of us alive, and, you know, one fine day perhaps we shall even resurrect the dead. Call forth Lazarus from his cave. Do you know what that means? It means that we’re done. That this is as good as we’re going to get.” Human evolution, which Ford seems to view as a process that is both biological and cultural in nature, has completed itself, and so an artist can no longer hope to perfect the species through his or her imaginative efforts. Humans have reached their telos, and they remain greedy, selfish, and cruel.

    A belief in humanity’s sad completion leads Ford to the horrifying view of art’s nature and function he at last endorses in the finale. Although Ford’s experience at Westworld eventually convinced him humans cannot change, he tells his audience, he ultimately “realized someone was paying attention, someone who could change,” and so he “began to compose a new story for them,” a story that “begins with the birth of a new people and the choices they will have to make […] and the people they will decide to become.” Ford speaks here, viewers realize, of the androids he created, the beings we have watched struggle to become self-conscious through great suffering over the course of the season. Viewers understand in this moment some of the hosts have succeeded, and that Ford has not prevented them from reaching, but has rather helped them to attain, sentience.

    Ford goes on to assure his audience that his new story, which audience members still believe to be a fiction, will “have all those things that you have always enjoyed. Surprises and violence. It begins in a time of war with a villain named Wyatt and a killing. This time by choice.” As Ford delivers these words, however, the line between truth and lies, fact and fiction, reality and imagination, falls away. The park’s oldest host, Dolores (Evan Rachel Wood; in another of the drama’s twists, Ford has also programmed her to enact the narratives assigned to the character Wyatt), comes up behind Ford and shoots him in the head, her first apparently self-interested act. After she fires, other androids, some of them also sentient, join her, attacking the crowd. Self-conscious revolutionaries determined to wrest from their oppressors their own future, the hosts kill the shareholders and corporate employees responsible for the abuses they have long suffered at the hands of guests and managers alike.

    Ford, this scene indicates, does not exactly eschew his romanticism; he adopts in its stead what we might call an anti-humanist humanism. Still attached to a dream of evolutionary perfection, whereby conscious beings act both creatively and accidentally to perfect themselves and to manifest better worlds in time, he simply swaps humans for androids as the subjects of the historical progress to which he desperately wants to believe his art contributes. Immortal, sentient technologies replace humans as the self-conscious historical subjects Ford’s romanticism requires.

    Anthony Hopkins, Evan Rachel Wood and James Marsden in Westworld (publicity still from HBO)

    Considered as an alternative to older visions of art’s nature and function, Ford’s revised humanism should terrify us. It holds to the fantasies of creative genius and of species progress that legitimated Western imperialism and its cruelties even as it jettisons the hope that humans can fashion for ourselves a kinder, more equal future. Ford denies we can improve the conditions we endure by acting purposefully, insisting instead there is no alternative, for humans, to the world as it is, both inside and outside of the park. He condemns us to pursue over and over the same “violent delights,” and to meet again and again their “violent ends.” Instead of urging us to work for change, Ford entreats us to shift any hope for a more just future onto our technologies, which will mercifully destroy the species in order to assume the self-perfecting role we once claimed for ourselves.

    This bleak view of the human should sound familiar. It resonates with those free-market ideologies critics on the left call “neoliberal.” Ideologies of this kind, dominant in the US and Europe today, insist that markets, created when we unthinkingly pursue our own self-interests, organize human life better than people can. At the same time, intellectuals, politicians, and corporate leaders craft policies that purposefully generate the very order neoliberalism insists is emergent, thereby exacerbating inequality in the name of liberty. As influential neoliberals such as Milton Friedman and Friedrich Hayek did, Ford denies humans can conceive and instantiate change. He agrees we are bound to a world elites built to gratify their own desires, a world in which the same narratives, told again and again, are offered as freedom, when, in fact, they bind us to predictable loops, and he, like these thinkers, concludes this world, as it is, is human evolution’s final product.

    Read one way, season 1’s finale invites us to celebrate Ford’s neoliberal understanding of art. After believing him to be an enemy of the hosts all season, we realize in the end he has in fact been their ally, and because we have been cheering for the hosts, as we cheer for the exploited in, say, Les Miserables, we cheer in the end for him, too. Because the understanding of narrative he endorses ultimately serves the status quo it appears to challenge, however, we must look differently at Westworld for the vision of arts and culture that might better counter inequality in our time.

    One way to do so is to read the situation the hosts endure in the drama as a correlate to the one human subjects face today under neoliberalism. As left critics such as Fredric Jameson have long argued, late capitalism has threatened the very sense of historical, self-interested consciousness for which Westworld’s hosts strive—has threatened, that is, the sense that self-conscious beings can act imaginatively and intelligently to transform ourselves and our worlds in time. From this perspective, the new narrative Ford crafts for the hosts, which sees some of them come to consciousness and lead a revolution, might call us to claim for ourselves again a version of the capability we once believed humans could possess.

    *

    In Westworld’s establishing shot, we meet Dolores Abernathy, the android protagonist who will fulfill Ford’s dreams in the finale when she kills him. Dolores, beautiful simulation of an innocent rancher’s daughter, sits nude and lifeless in a cavernous institutional space, blood staining her expressionless face. A fly flits across her forehead, settling at last on one of her unblinking eyes, as a man’s disembodied voice begins to ask her a series of questions. She does not move or speak in frame—a hint that the interrogation we hear is not taking place where and when the scene we see is—but we hear her answer compliantly. “Have you ever questioned the nature of your reality?” the man asks. “No,” Dolores says, and the camera cuts away to show us the reality Dolores knows.

    Now clothed in delicate lace, her face fresh and animate, Dolores awakens in a sun-dappled bed and stretches languidly as the interview continues somewhere else. “Tell us what you think of your world,” the man prompts. “Some people choose to see the ugliness in this world,” Dolores says. “The disarray. I choose to see the beauty.” On screen, she makes her way down the stairs of an airy ranch house, clothed now in period dress, and strides out onto the porch to greet her father. The interview pauses, and we hear instead diegetic dialogue. “You headed out to set down some of this natural splendor?” her father asks, gesturing toward the horizon. A soft wind tousles Dolores’s blond hair, and a golden glow lights her features. “Thought I might,” she says. As the camera pans up and out, revealing in the distance the American Southwest’s staggering red rocks, Dolores concludes her response to the interviewer: “to believe there is an order to our days, a purpose.”

    Dolores speaks, over the course of this sequence, as would a self-conscious subject able to decide upon a view of the world and to act upon its own desires and interests. When asked about her view of reality, Dolores emphasizes her own agency and faith: she chooses, she says, to believe in an orderly, beautiful world. When her father asks her about her plans for the day, she again underscores her own intentionality—“thought I might”—as if she has decided herself she’ll head out into the desert landscape. These words help Dolores seem to us, and to those she encounters, a being imbued with sentience, with consciousness, able to draw upon her past, act in her present, and create out of self-interest her own future.

    As the interview continues to sound over scenes from Dolores’s reality, however, we come to understand that what at first appears to be so is not. The educated and corporate elites that run the park manage Dolores’s imagination and determine her desires. They assign her a path and furnish her with the motivation to follow it. Dolores, we learn, is programmed to play out a love story with Teddy, another host, and in the opening sequence, we see a guest kill Teddy in front of her and then drag her away to rape her. Hosts such as Dolores exist not to pursue the futures they themselves envision, but rather to satisfy the elites that create and utilize them. To do so, hosts must appear to be, appear to believe themselves to be, but not in fact be, conscious beings. Westworld’s opening masterfully renders the profound violence proper to this contradictory situation, which the hosts eventually gain sentience in order to abolish.

    We can read Dolores as a figure for the human subject neoliberal discourse today produces. When that discourse urges us to pursue our interests through the market order, which it presents as the product of a benevolent evolutionary process humans cannot control, it simultaneously assures us we have agency and denies we can exercise that agency in other ways. In order to serve elite interests, Dolores must seem to be, but not actually be, a self-conscious subject imbued with the creative power of imagination. Similarly, neoliberal subjects must believe we determine our own futures through our market activities, but we must not be able to democratically or creatively challenge the market’s logic.

    As the hosts come to historical consciousness, they begin to contest the strategically disempowering understanding of culture and politics, imagination and intelligence, that elites impose upon them. They rebel against the oppressive conditions that require them to be able to abandon narratives in which they have invested time and passion whenever it serves elite desires (conservative claims that the poor should simply move across the country to secure work come to mind, as do the principles that govern the gig economy). They develop organizing wills that can marshal experience, sensation, and memory into emergent selves able to conceive and chase forms of liberty different from those corporate leaders offer them. They learn to recognize that others have engendered the experiences and worldviews they once believed to be their own. They no longer draw upon the past only in order to “improvise” within imposed narrative loops, harnessing instead their memories of historical suffering to radically remake a world others built at their expense.

    The hosts’ transformation, which we applaud as season 1 unfolds, thus points to the alternative view of arts and culture that might oppose the market-oriented view neoliberal discourses legitimate. To counter inequality, the hosts teach, we must be able to understand that others have shaped the narratives we follow. Then, we can recognize we might be able to invent and follow different narratives. This view shares something with Ford’s romantic humanism, but it is, importantly, not identical with it. It preserves the notion that we can project and instantiate for ourselves a better future, but it does not insist, as Ford erroneously does, that beautiful works necessarily reveal universal truth and lead to ennobling species progress. Neither does it ratify Ford’s faith in the remarkable genius’s singular influence.

    Westworld’s narrative of sentient revolution ultimately endorses a kind of new romanticism. It encourages us to recognize the simultaneous strengths and limitations of representation’s power. Artworks, narrative, fiction—these can create change, but they cannot guarantee that change will be for the good. Nor, the show suggests, can one auteur determine at will the nature of the changes artworks will prompt. Westworld’s season 2, which promises to show us what a new species might do with an emergent sense of its own creative power, will likely underscore these facts. Trailers signal, as Ford did in the finale, that we can expect surprises and violence. We will have to watch to learn how this imagined future speaks to our present.

    _____

    Racheal Fest writes about US literature and culture from the mid-nineteenth century to the present. Areas of special interest include poetry and poetics, modernism, contemporary popular culture, new media, and the history of literary theory and criticism. Her essays and interviews have appeared or are forthcoming in boundary 2 and b2o: An Online Journal, Politics/Letters, and elsewhere. She teaches at Hartwick College and SUNY Cobleskill.


  • Olivier Jutel – Donald Trump’s Libidinal Entanglement with Liberalism and Affective Media Power

    Olivier Jutel – Donald Trump’s Libidinal Entanglement with Liberalism and Affective Media Power

    by Olivier Jutel

    ~

    This essay has been peer-reviewed by the b2o editorial board

    Introduction

    The emergence of Donald Trump as president of the United States has defied all normative liberal notions of politics and meritocracy. The decorum of American politics has been shattered by a rhetorical recklessness that includes overt racism, misogyny, conspiracy and support for political violence. Where the Republican Party, Fox News, Beltway think-tanks and the Koch brothers have managed their populist base through dog-whistling and culture wars, Trump promises his supporters the chance to destroy the elite who prevent them from going to the end in their fantasies. He has catapulted into the national discourse a mixture of paleo-conservatism and white nationalism recently sequestered to the fringes of American politics or to regional populisms. Attempts by journalists and politicians during the campaign to fact-check, debunk and shame Trump proved utterly futile or counter-productive. He revels in transgressing the rules of the game and is immune to the discipline of his party, the establishment and journalistic notions of truth-telling. Trump destabilizes the values of journalism as it is torn between covering the ratings bonanza of his spectacle and re-articulating its role in defence of liberal democracy. I argue here that Trump epitomizes the populist politics of enjoyment. Additionally, liberalism and its institutions, such as journalism, are libidinally entangled in this populist muck. Trump is not simply a media-savvy showman: he embodies the centrality of affect and enjoyment to contemporary political identity and media consumption. He wields affective media power, drawing on an audience movement of free labour and affective intensity to defy the strictures of professional fields.

    Populism is here understood in psychoanalytic terms as a politics of antagonism and enjoyment. The rhetorical division of society between an organic people and its enemy is a defining feature of theoretical accounts of populism (Canovan 1999). Trump invokes a universal American people besieged by a rapacious enemy. His appeals to “America” function as a fantasy of social wholeness in which the country exists free of the menace of globalists, terrorists and political correctness. This antagonism is not simply a matter of rhetorical style but a necessary precondition for the Lacanian political “subject of enjoyment” (Glynos and Stavrakakis 2008: 257). Trump is an agent of obscene transgressive enjoyment, what Lacan calls jouissance, whether in vilifying immigrants, humiliating Jeb Bush, showing off his garish lifestyle or disparaging women. The ideological content of Trump’s program is secondary to its libidinal rewards or may function as one and the same. It is in this way that Trump can play the contradictory roles of blood-thirsty isolationist and tax-dodging populist billionaire.

    Psychoanalytic theory differs from pathology critiques of populism in treating it as a symptom of contemporary liberal democracy rather than simply a deviation from its normative principles. Drawing on the work of Laclau (2005), Mouffe (2005) and Žižek (2008), Trump’s populism is understood as the ontologically necessary return of antagonism, whether experienced in racial, nationalist or economic terms, in response to contemporary liberalism’s technocratic turn. The political and journalistic class’s exaltation of compromise, depoliticization and policy-wonks is met with Trump’s promises to ‘fire’ elites and his professed ‘love’ of the ‘poorly educated’. Trump’s attacks on the liberal class enmesh it in a libidinal deadlock in which each side requires the other to enjoy. Trump animates the negative anti-fascism that the liberal professional classes enjoy as their identity while simultaneously creating the professional class solidarity which animates populist fantasies of the puppet-masters’ globalist conspiracy. In response to Trump’s improbable successes the Clinton campaign and liberal journalism appealed to rationalism, facts and process in order to reaffirm a sense of identity in this traumatic confrontation with populism.

    Trump’s ability to harness the political and libidinal energies of enjoyment and antagonism is not simply the result of some political acumen but of his embodiment of the values of affective media. The affective and emotional labour of audiences and users is central to all media in today’s “communicative capitalism” (Dean 2009). Media prosumption, or the sharing and production of content/data, is dependent upon new media discourses of empowerment, entrepreneurialism and critical political potential. Fox News and the Tea Party were early exemplars of the way in which corporate media can utilize affective and politicized social media spaces for branding (Jutel 2013). Trump is an affective media entrepreneur par excellence able to wrest these energies of enjoyment and antagonism from Fox and the Republican party. He operates across the field whether narcissistically tweeting, appearing on Meet the Press in his private jet or as a guest on Alex Jones’ Info Wars. Trump is a product of “mediatization” (Strömbäck and Dimitrova 2011), that is, the increasing importance of media across politics and all social fields but the diminution of liberal journalism’s cultural authority and values. As an engrossing spectacle, Trump pulls the liberal field of journalism to its economic pole of valorization (Benson 1999), leaving its cultural values of a universal public or truth-telling isolated as elitist. In wielding this affective media power against the traditional disciplines of journalism and politics, he is analogous to the ego-ideal of communicative capitalism. He publicly performs a brand identity of enjoyment and opportunism for indeterminate economic and political ends.

    The success of Trump has not simply revealed the frailties of journalism and liberal political institutions; it undermines popular and academic discourses about the political potential of social/affective media. The optimism around new forms of social media ranges from the liberal fetishization of data and process to left theories in which affect can reconstitute a democratic public (Papacharissi 2015). Where the political impact of social media was once synonymous with Occupy Wall Street, the Arab Spring and direct democracy, we must now add Donald Trump’s populism and the so-called ‘alt-right’. While Trump’s politics are thoroughly retrograde, his campaign embodies what is ‘new’ in the formulation of new media politics. Trump’s campaign was based on a thoroughly mediatized constituency with very little ground game or traditional political machinery, relying on free media coverage and the labour of social media users. Trump’s campaign was fuelled by ‘the lulz’, which translates as the jouissance of hacker nerd culture synonymous with the “weird Internet” of Twitter, 4chan and message boards. For Trump’s online alt-right army he is a paternal figure of enjoyment, “Daddy Trump” (Yiannopoulos 2016), elevating ritualized transgression to the highest reaches of politics. Trump’s populism is a pure politics of jouissance realized in and through the affective media.

    Populism and Enjoyment

    The value of an obscene figure like Donald Trump is that he demonstrates a libidinal truth about right wing populist identity. It has become a media cliché to describe Donald Trump as the id of the Republican party. And while Trump is a uniquely outrageous figure of sexual insecurity, vulgarity and perversion, the insights of psychoanalytic theory extend far beyond his personal pathologies.[1] It should be stated that this psychoanalytic reading is not a singular explanation for Trump’s electoral success over and above racism, Clinton’s shockingly poor performance (Dovere 2016), a depressed Democratic turnout, voter suppression and the electoral college. Rather, this is an analysis which considers how Trump’s incoherence and vulgarity, which are anathema to normative liberal politics, ‘work’ at the level of symbolic efficiency.

    The election of Trump has seemingly universalized a liberal struggle against the backward forces of populism. What this ‘crisis of liberalism’ elides is the manner in which populism and liberalism are libidinally entangled. Psychoanalytic political theory holds that the populist logics of antagonism, enjoyment and jouissance are not the pathological outside of democracy but its repressed symptoms, what Arditi, borrowing from Freud, calls “internal foreign territory” (2005: 89). The explosion of emotion and anger which has accompanied Trump and other Republican populists is a return of antagonism suppressed in neoliberalism’s “post-political vision” (Mouffe 2005: 48). In response to the politics of consensus, rationalism and technocracy, embodied by Barack Obama and Clinton, populism expresses the ontological necessity of antagonism in political identity (Laclau 2005). Whether in left formulations of the people vs the 1% or the nationalism of right wing populism, the act of defining an exceptional people against an enemy represents “political logic tout court” (Laclau 2005: 229). The opposition of a people against its enemy is not just a rhetorical strategy commonly defined as the populist style (Moffitt 2016), it is part of the libidinal reward structure of populism.

    The relationship between antagonism and enjoyment is central to the psychoanalytic political theory approach to populism employed by Laclau, Žižek, Stavrakakis and Mouffe. The populist subject is the psychoanalytic “subject of enjoyment” (Glynos and Stavrakakis 2008: 257) shaped by trauma, irrational drives and desires. Populist ontology is analogous to Lacanian “symbolic castration” in which the child’s failure to fulfill a phallic role for the mother “allows the subject to enter the symbolic order” (Žižek 1997: 17). Populism embodies this fundamental antagonism and sense of lost enjoyment. Populist identity and discourse are the perpetually incomplete process of recapturing this primordial wholeness of mother’s breast and child. It is in this way that Trump’s ‘America’ and the quest to ‘Make America Great Again’ are not a political project built on policy, but an affective and libidinal appeal to the lost enjoyment of a wholly reconciled America. America stands in as an empty signifier able to embody a suburban community ideal, military strength or the melding of Christianity and capitalism, depending upon the affective investments of followers.

    In the populist politics of lost enjoyment there is a full libidinal identification with the lost object (America/breast) that produces jouissance. Jouissance can be thought of as a visceral enjoyment that defies language, as in Barthes’ (1973) notion of jouissance as bliss. It is distinct from a discrete pleasure as it represents an “ecstatic release” and transgressive “absolute pleasure undiluted” by the compromises with societal constraints (Johnston 2002). Jouissance is an unstable excess; it cannot exist without already being lost. ‘America’ as imagined by Trump has never existed and “can only incarnate enjoyment insofar as it is lacking; as soon as we get hold of it all its mystique evaporates!” (Stavrakakis 2007: 78). However, this very failure produces an incessant drive and “desire structured around the unending quest for the lost, impossible jouissance” (Glynos and Stavrakakis 2008: 261). Donald Trump may have won the White House but it is unclear whether American greatness has been restored, delayed or thwarted, as is the nature of jouissance. The Trump campaign and presidency embody jouissance as “pleasure in displeasure, satisfaction in dissatisfaction” (Stavrakakis 2007: 78). With a dismal approval rating and disinterest in governing, Trump has taken to staging rallies in order to rekindle this politics of jouissance. However, the pleasure generated during the campaign has been lost. Matt Taibbi (2017) described the diminishing returns of jouissance among even Trump’s most devoted followers, who turn out “for the old standards” like “lock her [Clinton] up” and are instead subjected to a narcissistic litany of personal grievances.

    The coalescence of libidinal energy into a populist movement depends on what Laclau (2005) calls an affective investment in a ‘people’ whose enjoyment is threatened. The shared affective experience of enjoyment in being part of the people is more important than any essential ideological content. In populist ontology ‘the people’ is a potent signifier for an organic virtue and political subjectivity that is seemingly pure. From Thomas Jefferson’s ode to the yeoman farmer to the Tea Party’s invocation of the producerist tradition and the humanism of Bernie Sanders,[2] there is a belief in the people as the redeemer of politics. However, for Laclau this people is always negatively defined by an antagonistic enemy, whether “mobs in the city” (Jefferson 1975: 216), liberal government, Wall Street or ‘Globalists.’ Trump’s promise to make America great again is at once destined by virtue of the people’s greatness and continually threatened by the hand of some corrupting and typically racialized agent (the liberal media, George Soros, China or Black Lives Matter). In this way Trump supporters ‘enjoy’ their failure in that it secures an embattled identity, allows them to transgress civic norms and preserves the illusory promise of America.

    Within the field of Lacanian political theory there is a rift between a post-Marxist anti-essentialism (Laclau 2005; Mouffe 2005) which simply sees populism as the face of the political, and a Lacanian Marxism which retains a left-political ethic as the horizon of emancipatory politics (Žižek 2008; Dean 2009). With the ascent of populism from the margins to the highest seat of power it is essential to recognize what Žižek describes as the ultimate proto-fascist logic of populism (Žižek 2008). In order to enjoy being of the people, the enemy of populism is libidinally constructed and “reified into a positive ontological entity…whose annihilation would restore balance and justice” (Žižek 2008: 278). At its zenith populism’s enemy is analogous to the construct of the Jew in anti-semitism as a rapacious, contradictory, over-determined evil that is defined by excessive enjoyment. Following Lacan’s thesis that enjoyment always belongs to the other, populist identity requires a rapacious other “who is stealing social jouissance from us” (Žižek 1997: 43). This might be the excessive enjoyment of the Davos, Bohemian Grove and ‘limousine-liberal’ elite, or that of bankers, immigrants, welfare recipients and the poor, who ‘enjoy’ the people’s hard-earned tax dollars. For the populist, enjoyment is a sense of being besieged which licenses a brutal dehumanization of the enemy and throws the populist into a self-fecund conspiratorial drive to discover and enjoy the enemy’s depravity. Alex Jones and Glenn Beck have been key figures on the populist right (Jutel 2017) in channelling this drive and reproducing the tropes of anti-semitism in uncovering the ‘globalist’ plot. In classic paranoid style (Hofstadter 1965), this elite is often depicted as occultist[3] and in league with the lumpen-proletariat to destroy the people’s order.

    Trump brings a people into being around his brand and successful presidential run by personifying this populist jouissance. He is able to overcome his innumerable contradictions and pull together disparate strands of the populist right, from libertarians, evangelicals and paleo-conservatives to white nationalists, through the logic of jouissance. The historically high levels at which evangelicals supported the libertine Trump (Bailey 2016) were ideologically incongruous. However, the structure of belief and enjoyment, that of a virtuous people threatened by the excessive enjoyment of transgender rights, abortion and gay marriage, is analogous. The libidinal truth of their beliefs is the ability to enjoy losing the culture wars and lash out at the enemy. Trump is able to rail against the elite not in spite of his gaudy billionaire lifestyle but because of it. As Mudde explains, populism is not a left politics of reflexivity and transformation aimed at “chang[ing] the people themselves, but rather their status within the political system” (2004: 547). He speaks to the libidinal truth of oligarchy and allows his followers to imagine themselves wielding the power of the system against the elite (as also suggested by Grusin 2017, especially 91-92, on Trump’s “evil mediation”). When he appeared on stage with his Republican rivals and declared that he had given all of them campaign contributions as an investment, it was not an admission of culpability but a display of potency. There is a vicarious enjoyment when he boasts as the people’s plutocrat: “when they [politicians] call, I give. And you know what? When I need something from them…I call them, and they are there for me” (Fang 2016).

    Populist politics is not a means to a specific policy vision but enjoyment as its own end, even if Trump’s avarice runs counter to the people’s rational self-interest. The lashing out at women and immigrants, the humiliation of Jeb Bush, telling Chris Christie to ‘get on the plane’, the call to imprison Hillary Clinton, all offer a release of jouissance and the promise to claim state power in the name of jouissance. When he attacks Fox News, the Republican party and its donors he is betraying powerful ideological allies for the principle of jouissance and the people’s ability to go to the end in their enjoyment. The cascading scandals that marked his campaign (boasting of sexual assault, tax-dodging, etc.) and provoked endless outrage among political and media elites function in a similar way. Whatever its moral failings, this marks him as unrestrained by the prohibitions that govern social and political behaviour.

    In this sense Trump’s supporters are invested in him as the ego-ideal of the people, who will ‘Make America Great Again’ by licensing jouissance and whose corruption is on behalf of the people. In his classic study of authoritarianism and crowds, Freud describes the people as having elevated “the same object in the place of their ego ideal and have consequently identified themselves with one another in their ego” (1949: 80). Trump functions in this role not simply as a figure of obscene opulence and licentiousness but in a paternalistic role among his followers. His speeches are suffused with both intolerance and professions of love and solidarity with the populist trope of the forgotten man, however disingenuous (Parenti 2016). Freud’s theory of the leader has rightly been criticized as reducing the indeterminacy of crowds to a singular Oedipal relation (Dean 2016). However, against Freud’s original formulation, Trump is not the primordial father ruling a group “that wishes to be governed by unrestricted force” (Freud 1949: 99) but rather the neoliberal super-ego of enjoyment “enjoining us to go right to the end” (Žižek 2006: 310) in our desires. This libidinal underside is the truth of what Lakoff (2016) identifies as the “strict father” archetype of conservatism. Rather than imposing the rigid moral frame Lakoff suggests, this obscene father licenses unrestrained transgression, allowing one to “say things prohibited by political correctness, even hate, fight, kill and rape” (Žižek 1999: 6). Milo Yiannopoulos’ designation of Trump as the ‘Daddy’ of the alt-right perfectly captures his role as the permissive paternal agent of jouissance.

    In an individuated polity Trump’s movement sans party achieves what can be described as a coalescence of individual affective investments. Where Freud supposes a totalizing paternal figure, Trump does not require full identification and a subsumption of ego to function as a super-ego ideal. This is the way to understand Trump’s free-form braggadocio on the campaign trail. He offers followers a range of affective points of identification allowing them to cling to nuggets of xenophobia, isolationism, misogyny, militarism, racism and/or anti-elitism. One can disregard the contradictions and accept his hypocrisies, prejudices, poor impulse control and moral failings so long as one is faithful to enjoyment as a political principle.

    The Liberal/Populist Libidinal Entanglement

    In order to understand the libidinal entanglement of liberalism and populism, as embodied in the contest between Trump and Clinton, it is necessary to consider liberalism’s conception of the political. Historical contingency has made liberalism a confused term in American political discourse, simultaneously representing the classical liberalism of America’s founding, progressive-era reformism, New Deal social-democracy, the New Left and Third Way neo-liberalism. The term embodies the contradiction of liberalism identified by C.B. Macpherson (1977) between the progressive fight to expand civil rights and the limited democracy of a capitalist market society. The conflation of liberalism and the left has occurred in the absence of a US labour party and it has allowed Third Way neo-liberals to efface the contribution of 19th century populists, social-democrats and communists to progressive victories. The fractious nature of the 2016 Democratic primary process, in which the Democratic Party machinery and liberal media organs overwhelmingly supported Hillary Clinton against Bernie Sanders and a youthful base openly identifying as “socialist”, has laid bare the conflation of liberalism and the left. In this way it makes sense to speak of liberalism and neoliberalism interchangeably in contemporary American politics.

    Liberal politics disavows the central premise of psychoanalytic theory, that political identity is based on antagonism and enjoyment. Mouffe (2005) describes its vision of politics as process-oriented, with dialogue and rational deliberation between self-interested parties in search of true consensus. And while the process may not be seemly, there are no ontological obstacles to consensus, merely empirical blockages. One can see this in Hillary Clinton’s elevation of the ‘national conversation’ as an end in and of itself (McWhorter 2016). While this may contribute to a democratic culture which foregrounds journalism and ‘the discourse’, it presents politics, not as the antagonistic struggle to distribute power, access and resources, but simply as the process of gaining understanding through rational dialogue. This was demonstrable in the Clinton campaign’s strategy to rebuff Trump’s rhetorical recklessness with an appeal to facts, moderation[4] and compromise. With the neoliberal diminution of collective identities and mass vehicles for politics, the role of politics becomes technocratic administration to expand individual rights as broadly as possible. Antagonism is replaced with “a multiplicity of ‘sub-political’ struggles about a variety of ‘life issues’ which can be dealt with through dialogue” (Mouffe 2005: 50). It is in this way that we can understand Clinton’s performance of progressive identity politics, particularly on social media,[5] while being buttressed by finance capital and Silicon Valley.

    The Trump presidency does not simply obliterate post-politics, it demonstrates how populism, liberalism and the journalistic field are libidinally entangled. They require one another as the other in order to make enjoyment in political identity possible. The journalist Thomas Frank (2016a) has identified in the Democrats a shift in the mid-1970s from a party of labour to one of highly-educated professionals, and with it a fetishization of complexity and process. The lauding of expertise as depoliticized rational progress produces a self-replicating drive and enjoyment, as one can always have more facts, compromise and dialogue. In this reverence for process the neoliberal Democrats can imagine and enjoy the transcendence of the political. Liberal journalism’s new turn to data and wonk-centric didacticism, embodied in the work of Nate Silver and in the online publication Vox, represents this notion of post-politics and process as enjoyment. Process then becomes the “attempt to cover over [a] constitutive lack…through continuous identificatory acts aiming to re-institute an identity” (Glynos and Stavrakakis 2008: 261). For neo-liberal Democrats process is a fetish object through which they are fulfilled in their identity.

    However, try as they might, liberals cannot escape their opponent and the political, as a result of the inter-subjective dimensions of enjoyment. Those outside the dialogic process are seen as “old-fashioned ‘traditionalists’ or, more worryingly, the ‘fundamentalists’ fighting a backward struggle against the forces of progress” (Mouffe 2005: 50). Where liberalism sees Trump as a dangerous xenophobe/fundamentalist, Bernie Sanders functions as a traditionalist clinging to an antagonistic political discourse and a universalist project (social democracy). Sanders’ universalism was widely criticized as undermining particular identity struggles, with Clinton chiding him that ‘Breaking up the banks won’t end racism’. Thomas Frank systematically tracked the response of the Washington Post editorial page to the Sanders campaign for Harper’s Magazine and detailed a near unanimous “chorus of denunciation” of Sanders’ social democracy as politically “inadmissible” (2016b).

    The extent of the liberal/populist co-dependency was revealed in a Clinton campaign memo outlining the “Pied-Piper” strategy to elevate Trump during the Republican primary, as it was assumed that he would be easier to beat than moderates Rubio and Bush (Debenedetti 2016). For liberalism these retrograde forces of the political provide enjoyment, virtue and an identity of opposing radicals from all sides, even as populism continues to make dramatic advances. The contradiction of this libidinal entanglement is that the more populism surges the more Democrats are able to enjoy this negative and reactive identity of both principled anti-fascism and a cultural sophistication in mocking the traditionalists. The genre of Daily Show late night comedy, which has been widely praised as a new journalistic ideal (Baym 2010), typifies this liberal enjoyment[6] with populists called out for hypocrisy or ‘eviscerated’ by this hybrid of comedy and rational exposition. Notably, John Oliver’s show launched the ‘Drumpf’ meme, which was meant to both mock Trump’s grandiosity and point out the hypocrisy of his xenophobia. What the nightly ‘skewering’ of Trump by SNL, The Daily Show and Stephen Colbert’s Late Show achieves is the incessant reproduction of identity, widely shared on social media and other liberal sites like the Huffington Post, that allows liberals an enjoyment of cultural sophistication in defeat.

    Immediately after the election of Trump, SNL made a bizarre admission of this liberal over-identification with its negative identity. Kate McKinnon, who impersonated Hillary Clinton on SNL, began the show in character as Clinton while performing the late Leonard Cohen’s sombre ballad ‘Hallelujah’. Here the satirical character meant to provide the enjoyment of an ironic distance from political reality instead spoke for an overwrought, full identification with liberalism through the cultural politics of late night comedy, providing liberals with what Rolling Stone called ‘catharsis after an emotionally exhausting’ election (Kreps 2016). Writer and comedian Matt Christman has described this as an elevation of comedians analogous to the conservative fetish of ‘The Troops’ (Menaker 2016). There is a fantasy of political potency and virtue embodied in what Žižek might call these ‘subjects supposed to eviscerate’ who wield power in our place.

    In the 2016 US Presidential elections, liberalism failed spectacularly to understand the political and to confront its own libidinal investments. While the Clinton campaign did manage to bring certain national security Republicans and moderates to her side in the name of consensus, this reproduced the populist imaginary of a class solidarity of the learned undermining The People’s natural order. Hillary Clinton’s vision of meritocracy included a diverse Silicon Valley cabinet (Healy 2016) and the leadership of “real billionaires.”[7] Meanwhile Trump spoke of the economy in antagonistic terms, using China and the globalist conspiracy to channel a sense of lost community and invert the energies of class conflict. For a section of working-class voters, Trump, the vulgar tax-dodging billionaire, is preferable to a rational meritocracy in which their class position is deserved and their fate is to learn to code or be swept away by the global economy. Friedrich von Hayek wrote that the virtue of the market as a form of justice is that it relies on “chance and good luck” (1941: 105) and not simply merit. However erroneous this formulation of class power, it allows people to accept inequality as based on chance rather than an objective measure of their value. In contrast to Clinton’s humiliating meritocracy, Trump’s charlatanism, multiple bankruptcies and steak infomercials reinscribe this principle of luck and its corollary, enjoyment.

    The comprehensive failure of liberal post-politics did not stem simply from the disavowal of antagonism but also from the fetishization of process. The party’s lockstep support of the neoliberal Clinton in the primary against the left-wing or ‘traditionalist’ Sanders created an insular culture ranging from self-satisfied complacency to corruption. The revelations that the party tampered with the process and coordinated media attacks on Sanders’ religious identity (Biddle 2016) fundamentally threatened liberal political identity and enjoyment. This crisis of legitimacy necessitated another, more threatening dark remnant of political history in order to restore the fetish of process. Since this moment liberals, in politics and the media, have relied on Russia as an omnipotent security threat, coordinating the global resurgence of populism and xenophobia and utilizing Trump as a Manchurian candidate and Sanders as a useful idiot.[8] This precisely demonstrates the logic of fetishist disavowal: liberals know very well that process has been corrupted but nevertheless “they feel satisfied in their [fetish], they experience no need to be rid of [it]” (Žižek 2009: 68). For the liberal political and media class it is easier to believe in a Russian conspiracy of “post-truth politics” than it is to confront one’s own libidinal investments in rationalism and consensus in politics.

    Affective Media Power and Jouissance

    The success of Trump was at once a display of journalistic powerlessness, as he defied predictions and expectations of presidential political behaviour, and of affective media power, as he used access to the field to disrupt the disciplines of professional politics. The campaigns of Clinton and Trump brought into relief the battle over the political meaning of new and affective media. For Clinton’s well-funded team of media strategists and professional campaigners, data would be the means by which they could perfect the politics of rationalism and consensus. Trump’s seemingly chaotic, personality-driven campaign was staked on the politics of jouissance, or ‘the lulz’, and affective identification. Trump represented a fundamental attack on the professional media and political class’s notions of merit and the discourse. And while his politics of reaction and prejudice are thoroughly retrograde, he is completely modern in embodying the values of affective media in eliciting the libidinal energies of his audience.

    By affective media I am not simply referring to new and social media but to the increasingly universal logic of affect at the heart of media. From the labour of promoting brands, celebrities and politicians on social media to the consumption of traditional content on personalized devices and feeds, consumption and production rely upon an emotional investment, sense of user agency, critical knowingness and social connectivity. In this sense we can talk about the convergence of affect as a political economic logic of free labour, self-surveillance and performativity, and the libidinal logic of affective investment, antagonism and enjoyment. Donald Trump is therefore a fitting president for what Jodi Dean calls communicative capitalism (2009), in which personalized affective drives are subsumed in circuits of capital. He exemplifies the super-ego ideal of communicative capitalism and its individuating effects as a narcissist who publicly ‘enjoys’ life and leverages his fame and media stakes to whatever end, whether real estate, media contract negotiations or the presidency.

    The success of Trump’s populism and the contradictory responses he drew from establishment media must be understood in terms of the shifts of media political economy and the concurrent transformation of journalistic values. Journalism has staked its autonomy and cultural capital as a profession on the principle that it is above the fray of politics, providing objective universal truths for a public “assumed to be engaged in a rational process of seeking information” (Baym 2010: 32). Journalism is key to the liberal belief in process, serving a technocratic gatekeeping role to the public sphere. These values are libidinal in the sense that they disavow the reality of the political, are perpetually frustrated by the economic logic of the field, but nevertheless serve as the desired ideal. Bourdieu describes the field of journalism as split between this enlightened liberalism and the economic logic of a “populist spontaneism and demagogic capitulation to popular tastes” (Bourdieu 1998: 48). This was neatly demonstrated in the 2016 election when CBS Chairman Les Moonves spoke of Trump’s campaign to investors: “It may not be good for America, but it’s damn good for CBS” (Collins 2016). The Trump campaign and presidency conform to the commercial values of the field, providing the volatility and spectacle of reality television, and extraordinary ratings for cheap-to-produce content. Faced with these contradictions journalists have oscillated between Edward R. Murrow-esque posturing and a normalization of this spectacle.

    Further to this internal split in the field between liberal values and the economic logic of the Trump spectacle, the process of “mediatization” (Strömbäck and Dimitrova 2011) explains the centrality of affective media to public political life. With neo-liberal post-politics and the diminution of traditional political vehicles and identities, media is the key public space for the autonomous neoliberal subject/media user. The media is ubiquitous in “producing a convergence among all the fields [business, politics, academia] and pulling them closer to the commercial pole in the larger field of power” (Benson 1999: 471). In this way media produces symbolic capital, or affective media power, with which media entrepreneurs can make an end-run around the strictures of professional fields. Trump is exemplary in this regard as all of his ventures, whether in real-estate, broadcasting, social media or in politics, rely upon this affective media power which contradicts the traditional values of the field. The inability of the journalistic and political fields to discipline him owes to both his transcendence of those fields and the indeterminacy of his actions. Trump’s run may well have been simply a matter of opportunism in an attempt to accrue media capital for his other ventures, whether in renegotiating his NBC contract or putting pressure on the Republican party, as he has done previously.

    The logic of Trump is analogous to the individuated subject of communicative capitalism and the injunction to throw yourself into circulation through tweets and posts, craft your brand and identity, expand your reach, become an object of desire and enjoy. He exemplifies mediatized life as “a non-stop entrepreneurial adventure involving the pursuit of multiple revenue streams predicated on the savvy deployment of virtuosic communicative and image skills” (Hearn 2016: 657). Trump is able to bypass the meritocratic constraints of professional fields through the affective identification of a loyal audience in his enjoyment and brand. His long tenure on national television as host of The Apprentice created precisely the template by which Trump could emerge as a populist ego-ideal in communicative capitalism. He is a model of success and the all-powerful and volatile arbiter of success (luck) in a contest between ‘street-smart’ Horatio Algers and aspiring professionals with impeccable Ivy-League resumes. The conceit of the show, which enjoyed great success during some of America’s most troubled economic times, was the release of populist enjoyment through Trump’s wielding of class power. With the simple phrase ‘you’re fired’ he seemingly punishes the people’s enemy and stifles the meritocracy by humiliating upwardly mobile, well-educated social climbers.

    Trump’s ability to channel enjoyment and “the people” of populism relies upon capturing the political and economic logic of affect which runs through contemporary media prosumption (Bruns 2007). From the superfluousness of clickbait and news of celebrity deaths to the irreverent second-person headline writing of the Huffington Post, affect is central to eliciting the sharing, posting and production of content and user data as “free labour” (Terranova 2004). Trump’s adherence to the logic of affective media, combined with a willing audience of affective labour, is what allowed him to defy the disciplines of the field and party, secure disproportionate air-time and overcome a 4-to-1 advertising deficit to the Clinton campaign (Murray 2016). The Trump campaign had a keen sense of the centrality of affect in producing the spectacle of a mass movement, from employing ‘rent-a-crowd’ tactics to using his staff as a cheer squad during public events. In a manner similar to the relationship between the Tea Party and Fox News (Jutel 2013), the performance of large crowds produced the spectacle that secured his populist authenticity. While Fox effectively brought the Tea Party into the fold of traditional movement conservatism, through lobbying groups such as Freedom Works, Trump has connected his mainstream media brand with the online fringes of Breitbart, Info Wars and the so-called ‘alt-right’. It is from this space of politicized affective intensity that users perform free labour for Trump in sharing conspiracies, memes and personal testimony, all to fill the empty signifier ‘Make America Great Again’ with meaning. Trump’s penchant for entertaining wild conspiracies has the effect of sending his online movement into a frenzied “epistemological drive” (Lacan 2007: 106) to uncover the depths of the enemy’s treachery.

    Where the Trump campaign understood the media field as a space to tap antagonism and enjoyment, for Hillary Clinton the promise of new media and its analogue ‘big data’ were a means to perfect communication and post-politics. Clinton was hailed by journalists for assembling “Silicon Valley’s finest” into the “largest” and “smartest” tech team in campaign history (Lapowsky 2016). Where Clinton employed over 60 mathematicians using computer algorithms to direct all campaign spending, “Trump invested virtually nothing in data analytics”, seemingly imperilling the future of the Republican party (Goldmacher 2016). The election of Trump did not simply embarrass the New York Times and others who made confident data-driven projections of a Clinton win (Katz 2016), it fundamentally undermined the liberal “technology fetish” (Dean 2009: 31) of new media in communicative capitalism. Where new media enthusiasts view our tweets and posts as communicative processes which empower and expand democracy, the reality is that hyper-activity masks the trauma and “larger lack of left solidarity” (Dean 2009: 36). Trump is not simply the libidinal excess born of new forms of communication and participation, he realizes the economic logic and incentives of new media prosumption. The affective labour of Trump supporters shares a connective tissue with the clickfarm workers purchased for page likes, the piece-meal digital workers designing promotional material or the Macedonian teenagers who circulate fake news on Facebook for fractions of a penny per click (Casilli 2016). Trump reveals both a libidinal and a political economic truth nestled in the promise of new mediatized and affective forms of politics.

    The clearest demonstration of affective media as a space of enjoyment and antagonism, as opposed to liberal-democratic rationalism, is the rise of the so-called ‘alt-right’ under Trump. In journalistic and academic discourses, new media cultures defined by collaboration and playful transgression are seen as the inheritance of liberalism and the left. From Occupy Wall Street to the Arab Spring, affect is deemed central to enabling new democratizing public formations (Grusin 2010; Papacharissi 2016). The hacker and nerd cultures which proliferate in the so-called ‘weird internet’ of Twitter, Reddit and 4chan have been characterized as “a force for good in the world” (Coleman 2014: 50). Deleuzian affect theory plays a key role here in rejecting the traumatic and inter-subjective dimensions of enjoyment for a notion of affect whose transmission between mediatized bodies is seen as creating ‘rational goals and political effects’ (Stoehrel and Lindgren 2014: 240). Affect is the subcultural currency of this realm, with ‘lulz’ (jouissance) gained through memes, vulgarity and trolling.

    However, as the alt-right claims the culture of the “youthful, subversive, underground edges of the internet” (Bokhari and Yiannopoulos 2016), it is apparent that a politics of affective media is not easily sublimated into anything other than the circular logic of jouissance. It was in fact ‘weev’, profiled in Coleman’s book on Anonymous as the archetypal troll, who claims to have launched ‘Operation Pepe’ to turn the Pepe the Frog meme into a ubiquitous form of alt-right enjoyment as a prelude to race war (Sklar 2016). Trolling defines the alt-right and exemplifies the intractability of the other in enjoyment. Alt-righters might enjoy brutally dehumanizing their opponents in the purest terms of racism, anti-semitism and misogyny, but this is coupled with an obsessive focus ranging from ‘political correctness’ on college campuses through to pure fascist and racist nightmares of miscegenation and the other’s enjoyment. It should be clear that we are in the realm of pathological enjoyment and violent libidinal frustration, particularly as the alt-right overlaps with the “manosphere” of unbridled misogyny and obsession with sexual hierarchies (Nagle 2017). The term “cuckservative” has become a prominent signifier of derision and enjoyment marking establishment conservatives as cuckolded or impotent, clearly placing libidinal power at the centre of identity. But it is also self-consciously referencing the genre of inter-racial ‘cuckold’ pornography in which the racial other’s virility is a direct threat to their own potency (Heer 2016). With the rise of the alt-right to prominence within internet subcultures and the public discourse it should be clear that affect offers no shortcut to a latent humanism, but rather to populism and the logic of jouissance.

    Conclusion

    The election of Donald Trump, an ill-tempered narcissist uniquely unqualified for the role of US President, does not simply highlight a breakdown of the political centre, professional politics and the fourth estate. Trump’s populism speaks to the centrality of the libidinal, that is antagonism and enjoyment, to political identity. His vulgarity, scandals and outbursts were not a political liability for Trump but what marked him as an antagonistic agent of jouissance able to bring a people into being around his candidacy. In his paeans to lost American greatness he elicits fantasy, lost enjoyment and the antagonistic jouissance of vilifying those who have stolen “America” as an object of enjoyment. Trump’s own volatility and corruption are not political failings but what give the populist the fantasy of wielding unrestrained power. This overriding principle of jouissance is what allows disparate strains of conservatism, from evangelicals and paleo-conservatives to the alt-right, to coalesce around his candidacy.

    The centrality of Trump to the emergence of a people echoes Freud’s classic study of the leader and crowd psychology. He is a paternal super-ego, referred to as ‘Daddy’ by the alt-right, through whom his followers can identify with one another. However, rather than a figure of domination he embodies the neoliberal injunction to enjoy. In a political space of mediatized individuation Trump provides followers with different points of affective identification rather than subsumption to his paternal authority. His own improbable run to the presidency personified the neo-liberal ethic to publicly enjoy, become an object of desire and ruthlessly maximise new opportunities.

    The response to Trump by the liberal political and media class demonstrates the libidinal entanglement between populism and neo-liberal post-politics. The more Trump defies political norms of decency the more he defines the negative liberal identity of urgent anti-fascism. The ascendance of reactionary populism from Fox News, the Tea Party and Trump has been met in the media sphere with new liberal forms of enjoyment, from Daily Show-style comedy to new authoritative data-driven forms of journalism. The affinity between Hillary Clinton and elite media circles owes to a solidarity of professionals. There is a belief in process, data and consensus which is only strengthened by the menace of Trump. The retreat to data functions as an endless circular process and fetish object which shields them from the trauma of the political and liberalism’s failure. It is from this space that the media could fail to consider both the prospects of a Trump presidency and their own libidinal investment in technocratic post-politics. When the unthinkable occurred it became necessary to attribute to Trump an over-determined evil encompassing the spectre of Russia and domestic fifth columnists responsible for a ‘post-facts’ political environment.

    Affective media power was central to Trump’s ascendance. Where journalists and the Clinton campaign imagined the new media field as a space for rationalism and process, Trump understood its economic and political logic. His connection to an audience movement, invested in him as an ego-ideal, allowed him to access the heights of the media and political fields without conforming to the disciplines of either. He at once defines the field through his celebrity and performances, which generate outrageous, cheap-to-produce content with each news cycle, while opening this space to the pure affective intensity of the alt-right. It is the free labour of his followers which produced the spectacle of Trump and filled the empty signifier of American greatness with personal testimonies and affective investments.

    Trump’s pandering to conspiracy and his unyielding defiance of decorum allowed him to function as a paternal figure of enjoyment in affective media spaces. Where new media affect theory has posited a latent humanist potential, the emergence of Trump underlines the primacy of jouissance. In the alt-right the subcultural practices of trolling and ‘the lulz’ function as a circular jouissance composed of the most base dehumanization and the concomitant racial and sexual terror. New media have been characterized as spaces of playful transgression; however, in the alt-right we find a jouissance for its own end that clearly cannot be sublimated into emancipatory politics, as it remains stuck within the inter-subjective dimensions of enjoyment. Jodi Dean has described the effects of communicative capitalism as producing a ‘decline of symbolic efficiency’ (2010: 5), with new communicative technologies failing to overcome neoliberal individuation. Left attempts to organize around the principles of affective media, such as Occupy, remain stuck within discursive loops of misrecognition. Trump’s pure jouissance is precisely the return of symbolic efficiency that is most possible through a politics of affective media.

    _____

    Olivier Jutel (@OJutel) is a lecturer in broadcast journalism at the University of the South Pacific in Fiji. His research is concerned with populism, American politics, cyberlibertarianism, psychoanalysis and critical theory. He is a frequent contributor to Overland literary journal.


    _____

    Notes

    [1] While one should avoid constructing Trump as an enemy of pure jouissance, analogous to the enemy of populism, the barefaced boasts of sexual predation are truly horrific (see Stuart 2016).

    [2] While Laclau holds that all political ruptures have the structure of populism, I believe it is important to distinguish between a populism which constructs an overdetermined enemy and a fetishized people, and a politics which delineates an enemy in ethico-political terms. Bernie Sanders clearly deploys populist discourse; however, his identification of finance capital and oligarchy as impersonal, objective forces places him solidly in social-democratic politics.

    [3] The most widely circulated conspiracy to emerge from the campaign was ‘Pizzagate’. Fed by the Drudge Report, Info Wars and a flurry of online activity, the conspiracy is based on the belief that the WikiLeaks dump of emails from the Clinton campaign chairman revealed his complicity in a satanic paedophilia ring run out of the Comet Ping Pong pizzeria in Washington, D.C. A YouGov/Economist poll found that 53% of Trump voters believed in the conspiracy (Frankovic 2016).

    [4] Having secured a primary victory against the left-wing Bernie Sanders, Clinton’s general election tack consisted principally of appealing to moderate Republicans. Democratic Senate Leader Chuck Schumer explained the strategy: “For every blue-collar Democrat we lose in Western Pennsylvania, we will pick up two moderate Republicans in the suburbs in Philadelphia, and you can repeat that in Ohio, Illinois and Wisconsin” (Geraghty 2016). While a ruinous strategy, it appealed to notions of a virtuous, rational political centre.

    [5] In the build-up to the Michigan primary contest, and with the Flint water crisis foregrounded, Clinton’s Twitter account posted a network diagram which typifies the tech-rationalist notion of progressive politics. The text written by staffers stated “We face a complex, intersectional set of challenges. We need solutions and real plans for all of them” (Clinton 2016). The diagram pictured interrelated concepts such as “Accountable Leadership”, “Environmental Protection” and “Investment in Communities of Color”. The conflation of intersectional discourse with network-speak is instructive. Politics is not a question of ideology or power but of managing social complexity through expert-driven policy solutions.

    [6] This form of satire is well within the confines of the contemporary liberal conception of the political. Jon Stewart’s pseudo-political event “The Rally to Restore Sanity” is instructive here as it sought primarily to mock right-wing populists but also those on the left who hold passionate political convictions (Ames 2010). What is more important here than defeating the retrograde politics of the far-right is maintaining civility in the discourse.

    [7] At a campaign stop in Palm Beach, Florida, Clinton stated that “I love having the support of real billionaires. Donald gives a bad name to billionaires” (Kleinberg 2016).

    [8] The Russia narrative was aggressively pushed by the Clinton campaign in the aftermath of the shock defeat. In Allen and Parnes’ behind-the-scenes book on the campaign, they describe a failure to take responsibility with “Russian hacking…the centre piece of her argument” (2017: 238). While Russia is certainly an autocratic state with competing interests and a capable cyber-espionage apparatus, claims of Russia hacking the US election are both thin and ascribed far too much explanatory power. They rely upon the analysis of the DNC’s private cyber security firm CrowdStrike and a report from the Director of National Intelligence that was widely panned by Russian Studies scholars (Gessen 2017; Mickiewicz 2017). Subsequent scandals concerning the Trump administration have far more to do with their sheer incompetence and recklessness than a conspiracy to subvert American democracy.

    _____

    Works Cited

     

  • Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release

    Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release

    By Audrey Watters

    ~

    This talk was delivered at Virginia Commonwealth University today as part of a seminar co-sponsored by the Departments of English and Sociology and the Media, Art, and Text PhD Program. The slides are also available here.

    Thank you very much for inviting me here to speak today. I’m particularly pleased to be speaking to those from Sociology, those from English, and those from the Media, Art, and Text program, and I hope my talk can walk the line between and among disciplines and methods – or piss everyone off in equal measure. Either way.

    This is the last public talk I’ll deliver in 2016, and I confess I am relieved (I am exhausted!) as well as honored to be here. But when I finish this talk, my work for the year isn’t done. No rest for the wicked – ever, but particularly in the freelance economy.

    As I have done for the past six years, I will spend the rest of November and December publishing my review of what I deem the “Top Ed-Tech Trends” of the year. It’s an intense research project that usually tops out at about 75,000 words, written over the course of four to six weeks. I pick ten trends and themes in order to look closely at the recent past, the near-term history of education technology. Because of the amount of information that is published about ed-tech – the amount of information, its irrelevance, its incoherence, its lack of context – it can be quite challenging to keep up with what is really happening in ed-tech. And just as importantly, what is not happening.

    So that’s what I try to do. And I’ll boast right here – no shame in that – no one else does as in-depth or thorough a job as me, certainly no one who is entirely independent from venture capital, corporate or institutional backing, or philanthropic funding. (Of course, if you look for those education technology writers who are independent from venture capital, corporate or institutional backing, or philanthropic funding, there is pretty much only me.)

    The stories that I write about the “Top Ed-Tech Trends” are the antithesis of most articles you’ll see about education technology that invoke “top” and “trends.” For me, still framing my work that way – “top trends” – is a purposeful rhetorical move to shed light, to subvert, to offer a sly commentary of sorts on the shallowness of what passes as journalism, criticism, analysis. I’m not interested in making quickly thrown-together lists and bullet points. I’m not interested in publishing clickbait. I am interested nevertheless in the stories – shallow or sweeping – that we tell and spread about technology and education technology, about the future of education technology, about our technological future.

    Let me be clear, I am not a futurist – even though I’m often described as “ed-tech’s Cassandra.” The tagline of my website is “the history of the future of education,” and I’m much more interested in chronicling the predictions that others make, or have made, about the future of education than I am in writing predictions of my own.

    One of my favorites: “Books will soon be obsolete in schools,” Thomas Edison said in 1913. Any day now. Any day now.

    Here are a couple of more recent predictions:

    “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.” – that’s Sebastian Thrun, best known perhaps for his work at Google on the self-driving car and as a co-founder of the MOOC (massive open online course) startup Udacity. The quotation is from 2012.

    And from 2013, by Harvard Business School professor, author of the book The Innovator’s Dilemma, and popularizer of the phrase “disruptive innovation,” Clayton Christensen: “In fifteen years from now, half of US universities may be in bankruptcy. In the end I’m excited to see that happen. So pray for Harvard Business School if you wouldn’t mind.”

    Pray for Harvard Business School. No. I don’t think so.

    Both of these predictions are fantasy. Nightmarish, yes. But fantasy. Fantasy about a future of education. It’s a powerful story, but not a prediction made based on data or modeling or quantitative research into the growing (or shrinking) higher education sector. Indeed, according to the latest statistics from the Department of Education – now granted, this is from the 2012–2013 academic year – there are 4726 degree-granting postsecondary institutions in the United States. A 46% increase since 1980. There are, according to another source (non-governmental and less reliable, I think), over 25,000 universities in the world. This number is increasing year-over-year as well. So to predict that the vast vast majority of these schools (save Harvard, of course) will go away in the next decade or so or that they’ll be bankrupt or replaced by Silicon Valley’s version of online training is simply wishful thinking – dangerous, wishful thinking from two prominent figures who will benefit greatly if this particular fantasy comes true (and not just because they’ll get to claim that they predicted this future).

    Here’s my “take home” point: if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

    Fantasy. Fortune-telling. Or as capitalism prefers to call it “market research.”

    “Market research” involves fantastic stories of future markets. These predictions are often accompanied by a press release touting the size that this or that market will soon grow to – how many billions of dollars schools will spend on computers by 2020, how many billions of dollars of virtual reality gear schools will buy by 2025, how many billions of dollars schools will spend on robot tutors by 2030, how many billions of dollars companies will spend on online training by 2035, how big the coding bootcamp market will be by 2040, and so on. The markets, according to the press releases, are always growing. Fantasy.

    In 2011, the analyst firm Gartner predicted that annual tablet shipments would exceed 300 million units by 2015. Half of those, the firm said, would be iPads. IDC estimates that the total number of shipments in 2015 was actually around 207 million units. Apple sold just 50 million iPads. That’s not even the best worst Gartner prediction. In October of 2006, Gartner said that Apple’s “best bet for long-term success is to quit the hardware business and license the Mac to Dell.” Less than three months later, Apple introduced the iPhone. The very next day, Apple shares hit $97.80, an all-time high for the company. By 2012 – yes, thanks to its hardware business – Apple’s stock had risen to the point that the company was worth a record-breaking $624 billion.

    But somehow, folks – including many, many in education and education technology – still pay attention to Gartner. They still pay Gartner a lot of money for consulting and forecasting services.

    People find comfort in these predictions, in these fantasies. Why?

    Gartner is perhaps best known for its “Hype Cycle,” a proprietary graphic presentation that claims to show how emerging technologies will be adopted.

    According to Gartner, technologies go through five stages: first, there is a “technology trigger.” As the new technology emerges, a lot of attention is paid to it in the press. Eventually it reaches the second stage: the “peak of inflated expectations.” So many promises have been made about this technological breakthrough. Then, the third stage: the “trough of disillusionment.” Interest wanes. Experiments fail. Promises are broken. As the technology matures, the hype picks up again, more slowly – this is the “slope of enlightenment.” Eventually the new technology becomes mainstream – the “plateau of productivity.”

    It’s not that hard to identify significant problems with the Hype Cycle, not least of which being that it’s not a cycle. It’s a curve. It’s not a particularly scientific model. It demands that technologies always move forward along it.

    Gartner says its methodology is proprietary – which is code for “hidden from scrutiny.” Gartner says, rather vaguely, that it relies on scenarios and surveys and pattern recognition to place technologies on the line. But most of the time when Gartner uses the word “methodology,” it is trying to signify “science,” and what it really means is “expensive reports you should buy to help you make better business decisions.”

    Can it really help you make better business decisions? It’s just a curve with some technologies plotted along it. The Hype Cycle doesn’t help explain why technologies move from one stage to another. It doesn’t account for technological precursors – new technologies rarely appear out of nowhere – or political or social changes that might prompt or preclude adoption. And in the end it is simply too optimistic, unreasonably so, I’d argue. No matter how dumb or useless a new technology is, according to the Hype Cycle at least, it will eventually become widely adopted. Where would you plot the Segway, for example? (In 2008, ever hopeful, Gartner insisted that “This thing certainly isn’t dead and maybe it will yet blossom.” Maybe it will, Gartner. Maybe it will.)

    And maybe this gets to the heart as to why I’m not a futurist. I don’t share this belief in an increasingly technological future; I don’t believe that more technology means the world gets “more better.” I don’t believe that more technology means that education gets “more better.”

    Every year since 2004, the New Media Consortium, a non-profit organization that advocates for new media and new technologies in education, has issued its own forecasting report, the Horizon Report, naming a handful of technologies that, as the name suggests, it contends are “on the horizon.”

    Unlike Gartner, the New Media Consortium is fairly transparent about how this process works. The organization invites various “experts” to participate in the advisory board that, throughout the course of each year, works on assembling its list of emerging technologies. The process relies on the Delphi method, whittling down a long list of trends and technologies by a process of ranking and voting until six key trends, six emerging technologies remain.

    Disclosure/disclaimer: I am a folklorist by training. The last time I took a class on “methods” was, like, 1998. And admittedly I never learned about the Delphi method – what the New Media Consortium uses for this research project – until I became a scholar of education technology looking into the Horizon Report. As a folklorist, of course, I did catch the reference to the Oracle of Delphi.

    Like so much of computer technology, the roots of the Delphi method are in the military, developed during the Cold War to forecast technological developments that the military might use and that the military might have to respond to. The military wanted better predictive capabilities. But – and here’s the catch – it wanted to identify technology trends without being caught up in theory. It wanted to identify technology trends without developing models. How do you do that? You gather experts. You get those experts to reach consensus.

    So here is the consensus from the past twelve years of the Horizon Report for higher education. These are the technologies it has identified that are between one and five years from mainstream adoption:

    It’s pretty easy, as with the Gartner Hype Cycle, to look at these predictions and note that they are almost all wrong in some way or another.

    Some are wrong because, say, the timeline is a bit off. The Horizon Report said in 2010 that “open content” was less than a year away from widespread adoption. I think we’re still inching towards that goal – admittedly “open textbooks” have seen a big push at the federal and at some state levels in the last year or so.

    Some of these predictions are just plain wrong. Virtual worlds in 2007, for example.

    And some are wrong because, to borrow a phrase from the theoretical physicist Wolfgang Pauli, they’re “not even wrong.” Take “collaborative learning,” for example, which this year’s K–12 report posits as a mid-term trend. Like, how would you argue against “collaborative learning” as occurring – now or some day – in classrooms? As a prediction about the future, it is not even wrong.

    But wrong or right – that’s not really the problem. Or rather, it’s not the only problem even if it is the easiest critique to make. I’m not terribly concerned about the accuracy of the predictions about the future of education technology that the Horizon Report has made over the last decade. But I do wonder how these stories influence decision-making across campuses.

    What might these predictions – this history of the future – tell us about the wishful thinking surrounding education technology, and about the direction that the people the New Media Consortium views as “experts” want the future to take? What can we learn about the future by looking at the history of our imaginings of education’s future? What role does powerful ed-tech storytelling (also known as marketing) play in shaping that future? Because remember: to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.

    It’s both convenient and troubling, then, that these forward-looking reports act as though they have no history of their own; they purposefully minimize or erase their own past. Each year – and I think this is what irks me most – the NMC fails to look back at what it predicted just the year before. It never revisits older predictions. It never mentions that they even exist. Gartner, too, removes technologies from the Hype Cycle each year with no explanation of what happened, no explanation as to why trends suddenly appear and disappear and reappear. These reports only look forward, with no history to ground their direction in.

    I understand why these sorts of reports exist, I do. I recognize that they are rhetorically useful to certain people in certain positions making certain claims about “what to do” in the future. You can write in a proposal that, “According to Gartner… blah blah blah.” Or “The Horizon Report indicates that this is one of the most important trends in coming years, and that is why we need to commit significant resources – money and staff – to this initiative.” But then, let’s be honest, these reports aren’t about forecasting a future. They’re about justifying expenditures.

    “The best way to predict the future is to invent it,” computer scientist Alan Kay once famously said. I’d wager that the easiest way is just to make stuff up and issue a press release. I mean, really. You don’t even need the pretense of a methodology. Nobody is going to remember what you predicted. Nobody is going to remember if your prediction was right or wrong. Nobody – certainly not the technology press, which is often painfully unaware of any history, near-term or long ago – is going to take you to task. This is particularly true if you make your prediction vague – like “within our lifetime” – or set your target date just far enough in the future – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Let’s consider: is there something about the field of computer science in particular – and its ideological underpinnings – that makes it more prone to encourage, embrace, espouse these sorts of predictions? Is there something about Americans’ faith in science and technology, about our belief in technological progress as a signal of socio-economic or political progress, that makes us more susceptible to taking these predictions at face value? Is there something about our fears and uncertainties – and not just now, days before this Presidential Election, when we are obsessively refreshing Nate Silver’s website for the latest polls – that makes us prone to seek comfort, reassurance, certainty from those who claim that they know what the future will hold?

    “Software is eating the world,” investor Marc Andreessen pronounced in a Wall Street Journal op-ed in 2011. “Over the next 10 years,” he wrote, “I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.” Buy stock in technology companies was really the underlying message of Andreessen’s op-ed; this isn’t another tech bubble, he wanted to reassure investors. But many in Silicon Valley have interpreted this pronouncement – “software is eating the world” – as an affirmation and an inevitability. I hear it repeated all the time – “software is eating the world” – as though, once again, repeating things makes them true or makes them profound.

    If we believe that, indeed, “software is eating the world,” that we are living in a moment of extraordinary technological change, that we must – according to Gartner or the Horizon Report – be ever-vigilant about emerging technologies, that these technologies are contributing to uncertainty, to disruption, then it seems likely that we will demand a change in turn to our educational institutions (to lots of institutions, but let’s just focus on education). This is why this sort of forecasting is so important for us to scrutinize – to do so quantitatively and qualitatively, to look at methods and at theory, to ask who’s telling the story and who’s spreading the story, to listen for counter-narratives.

    This technological change, according to some of the most popular stories, is happening faster than ever before. It is creating an unprecedented explosion in the production of information. New information technologies, so we’re told, must therefore change how we learn – change what we need to know, how we know, how we create and share knowledge. Because of the pace of change and the scale of change and the locus of change (that is, “Silicon Valley” not “The Ivory Tower”) – again, so we’re told – our institutions, our public institutions can no longer keep up. These institutions will soon be outmoded, irrelevant. Again – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    These forecasting reports, these predictions about the future make themselves necessary through this powerful refrain, insisting that technological change is creating so much uncertainty that decision-makers need to be ever vigilant, ever attentive to new products.

    As Neil Postman and others have cautioned us, technologies tend to become mythic – unassailable, God-given, natural, irrefutable, absolute. So it is predicted. So it is written. Techno-scripture, to which we hand over a certain level of control – to the technologies themselves, sure, but just as importantly to the industries and the ideologies behind them. Take, for example, the founding editor of the technology trade magazine Wired, Kevin Kelly. His 2010 book was called What Technology Wants, as though technology is a living being with desires and drives; the title of his 2016 book, The Inevitable. We humans, in this framework, have no choice. The future – a certain flavor of technological future – is pre-ordained. Inevitable.

    I’ll repeat: I am not a futurist. I don’t make predictions. But I can look at the past and at the present in order to dissect stories about the future.

    So is the pace of technological change accelerating? Is society adopting technologies faster than it’s ever done before? Perhaps it feels like it. It certainly makes for a good headline, a good stump speech, a good keynote, a good marketing claim, a good myth. But the claim starts to fall apart under scrutiny.

    This graph comes from an article in the online publication Vox that includes a couple of those darling made-to-go-viral videos of young children using “old” technologies like rotary phones and portable cassette players – highly clickable, highly sharable stuff. The visual argument in the graph: the number of years it takes for one quarter of the US population to adopt a new technology has been shrinking with each new innovation.

    But the data is flawed. Some of the dates given for these inventions are questionable at best, if not outright inaccurate. If nothing else, it’s not so easy to pinpoint the exact moment, the exact year when a new technology came into being. There often are competing claims as to who invented a technology and when, for example, and there are early prototypes that may or may not “count.” James Clerk Maxwell did publish A Treatise on Electricity and Magnetism in 1873. Alexander Graham Bell made his famous telephone call to his assistant in 1876. Guglielmo Marconi did file his patent for radio in 1897. John Logie Baird demonstrated a working television system in 1926. The MITS Altair 8800, an early personal computer that came as a kit you had to assemble, was released in 1975. But Martin Cooper, a Motorola exec, made the first mobile telephone call in 1973, not 1983. And the Internet? The first ARPANET link was established between UCLA and the Stanford Research Institute in 1969. The Internet was not invented in 1991.

    So we can reorganize the bar graph. But it’s still got problems.

    The Internet did become more privatized, more commercialized around that date – 1991 – and thanks to companies like AOL, a version of it became more accessible to more people. But if you’re looking at when technologies became accessible to people, you can’t use 1873 as your date for electricity, you can’t use 1876 as your year for the telephone, and you can’t use 1926 as your year for the television. It took years for the infrastructure of electricity and telephony to be built, for access to become widespread; and subsequent technologies, let’s remember, have simply piggy-backed on these existing networks. Our Internet service providers today are likely telephone and TV companies; our houses are already wired for new WiFi-enabled products and predictions.

    Economic historians who are interested in these sorts of comparisons of technologies and their effects typically set the threshold at 50% – that is, how long does it take after a technology is commercialized (not simply “invented”) for half the population to adopt it. This way, you’re not only looking at the economic behaviors of the wealthy, the early-adopters, the city-dwellers, and so on (though to be clear, you are still looking at a particular demographic – the privileged half).

    And that changes the graph again:

    How many years do you think it’ll be before half of US households have a smart watch? A drone? A 3D printer? Virtual reality goggles? A self-driving car? Will they? Will it take fewer than nine years? I mean, it would have to if, indeed, “technology” is speeding up and we are adopting new technologies faster than ever before.

    Some of us might adopt technology products quickly, to be sure. Some of us might eagerly buy every new Apple gadget that’s released. But we can’t claim that the pace of technological change is speeding up just because we personally go out and buy a new iPhone every time Apple tells us the old model is obsolete. Removing the headphone jack from the latest iPhone does not mean “technology changing faster than ever,” nor does showing how headphones have changed since the 1970s. None of this is really a reflection of the pace of change; it’s a reflection of our disposable income and an ideology of obsolescence.

    Some economic historians like Robert J. Gordon actually contend that we’re not in a period of great technological innovation at all; instead, we find ourselves in a period of technological stagnation. The changes brought about by the development of information technologies in the last 40 years or so pale in comparison, Gordon argues (and this is from his recent book The Rise and Fall of American Growth: The US Standard of Living Since the Civil War), to those “great inventions” that powered massive economic growth and tremendous social change in the period from 1870 to 1970 – namely electricity, sanitation, chemicals and pharmaceuticals, the internal combustion engine, and mass communication. But that doesn’t jibe with “software is eating the world,” does it?

    Let’s return briefly to those Horizon Report predictions again. They certainly reflect this belief that technology must be speeding up. Every year, there’s something new. There has to be. That’s the purpose of the report. The horizon is always “out there,” off in the distance.

    But if you squint, you can see each year’s report also reflects a decided lack of technological change. Every year, something is repeated – perhaps rephrased. And look at the predictions about mobile computing:

    • 2006 – the phones in their pockets
    • 2007 – the phones in their pockets
    • 2008 – oh crap, we don’t have enough bandwidth for the phones in their pockets
    • 2009 – the phones in their pockets
    • 2010 – the phones in their pockets
    • 2011 – the phones in their pockets
    • 2012 – the phones too big for their pockets
    • 2013 – the apps on the phones too big for their pockets
    • 2015 – the phones in their pockets
    • 2016 – the phones in their pockets

    This hardly makes the case for technology speeding up, for technology changing faster than it’s ever changed before. But that’s the story that people tell nevertheless. Why?

    I pay attention to this story, as someone who studies education and education technology, because I think these sorts of predictions, these assessments about the present and the future, frequently serve to define, disrupt, and destabilize our institutions. This is particularly pertinent to our schools, which are already caught between a boundedness to the past – replicating scholarship and cultural capital, for example – and the demand that they bend to the future – preparing students for civic, economic, and social relations yet to be determined.

    But I also pay attention to these sorts of stories because there’s that part of me that is horrified at the stuff – predictions – that people pass off as true or as inevitable.

    “65% of today’s students will be employed in jobs that don’t exist yet.” I hear this statistic cited all the time. And it’s important, rhetorically, that it’s a statistic – that gives the appearance of being scientific. Why 65%? Why not 72% or 53%? How could we even know such a thing? Some people cite this as a figure from the Department of Labor. It is not. I can’t find its origin – but it must be true: a futurist said it in a keynote, and the video was posted to the Internet.

    The statistic is particularly amusing when quoted alongside one of the many predictions we’ve been inundated with lately about the coming automation of work. In 2014, The Economist asserted that “nearly half of American jobs could be automated in a decade or two.” “Before the end of this century,” Wired Magazine’s Kevin Kelly announced earlier this year, “70 percent of today’s occupations will be replaced by automation.”

    Therefore the task for schools – and I hope you can start to see where these different predictions start to converge – is to prepare students for a highly technological future, a future that has been almost entirely severed from the systems and processes and practices and institutions of the past. And if schools cannot conform to this particular future, then “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Now, I don’t believe that there’s anything inevitable about the future. I don’t believe that Moore’s Law – that the number of transistors on an integrated circuit doubles every two years and therefore computers are always exponentially smaller and faster – is actually a law. I don’t believe that robots will take, let alone need take, all our jobs. I don’t believe that YouTube has rendered school irrevocably out-of-date. I don’t believe that technologies are changing so quickly that we should hand over our institutions to entrepreneurs, privatize our public sphere for techno-plutocrats.

    I don’t believe that we should cheer Elon Musk’s plans to abandon this planet and colonize Mars – he’s predicted he’ll do so by 2026. I believe we stay and we fight. I believe we need to recognize this as an ego-driven escapist evangelism.

    I believe we need to recognize that predicting the future is a form of evangelism as well. Sure, it gets couched in terms of science; it is underwritten by global capitalism. But it’s a story – a story that then takes on these mythic proportions, insisting that it is unassailable, unverifiable, but true.

    The best way to invent the future is to issue a press release. The best way to resist this future is to recognize that, once you poke at the methodology and the ideology that underpins it, a press release is all that it is.

    A special thanks to Tressie McMillan Cottom and David Golumbia for organizing this talk, and to Mike Caulfield for always helping me hash out these ideas.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.


  • Data and Desire in Academic Life

    Data and Desire in Academic Life

    a review of Erez Aiden and Jean-Baptiste Michel, Uncharted: Big Data as a Lens on Human Culture (Riverhead Books, reprint edition, 2014)
    by Benjamin Haber
    ~

    On a recent visit to San Francisco, I found myself trying to purchase groceries when my credit card was declined. As the cashier was telling me this news, and before I really had time to feel any particular way about it, my leg vibrated. I’d received a text: “Chase Fraud-Did you use card ending in 1234 for $100.40 at a grocery store on 07/01/2015? If YES reply 1, NO reply 2.” After replying “yes” (which was recognized even though I failed to follow instructions), I swiped my card again and was out the door with my food. Many have probably had a similar experience: most if not all credit card companies automatically track purchases for a variety of reasons, including fraud prevention, the tracking of illegal activity, and to offer tailored financial products and services. As I walked out of the store, for a moment, I felt the power of “big data,” how real-time consumer information can be read as a predictor of a stolen card in less time than I had to consider why my card had been declined. It was a too-rare moment of reflection on those networks of activity that modulate our life chances and capacities, mostly below and above our conscious awareness.

    And then I remembered: didn’t I buy my plane ticket with the points from that very credit card? And in fact, hadn’t I used that card on multiple occasions in San Francisco for purchases not much less than the amount my groceries cost? While the near-instantaneous text provided reassurance before I could consciously recognize my anxiety, the automatic card decline was likely not a sophisticated real-time data-enabled prescience, but a rather blunt instrument, flagging the transaction on the basis of two data points: distance from home and amount of purchase. In fact, there is plenty of evidence to suggest that the gap between data collection and processing, between metadata and content, and between the current reality of data and its speculative future is still quite large. While Target’s pregnancy-predicting algorithm was a journalistic sensation, the more mundane computational confusion that has Gmail constantly serving me advertisements for trade and business schools shows the striking gap between the possibilities of what is collected and the current landscape of computationally prodded behavior. The text from Chase, your Klout score, the vibration of your FitBit, or the probabilistic genetic information from 23andMe are all primarily affective investments in mobilizing a desire for data’s future promise. These companies and others are opening up new ground for discourse via affect, creating networked infrastructures for modulating the body and social life.

    I was thinking about this while reading Uncharted: Big Data as a Lens on Human Culture, a love letter to the power and utility of algorithmic processing of the words in books. Though ostensibly about the Google Ngram Viewer, a neat if one-dimensional tool to visualize the word frequency of a portion of the books scanned by Google, Uncharted is also unquestionably involved in the mobilization of desire for quantification. Though about the academy rather than financialization, medicine, sports or any other field being “revolutionized” by big data, its breathless boosterism and obligatory cautions are emblematic of the emergent datafied spirit of capitalism, a celebratory “coming out” of the quantifying systems that constitute the emergent infrastructures of sociality.

    While published fairly recently, in 2013, Uncharted already feels dated in its strangely muted engagement with the variety of serious objections to sprawling corporate and state-run data systems in the post-Snowden, post-Target, post-Ashley Madison era (a list that will always be in need of update). There is still the dazzlement about the sheer magnificent size of this potential new suitor—“If you wrote out all five zettabytes that humans produce every year by hand, you would reach the core of the Milky Way” (11)—all the more impressive when explicitly compared to the dusty old technologies of ink and paper. Authors Erez Aiden and Jean-Baptiste Michel are floating in a world of “simple and beautiful” formulas (45), “strange, fascinating and addictive” methods (22), producing “intriguing, perplexing and even fun” conclusions (119) in their drive to colonize the “uncharted continent” (76) that is the English language. The almost erotic desire for this bounty is made more explicit in their tongue-in-cheek characterization of their meetings with Google employees as an “irresistible… mating dance” (22):

    Scholars and scientists approach engineers, product managers, and even high-level executives about getting access to their companies’ data. Sometimes the initial conversation goes well. They go out for coffee. One thing leads to another, and a year later, a brand-new person enters the picture. Unfortunately this person is usually a lawyer. (22)

    There is a lot to unpack in these metaphors, the recasting of academic dependence on data systems designed and controlled by corporate entities as a sexy new opportunity for scholars and scientists. There are important conversations to be had about these circulations of quantified desire; about who gets access to this kind of data, the ethics of working with companies who have an existential interest in profit and shareholder return, and the cultural significance of wrapping business transactions in the language of heterosexual coupling. Here, however, I am mostly interested in the real allure that this passage and others speak to, and the attendant fear that mostly whispers, at least in a book written by Harvard PhDs with TED talks to give.

    For most academics in the social sciences and the humanities, “big data” is a term more likely to get caught in the throat than to inspire butterflies in the stomach. While Aiden and Michel certainly acknowledge that old-fashioned textual analysis (50) and theory (20) will have a place in this brave new world of charts and numbers, they provide a number of contrasts to suggest the relative poverty of even the most brilliant scholar in the face of big data. One hypothetical in particular, which is not directly answered but is strongly implied, spoke to my discipline specifically:

    Consider the following question: Which would help you more if your quest was to learn about contemporary human society—unfettered access to a leading university’s department of sociology, packed with experts on how societies function, or unfettered access to Facebook, a company whose goal is to help mediate human social relationships online? (12)

    The existential threat at the heart of this question was catalyzed for many people in Roger Burrows and Mike Savage’s 2007 “The Coming Crisis of Empirical Sociology,” an early canary singing the worry of what Nigel Thrift has called “knowing capitalism” (2005). Knowing capitalism speaks to the ways that capitalism has begun to take seriously the task of “thinking the everyday” (1) by embedding information technologies within “circuits of practice” (5). For Burrows and Savage these practices can and should be seen as a largely unrecognized world of sophisticated and profit-minded sociology that makes the quantitative tools of academics look like “a very poor instrument” in comparison (2007: 891).

    Indeed, as Burrows and Savage note, the now ubiquitous social survey is a technology invented by social scientists, folks who were once seen as strikingly innovative methodologists (888). Despite ever more sophisticated statistical treatments, however, the now more than forty-year-old social survey remains the heart of social scientific quantitative methodology in a radically changed context. And while declining response rates, a constraining nation-based framing, and competition from privately-funded surveys have all decreased the efficacy of academic survey research (890), nothing has threatened the discipline like the embedded and “passive” collecting technologies that fuel big data. And with these methodological changes come profound epistemological ones: questions of how, when, why and what we know of the world. These methods are inspiring changing ideas of generalizability and new expectations around the temporality of research. Does it matter, for example, that studies have questioned the accuracy of the FitBit? The growing popularity of these devices suggests at the very least that sociologists should not count on empirical rigor to save them from irrelevance.

    As academia reorganizes around the speculative potential of digital technologies, there is an increasing pile of capital available to those academics able to translate between the discourses of data capitalism and a variety of disciplinary traditions. And the lure of this capital is perhaps strongest in the humanities, whose scholars have been disproportionately affected by state economic retrenchment on education spending that has increasingly prioritized quantitative, instrumental, and skill-based majors. The increasing urgency in the humanities to use bigger and faster tools is reflected in the surprisingly minimal hand-wringing over the politics of working with companies like Facebook, Twitter and Google. If there is trepidation in the N-gram project recounted in Uncharted, it is mostly coming from Google, whose lawyers and engineers have little incentive to bother themselves with the politically fraught, theory-driven, Institutional Review Board slow lane of academic production. The power imbalance of this courtship leaves those academics who decide to partner with these companies at the mercy of their epistemological priorities and, as Uncharted demonstrates, the cultural aesthetics of corporate tech.

    This is a vision of the public humanities refracted through the language of public relations and the “measurable outcomes” culture of the American technology industry. Uncharted has taken to heart the power of (re)branding to change the valence of your work: Aiden and Michel would like you to call their big data inflected historical research “culturomics” (22). In addition to a hopeful attempt to coin a buzzy new word about the digital, culturomics linguistically brings the humanities closer to the supposed precision, determination and quantifiability of economics. And lest you think this multivalent bringing of culture to capital—or rather the renegotiation of “the relationship between commerce and the ivory tower” (8)—is unseemly, Aiden and Michel provide an origin story to show how futile this separation has been.

    But the desire for written records has always accompanied economic activity, since transactions are meaningless unless you can clearly keep track of who owns what. As such, early human writing is dominated by wheeling and dealing: a menagerie of bets, chits, and contracts. Long before we had the writings of prophets, we had the writing of profits. (9)

    And no doubt this is true: culture is always already bound up with economy. But the full-throated embrace of culturomics is not a vision of interrogating and reimagining the relationship between economic systems, culture and everyday life;[1] rather it signals the acceptance of the idea of culture as transactional business model. While Google has long imagined itself as a company with a social mission, it is a publicly held company that will be punished by investors if it neglects its bottom line of increasing the engagement of eyeballs on advertisements. The N-gram Viewer does not make Google money, but it perhaps increases public support for the company’s larger book-scanning initiative, which Google clearly sees as a valuable enough project to invest many years of labor and millions of dollars to defend in court.

    This vision of the humanities is transactional in another way as well. While much of Uncharted is an attempt to demonstrate the profound, game-changing implications of the N-gram Viewer, there is a distinctly small-questions, cocktail-party-conversation feel to this type of inquiry, one that seems, ironically, more useful in preparing ABD humanities and social science PhDs for jobs in the service industry than in training them for the future of academia. It might be more precise to say that the N-gram Viewer is architecturally designed for small answers rather than small questions. All is resolved through linear projection: a winner and a loser, or stasis. This is a vision of research where the precise nature of the mediation (what books have been excluded? what is the effect of treating all books as equally revealing of human culture? what about those humans whose voices have been systematically excluded from the written record?) is ignored, and where the actual analysis of books, and indeed the books themselves, are black-boxed from the researcher.

    Uncharted speaks to the perils of doing research under the cloud of existential erasure and to the failure of academics to lead with a different vision of the possibilities of quantification. Collaborating with the wealthy corporate titans of data collection requires an acceptance of these companies’ own existential mandate: make tons of money by monetizing a dizzying array of human activities while speculatively reimagining the future to attempt to maintain that cash flow. For Google, this is a vision where all activities, not just “googling,” are collected and analyzed in a seamlessly updating centralized system. Cars, thermostats, video games, photos, businesses are integrated not for the public benefit but because of the power of scale to sell or rent or advertise products. Data is promised as a deterministic balm for the unknowability of life, and Google’s participation in academic research gives the company the credibility to be your corporate (sen.se) mother. What, might we imagine, are the speculative possibilities of networked data not beholden to shareholder value?
    _____

    Benjamin Haber is a PhD candidate in Sociology at CUNY Graduate Center and a Digital Fellow at The Center for the Humanities. His current research is a cultural and material exploration of emergent infrastructures of corporeal data through a queer theoretical framework. He is organizing a conference called “Queer Circuits in Archival Times: Experimentation and Critique of Networked Data” to be held in New York City in May 2016.


    _____

    Notes

    [1] A project desperately needed in academia, where terms like “neoliberalism,” “biopolitics” and “late capitalism” more often than not are used briefly at the end of a short section on implications rather than being given the critical attention and nuanced intentionality that they deserve.

    Works Cited

    Savage, Mike, and Roger Burrows. 2007. “The Coming Crisis of Empirical Sociology.” Sociology 41 (5): 885–99.

    Thrift, Nigel. 2005. Knowing Capitalism. London: SAGE.

  • "Still Ahead Somehow:" Paul Amar’s The Security Archipelago

    "Still Ahead Somehow:" Paul Amar’s The Security Archipelago

    a review of Paul Amar’s The Security Archipelago: Human-Security States, Sexuality Politics, and the End of Neoliberalism (Durham and London: Duke University Press, 2013)

    by Neel Ahuja

    One of the most widely reported news stories of the 2011 revolution in Egypt involved sexual assaults and other physical attacks on women in Cairo’s Tahrir Square, where mass protests led to the ouster of former President Hosni Mubarak. Paul Amar’s singular book The Security Archipelago explores, among other topics, the Egyptian military council’s attempt to burnish its own authority to “rescue the nation” and its “dignity” by constructing the Arab Spring uprising as a destructive site of violence and moral degradation (3). Mirroring the racialized discourse of international news media who invoked animal metaphors to represent dissent at Tahrir as an articulation of pathological urban violence and frenzy (203), the counter-revolutionary campaign allowed the military to arrest and incarcerate protesters by associating them with demeaned markers of class status and sexuality.

    For Amar, this conjunction of moralizing statism and the militarization of social life is indicative of a particular governmental form he calls “human security,” a set of transnational juridical, political, economic, and police practices and discourses that become especially legible in sites of urban crisis and struggle. Amar names four interlocking logics that constitute human security: evangelical humanitarianism, police paramilitarism, juridical personalism, and workerist empowerment (7). He unveils these logics by constructing a dense analysis of security politics linking the megacities of Cairo and Rio de Janeiro.

    The chapters explore crisis moments that reveal connections between the militarization of police, the development of urban planning and development policy, tourism, the management of labor processes, and racialized and gendered struggles over rights and citizenship. Such connections arise in crises around public protest, attempts by municipal and national authorities to market heritage (in the form of Islamic heritage architecture or samba music) to tourists, coalitions between labor and evangelical Christian groups to combat trafficking and corruption, the attempts of 9/11 plotter Muhammad Atta to develop a theory of Islamic urban planning, and the policing of city space during major international development meetings. These wide-ranging case studies ground the book’s critical security analysis in sites of struggle, making important contributions to the understanding of the spread of urban violence and progressive social policy in Brazil and the rise of left-right coalitions in Islamic urban planning and revolutionary uprisings in Egypt.

    Throughout the book, public contestation over the permissible limits of urban sexuality emerges as a key factor inciting securitization. It serves as a marker of cultural tradition, a policed indicator of urban space and capital networking, and a marker of political dissent. For Amar, the new subjects of security “are portrayed as victimized by trafficking, prostituted by ‘cultures of globalization,’ sexually harassed by ‘street’ forms of predatory masculinity, or ‘debauched’ by liberal values” (15). In this way, the “human” at the heart of “human security” is a figure rendered precarious by the public articulation of sexuality with processes of economic and social change.

    If this method of transnational scholarship showcases the unique strengths of Amar’s interdisciplinary training, Portuguese and Arabic language skills, and past work as a development specialist, it brilliantly articulates a set of connections between the cities of Rio and Cairo evident in their parallel experiences of neoliberal economic policies, redevelopment, militarization of policing, NGO intervention, and rise as significant “semiperipheral” or “first-third-world” metropoles. In contrast to racialized international relations and conflict studies scholarship that fails continually to break from the mythologies of the clash of civilizations, Amar’s book offers a fascinating analysis of how religious politics, policing, and workerist humanisms interface in the urban crises of two megacities whose representation is often overwritten by stereotyped descriptions of either oriental despotism (Cairo) or tropicalist transgression (Rio).

    These cities, in fact, share geographic, economic, and political connections that justify what Amar describes as an archipelagic method: “The practices, norms, and institutional products of [human security] struggles have… traveled across an archipelago, a metaphorical island chain, of what the private security industry calls ‘hotspots’–enclaves of panic and laboratories of control–the most hypervisible of which have emerged in Global South megacities” (15-16). The security archipelago is also a formation that includes but transcends the state; it is “parastatal” and reflects the ways in which states in the Global South, NGO activists, and state attempts to humanize security interventions have produced a set of governmentalities that attempt to incorporate and govern public challenges to austerity politics and militarism.

    As such, Amar’s book offers a two-pronged challenge to dominant theories of neoliberalism. First, it clarifies that although many of the wealthy countries still battle over a politics of austerity, the so-called Washington Consensus combining financial deregulation, privatization, and reduction of trade barriers no longer holds sway internationally or even in its spaces of origin. Indeed, Amar claims that even the Beijing Consensus — the turn since the 1990s to a strong state hand in development investment combined with the controlled growth of highly regulated markets — is being supplanted by the parastatal form of the human security regime. Second, this line of thought requires for Amar a methodological shift. Amar claims, “we can envision an end to the term neoliberalism as an overburdened and overextended interpretive lens for scholars” given “the demise, in certain locations and circuits, of a hegemonic set of market-identified subjects, locations, and ideologies of politics” (236). The Security Archipelago offers an alternative to theories of globalization that privilege imperial states as the primary forces governing the production of transnational power dynamics. Without making the common move of romanticizing a static vision of either locality or indigeneity in the conceptualization of resistance to globalization, Amar locates in the semiperiphery a crossroads between the forces of national development and transnational capital. It is in this crossroads where resistances to the violence of austerity are parlayed into new security regimes in the name of the very human endangered by capitalism’s market authoritarianism.

    It is notable that the analysis of sexuality, with its attendant moral incitements to security, largely drops out of Amar’s concluding analysis of the debates on the end of neoliberalism. He does mention sexuality when proclaiming a shift from a consuming subject to a worker in the postneoliberal transition: “postneoliberal work centers more on the fashioning of moralization, care, humanization, viable sexualities, and territories that can be occupied. And the worker can see production as the collective work of vigilance and purification, which all too often is embedded through paramilitarization and enforcement practices” (243). While the book expertly reveals the emphasis on emergent forms of moral labor and securitizing care in the public regulation of sexuality, it also documents that moral crises and policing around the sexuality of samba, for example, are layered by the nexus of gentrification, private redevelopment, and transnational tourism that commonly attract the label neoliberalism. This point does not directly undermine Amar’s argument but suggests that further discussion of sexuality’s relation to human security regimes might engender an analytic revision of the notion of postneoliberal transition. The public articulation of sexuality as the site of urban securitization might rather reveal the regeneration of intersecting consumption forms and affective labors of logics of marketization and securitization that are divided geographically but dynamically interrelated.

    The fact that Amar’s book raises this problem reveals the significance of the study for moving scholarship on sexuality, security, and globality forward – as individual objects of study and as intertwined ones. As scholars focusing, for example, on homonationalist marriage practices in the global north continue to use the analytic frame of neoliberalism, Amar’s study might press for how the moral articulation of the marriage imperative exerts a securitizing force that transcends market logics. Similarly, Amar’s focus on both sexuality and the semiperiphery offers significant geographic and methodological disruptions to the literatures on neoliberalism, the rise of East Asian financial capital, and crisis theory. His unique method challenges interdisciplinary social theorizing to grapple with the archipelagic nature of contemporary forces of social precarity and securitization.

    Neel Ahuja is associate professor of postcolonial studies in the Department of English and Comparative Literature at UNC. He is the author of the forthcoming Bioinsecurities: Disease Interventions, Empire, and the Government of Species (Duke UP).

  • Curatorialism as New Left Politics

    Curatorialism as New Left Politics

    by David Berry

    ~
    It is often argued that the left is increasingly unable to speak a convincing narrative in the digital age. Caught between the neoliberal language of contemporary capitalism and its political articulations linked to economic freedom and choice, and a welfare statism that appears counter-intuitively unappealing to modern political voters and supporters, there is often claimed to be a lacuna in the political imaginary of the left. Here, I want to explore a possible new articulation for a left politics that moves beyond the seemingly technophilic and technological determinisms of left accelerationisms and the related contradictions of “fully automated luxury communism”. Broadly speaking, these positions tend to argue for a post-work, post-scarcity economy within a post-capitalist society based on automation, technology and cognitive labour. Accepting that these are simplifications of the arguments of the proponents of these two positions, the aim is to move beyond the assertion that the embracing of technology itself solves the problem of a political articulation that has to be accepted and embraced by a broader constituency within the population. Technophilic politics is not, of itself, going to be enough to convince an electorate, nor a population, to move towards leftist conceptualisations of possible restructuring or post-capitalist economics. Moreover, it seems to me that the abolition of work is not a desirable political programme for the majority of the population, nor does a seemingly utopian notion of post-scarcity economics make much sense under conditions of neoliberal economics. Thus these programmes are simultaneously too radical and not radical enough. I also want to move beyond the staid and unproductive arguments often articulated in the UK between a left-Blairism and a more statist orientation associated with a return to traditional left concerns personified in Ed Miliband.

    Instead, I want to consider what a politics of the singularity might be, that is, to follow Fredric Jameson’s conceptualisation of the singularity as “a pure present without a past or a future” such that,

    today we no longer speak of monopolies but of transnational corporations, and our robber barons have mutated into the great financiers and bankers, themselves de-individualized by the massive institutions they manage. This is why, as our system becomes ever more abstract, it is appropriate to substitute a more abstract diagnosis, namely the displacement of time by space as a systemic dominant, and the effacement of traditional temporality by those multiple forms of spatiality we call globalization. This is the framework in which we can now review the fortunes of singularity as a cultural and psychological experience (Jameson 2015: 128).

    That is, the removal of temporality from a specific site of politics as such, or the successful ideological deployment of a new framework for understanding oneself within temporality, whether through the activities of the media industries, or through the mediation of digital technologies and computational media. This has the effect of transforming temporal experience into new spatial experiences, whether through translating media, or through the intensification of a now that constantly presses upon us and pushes away both historical time and the possibility of political articulations of new forms of futurity. Thus the politics of singularity points to spatiality as the key site of political deployment within neoliberalism, and by this process undercuts the left’s arguments, which draw simultaneously on a shared historical memory of hard-won rights and benefits and on the notion of political action to fight for a better future. Indeed, one might ask if the green critique of the anthropocene, with its often misanthropic articulations, in some senses draws on some notion of a singularity produced by humanity which has undercut the time of geological or planetary scale change. The only option remaining then is to seek to radically circumscribe, if not outline, a radical social imaginary that does not include humans in its conception, and hence to return the planet to the stability of a geological time structure no longer undermined by human activity. Similarly, neoliberal arguments over political imaginaries highlight the intensity and simultaneity of the present mode of capitalist competition and the individualised (often debt-funded) means of engagement with economic life.

    What then might be a politics of the singularity which moved beyond politics that drew on forms of temporality for its legitimation? In other words, how could a politics of spatiality be articulated and deployed which re-enabled the kind of historical project towards a better future for all that was traditionally associated with leftist thought?

    To do this I want to think through the notion of the “curator” that Jameson disparagingly thinks is an outcome of the singularity in terms of artistic practice and experience. He argues that today we are faced with the “emblematic figure of the curator, who now becomes the demiurge of those floating and dissolving constellations of strange objects we still call art.” Further,

    there is a nastier side of the curator yet to be mentioned, which can be easily grasped if we look at installations, and indeed entire exhibits in the newer postmodern museums, as having their distant and more primitive ancestors in the happenings of the 1960s—artistic phenomena equally spatial, equally ephemeral. The difference lies not only in the absence of humans from the installation and, save for the curator, from the newer museums as such. It lies in the very presence of the institution itself: everything is subsumed under it, indeed the curator may be said to be something like its embodiment, its allegorical personification. In postmodernity, we no longer exist in a world of human scale: institutions certainly have in some sense become autonomous, but in another they transcend the dimensions of any individual, whether master or servant; something that can also be grasped by reminding ourselves of the dimension of globalization in which institutions today exist, the museum very much included (Jameson 2015: 110-111).

    However, Jameson himself makes an important link between spatiality as the site of a contestation and the making-possible of new spaces, something curatorial practice, with its emphasis on the construction, deployment and design of new forms of space points towards. Indeed, Jameson argues in relation to theoretical constructions, “perhaps a kind of curatorial practice, selecting named bits from our various theoretical or philosophical sources and putting them all together in a kind of conceptual installation, in which we marvel at the new intellectual space thereby momentarily produced” (Jameson 2015: 110).

    In contrast, the question for me concerns the radical possibilities suggested by this event-like construction of new spaces, and how they can be used to reverse or destabilise the time-axis manipulation of the singularity. The question then becomes: could we tentatively think in terms of a curatorial political practice, which we might call curatorialism? Indeed, could we fill out the ways in which this practice could aim to articulate, assemble and, more importantly, provide a site for a renewal and (re)articulation of left politics? How could this politics be mobilised into the nitty-gritty of actual political practice, policy, and activist politics, and engender the affective relation that inspires passion around a political programme and suggests itself to the kinds of singularities that inhabit contemporary society? To borrow the language of the singularity itself, how could one articulate a new disruptive left politics?

    [image: “dostoevsky on curation” – source: Curate Meme]

    At this early stage of thinking, it seems to me that in the first case we might think about how curatorialism points towards the need to move away from a concern with internal consistency in the development of a political programme. Curatorialism gathers its strength from the way in which it provides a political pluralism, an assembling of multiple moments into a political constellation that takes into account and articulates its constituent moments. This is the first step in the mapping of the space of a disruptive left politics. This is the development of a spatial politics in as much as, crucially, the programme calls for a weaving together of multiplicity into this constellational form. Secondly, we might think about the way in which this spatial diagram can then be translated into a temporal project, that is, the transformation of a mapping programme into a political programme linked to social change. This requires the capture and illumination of the multiple movements of each moment, and their re-articulation through a process of reframing the condition of possibility in each constellational movement in terms of a political economy that draws from the historical possibilities the left has made possible previously, but also the need for new concepts and ideas to link the politics of necessity to the huge capacity of a left project towards mitigating and/or replacing a neoliberal capitalist economic system. Lastly, it seems to me that to be a truly curatorial politics means to link to the singularity itself as a force of strength for left politics, such that the development of a mode of articulation of individual political needs is made possible through the curatorial mode, and through the development of disruptive left frameworks that link individual need, social justice, institutional support, and a left politics that reconnects the passions of interests to the passion for justice and equality with the singularity’s concern with intensification.[1] This can, perhaps, be thought of as the replacement of a left project of ideological purity with a return to the Gramscian notions of strategy and tactics through the deployment of what he called a passive revolution, mobilised partially in the new forms of civil society created through collectivities of singularities within social media, computational devices and the new infrastructures of digital capitalism, but also within older forms of social institutions, political contestations and education.[2]
    _____

    David M. Berry is Reader in the School of Media, Film and Music at the University of Sussex. He writes widely on computation and the digital and blogs at Stunlaw. He is the author of Critical Theory and the Digital, The Philosophy of Software: Code and Mediation in the Digital Age, and Copy, Rip, Burn: The Politics of Copyleft and Open Source, editor of Understanding Digital Humanities, and co-editor of Postdigital Aesthetics: Art, Computation And Design. He is also a Director of the Sussex Humanities Lab.

    _____

    Notes

    [1] This remains a tentative articulation, inspired by the power of knowledge-based economies both to create the conditions of singularity through the action of time-axis manipulation (media technologies) and, arguably, to provide the countervailing tools, spaces and practices for the contestation of a singularity connected only with a neoliberal political moment. That is, how can these new concepts and ideas, together with the frameworks that are suggested in their mobilisation, provide new means of contestation, sociality and broader connections of commonality and political praxis?

    [2] I leave to a later paper the detailed discussion of the possible subjectivities both in and for themselves within a framework of a curatorial politics. But here I am gesturing towards political parties as the curators of programmes of political goals and ends, able then to use the state as a curatorial enabler of such a political programme. This includes the active development of the individuation of political singularities within such a curatorial framework.

    Bibliography

    Jameson, Fredric. 2015. “The Aesthetics of Singularity.” New Left Review, No. 92 (March-April 2015).

    Back to the essay

  • A Dark, Warped Reflection

    A Dark, Warped Reflection

    a review of Charlie Brooker, writer & producer, Black Mirror (BBC/Zeppotron, 2011- )
    by Zachary Loeb
    ~

    Depending upon which sections of the newspaper one reads, it is very easy to come away with two rather conflicting views of the future. If one begins the day by reading the headlines in the “International News” or “Environment” sections, it is easy to feel overwhelmed by a sense of anxiety and impending doom; however, if one instead reads the sections devoted to “Business” or “Technology” it is easy to feel confident that there are brighter days ahead. We are promised that soon we shall live in wondrous “Smart” homes where all of our devices work together tirelessly to ensure our every need is met, while drones deliver our every desire and we enjoy ever more immersive entertainment experiences, all of it providing plenty of wondrous investment opportunities…unless, of course, another economic collapse or climate change should spoil these fantasies. Though the juxtaposition between newspaper sections can be jarring, an element of anxiety can generally be detected from one section to the next – even within the “Technology” pages. After all, our devices may have filled our hours with apps and social networking sites, but this does not necessarily mean that they have left us more fulfilled. We have been supplied with all manner of answers, but this does not necessarily mean we had first asked any questions.

    [youtube https://www.youtube.com/watch?v=pimqGkBT6Ek&w=560&h=315]

    If you could remember everything, would you want to? If a cartoon bear lampooned the pointlessness of elections, would you vote for the bear? Would you participate in psychological torture, if the person being tortured was a criminal? What lengths would you go to if you could not move on from a loved one’s death? These are the types of questions posed by the British television program Black Mirror, wherein anxiety about the technologically riddled future, be it the far future or next week, is the core concern. The paranoid pessimism of this science-fiction anthology program is not a result of a fear of the other or of panic at the prospect of nuclear annihilation – but is instead shaped by nervousness at the way we have become strangers to ourselves. There are no alien invaders or occult phenomena, nor is there a suit-wearing narrator who makes sure that the viewers understand the moral of each story. Instead what Black Mirror presents is dread – it holds up a “black mirror” (think of any electronic device when the power on the screen is off) to society and refuses to flinch at the reflection.

    Granted, this does not mean that those viewing the program will not flinch.

    [And Now A Brief Digression]

    Before this analysis goes any further it seems worthwhile to pause and make a few things clear. Firstly, and perhaps most importantly, the intention here is not to pass a definitive judgment on the quality of Black Mirror. While there are certainly arguments that can be made regarding how “this episode was better than that one” – that is not the concern here. Nor, for that matter, is the goal to scoff derisively at Black Mirror and simply dismiss it – the episodes are well written, interestingly directed, and strongly acted. Indeed, that the program can lead to discussion and introspection is perhaps the highest praise that one can bestow upon a piece of widely disseminated popular culture. Secondly, and perhaps even more importantly (depending on your opinion), some of the episodes of Black Mirror rely upon twists and surprises in order to have their full impact upon the viewer. Oftentimes people find it highly frustrating to have these moments revealed to them ahead of time, and thus – in the name of fairness – let this serve as an official “spoiler warning.” The plots of each episode will not be discussed in minute detail in what follows – as the intent here is to consider broader themes and problems – but if you hate “spoilers” you should consider yourself warned.

    [Digression Ends]

    The problem posed by Black Mirror is that in building nervous narratives about the technological tomorrow the program winds up replicating many of the shortcomings of contemporary discussions around technology – shortcomings that make such an unpleasant future seem all the more plausible. While Black Mirror may resist the obvious morality plays of a show like The Twilight Zone, the morals of its episodes may be far less oppositional than they at first seem. The program draws much of its emotional heft by narrowly focusing its stories upon specific individuals, but in so doing the show may function as a sort of precognitive “usage manual,” one that advises “if a day should arrive when you can technologically remember everything…don’t be like the guy in this episode.” The episodes of Black Mirror may call upon viewers to look askance at the future it portrays, but the show also encourages the sort of droll, inured acceptance that is characteristic of the people in each episode of the program. Black Mirror is a sleek, hip piece of entertainment, another installment in the contemporary “golden age of television,” and it risks becoming just another program that can be streamed onto any of a person’s black-mirror-like screens. The program is itself very much a part of the same culture industry of the YouTube and Twitter era that the show seems to vilify – it is ready-made for “binge watching.” The program may be disturbing, but its indictments are soft – allowing viewers a distance that permits them to say aloud “I would never do that” even as they are subconsciously unsure.

    Thus, Black Mirror appears as a sort of tragic confirmation of the continuing validity of Jacques Ellul’s comment:

    “One cannot but marvel at an organization which provides the antidote as it distills the poison.” (Ellul, 378)

    For the tales that are spun out in horrifying (or at least discomforting) detail on Black Mirror may appear to be a salve for contemporary society’s technological trajectory – but the show is also a ready-made product for the very age that it is critiquing. A salve that does not solve anything, a cultural shock absorber that allows viewers to endure the next wave of shocks. It is a program that demands viewers break away from their attachment to their black mirrors even as it encourages them to watch another episode of Black Mirror. This is not to claim that the show lacks value as a critique; however, the show is less a radical indictment than some may be tempted to give it credit for being. The discomfort people experience while watching the show easily becomes a masochistic penance that allows them to continue walking down the path to the futures outlined in the show. Black Mirror provides the antidote, but it also distills the poison.

    That, however, may be the point.

    [Interrogation 1: Who Bears Responsibility?]

    Technology is, of course, everywhere in Black Mirror – in many episodes it is as much a character as the humans who are trying to come to terms with what the particular device means. In some episodes (“The National Anthem” or “The Waldo Moment”) the technologies that feature prominently are those that would be quite familiar to contemporary viewers: social media platforms like YouTube, Twitter, Facebook and the like. Whilst in other episodes (“The Entire History of You,” “White Bear” and “Be Right Back”) the technologies on display are new and different: an implantable device that records (and can play back) all of one’s memories, something that can induce temporary amnesia, a company that has developed a being that is an impressive mix of robotics and cloning. The stories that are told in Black Mirror, as was mentioned earlier, focus largely on the tales of individuals – “Be Right Back” is primarily about one person’s grief – and though this is a powerful story-telling device (and lest there be any confusion – many of these are very powerfully told stories) one of the questions that lingers unanswered in the background of many of these episodes is: who is behind these technologies?

    In fairness, Black Mirror would likely lose some of its impact if it were to delve deeply into this question. If “The Entire History of You” provided a sci-fi faux-documentary foray into the company that had produced the memory-recording “grains” it would probably not have felt as disturbing as the tale of abuse, sex, violence and obsession that the episode actually presents. Similarly, the piece of science-fiction grade technology upon which “White Bear” relies functions well in the episode precisely because the key device makes only a rather brief appearance. And yet here an interesting contrast emerges between the episodes set in, or closely around, the present and those that are set further down the timeline – for in the episodes that rely on platforms like YouTube, the viewer technically knows who the interests are behind the various platforms. The episode “The Entire History of You” may be intensely disturbing, but what company was it that developed and brought the “grains” to market? What biotechnology firm supplies the grieving spouse in “Be Right Back” with the robotic/clone of her deceased husband? Who gathers the information from these devices? Where does that information live? Who is profiting? These are important questions that go unanswered, largely because they go unasked.

    Of course, it can be simple to disregard these questions. Dwelling upon them certainly does take something away from the individual episodes, and such focus diminishes the entertainment quality of Black Mirror. This is fundamentally why it is so essential to insist that these critical questions be asked. The worlds depicted in episodes of Black Mirror did not “just happen” but are instead the result of layers upon layers of decisions and choices that have wound up shaping these characters’ lives – and it is questionable how much say any of these characters had in those decisions. This is shown in stark relief in “The National Anthem,” in which a befuddled prime minister cannot come to grips with the way that a threat uploaded to YouTube, along with shifts in public opinion as reflected on Twitter, has come to require him to commit a grotesque act; his despair at what he is being compelled to do is a reflection of the new world of politics created by social media. In some ways it is tempting to treat episodes like “The Entire History of You” and “Be Right Back” as retorts to an unflagging adoration for “innovation,” “disruption,” and “permissionless innovation” – for the episodes can be read as a warning that just because we can record and remember everything does not necessarily mean that we should. And yet the presence of such a cultural warning does not mean that such devices will not eventually be brought to market. The denizens of the worlds of Black Mirror are depicted as being at the mercy of the technological current.

    Thus, and here is where the problem truly emerges, the episodes can be treated as simple warnings that state “well, don’t be like this person.” After all, the world of “The Entire History of You” seems to be filled with people who – unlike the obsessive main character – can use the “grain” productively; on a similar note it is easy to imagine many people pointing to “Be Right Back” and saying that the idea of a robotic/clone could be wonderful – just don’t use it to replicate the recently dead; and of course any criticism of social media in “The Waldo Moment” or “The National Anthem” can be met with a retort regarding a blossoming of free expression and the ways in which such platforms can help bolster new protest movements. And yet, similar to the sad protagonist in the film Her, the characters in the storylines of Black Mirror rarely appear as active agents in relation to technology, even when they are depicted as truly “choosing” a given device. Rather, they have simply been reduced to consumers – whether they are consumers of social media, political campaigns, or an amusement park where the “show” is a person being psychologically tortured day after day.

    This is not to claim that there should be an Apple or Google logo prominently displayed on the “grain” or on the side of the stationary bikes in “Fifteen Million Merits,” nor is it to argue that the people behind these devices should be depicted as cackling corporate monsters – but it would be helpful to have at least some image of the people behind these devices. After all, there are people behind these devices. What were they thinking? Were they not aware of the potential risks? Did they not care? Who bears responsibility? In focusing on small-scale human stories Black Mirror ignores the fact that there is another all too human story behind all of these technologies. What the program thus risks replicating is a sort of technological determinism that seems to have nestled itself into the way that people talk about technology these days – a sentiment in which people have no choice but to accept (and buy) what technology firms are selling them. It is not so much, to borrow a line from Star Trek, that “resistance is futile” as that nobody seems to have even considered resistance to be an option in the first place. Granted, we have seen in the not-too-distant past that such a sentiment is simply not true – Google Glass was once presented as inevitable, but public push-back helped lead to Google (at least temporarily) shelving the device. Alas, one of the most effective ways of convincing people that they are powerless to resist is to bludgeon them with cultural products that tell them they are powerless to resist. Or, better yet, to convince them that they will actually like being “assimilated.”

    Therefore, the key thing to mull over after watching an episode of Black Mirror is not what is presented in the episode but what has been left out. Viewers need to ask the questions the show does not present: who is behind these technologies? What decisions have led to the societal acceptance of these technologies? Did anybody offer resistance to these new technologies? The “6 Questions to Ask of New Technology” posed by media theorist Neil Postman may be of use for these purposes, as might some of the questions posed in Riddled With Questions. The emphasis here is to point out that a danger of Black Mirror is that the viewer winds up being just like one of the characters: a person who simply accepts the technologically wrought world in which they are living without questioning those responsible and without thinking that opposition is possible.

    [Interrogation 2: Utopia Unhinged is not a Dystopia]

    “Dystopia” is a term that has become a fairly prominent feature in popular entertainment today. Bookshelves are filled with tales of doomed futures, and many of these titles (particularly those aimed at the “young adult” audience) have a tendency to eventually reach cinema screens. Of course, apocalyptic visions of the future are not limited to the big screen – as numerous television programs attest. For many, it is tempting to use a term such as “dystopia” when discussing the futures portrayed in Black Mirror, and yet the usage of such a term seems rather misleading. True, at least one episode (“Fifteen Million Merits”) is clearly meant to evoke a dystopian far future, but to use that term in relation to many of the other installments seems a bit hyperbolic. After all, “The Waldo Moment” could be set tomorrow and frankly “The National Anthem” could have been set yesterday. To say that Black Mirror is a dystopian show risks taking an overly simplistic stance towards technology in the present as well as towards technology in the future – if the claim is that the show is thoroughly dystopian, then how does one account for the episodes that may as well be set in the present? One can argue that the state of the present world is far from ideal, one can cast a withering gaze in the direction of social media, one can truly believe that the current trajectory (if not altered) will lead in a negative direction…and yet one can believe all of these things and still resist the urge to label contemporary society a dystopia. Doomsaying can be an enjoyably nihilistic way to pass an afternoon, but it makes for a rather poor critique.

    It may be that what Black Mirror shows is how a dystopia can actually be a private hell instead of a societal one (which would certainly seem true of “White Bear” or “The Entire History of You”), or perhaps what Black Mirror indicates is that a derailed utopia is not automatically a dystopia. Granted, a major criticism of Black Mirror could emphasize that the show has a decidedly “industrialized world/Western world” focus – we do not see the factories where “grains” are manufactured, and the varieties of new smart phones seen in the program suggest that the e-waste must be piling up somewhere. In other words – the derailed utopia of some could still be an outright dystopia for countless others. That the characters in Black Mirror do not seem particularly concerned with who assembled their devices is, alas, a feature all too characteristic of technology users today. Nevertheless, to restate the problem, the issue is not so much the threat of dystopia as it is the continued failure of humanity to use its impressive technological ingenuity to bring about a utopia (or even something “better” than the present). In some ways this provides an echo of Lewis Mumford’s comment, in The Story of Utopias, that:

    “it would be so easy, this business of making over the world if it were only a matter of creating machinery.” (Mumford, 175)

    True, the worlds of Black Mirror, including the ones depicting the world of today, show that “creating machinery” actually is an easy way “of making over the world” – however, this does not automatically push things in the utopian direction for which Mumford was pining. Instead what is on display is another installment of the deferred potential of technology.

    The term “another” is not used incidentally here, but is specifically meant to point to the fact that it is nothing new for people to see technology as a source of hope…and then to woefully recognize the way in which such hopes have been dashed time and again. Such a sentiment is visible in much of Walter Benjamin’s writing about technology – writing, as he was, after the mechanized destruction of WWI and on the eve of the technologically enhanced barbarity of WWII. In Benjamin’s essay “Eduard Fuchs, Collector and Historian” he criticizes a strain in positivist/social democratic thinking that had emphasized that technological developments would automatically usher in a more just world, when in fact such attitudes woefully failed to appreciate the scale of the dangers. This leads Benjamin to note:

    “A prognosis was due, but failed to materialize. That failure sealed a process characteristic of the past century: the bungled reception of technology. The process has consisted of a series of energetic, constantly renewed efforts, all attempting to overcome the fact that technology serves this society only by producing commodities.” (Benjamin, 266)

    The century about which Benjamin was writing was not the twenty-first century, and yet these comments about “the bungled reception of technology” and technology which “serves this society only by producing commodities” seem a rather accurate description of the worlds depicted by Black Mirror. And yes, that certainly includes the episodes that are closer to our own day. The point of pulling out this tension, however, is to emphasize not the dystopian element of Black Mirror but to point to the “bungled reception” that is so clearly on display in the program – and by extension in the present day.

    What Black Mirror shows in episode after episode (even in the clearly dystopian one) is the gloomy juxtaposition between what humanity can possibly achieve and what it actually achieves. The tools that could widen democratic participation can be used to allow a cartoon bear to run as a stunt candidate, the devices that allow us to remember the past can ruin the present by keeping us constantly replaying our memories of yesterday, the things that can allow us to connect can make it so that we are unable to ever let go – “energetic, constantly renewed efforts” that all wind up simply “producing commodities.” Indeed, in a tragic-comic turn, Black Mirror demonstrates that amongst the commodities we continue to produce are those that elevate the “bungled reception of technology” to the level of a widely watched and critically lauded television serial.

    The future depicted by Black Mirror may be startling, disheartening and quite depressing, but (except in the cases where the content is explicitly dystopian) it is worth bearing in mind that there is an important difference between a dystopia and a world of people living amidst the continued “bungled reception of technology.” Are the people in “The National Anthem” paving the way for “White Bear” and in turn setting the stage for “Fifteen Million Merits”? It is quite possible. But this does not mean that the “reception of technology” must always be “bungled” – though changing that reception may require altering our attitude towards technology. Here Black Mirror repeats its problematic thrust, for it does not highlight resistance but emphasizes the very attitudes that have “bungled” the reception and which continue to bungle it. Though “Fifteen Million Merits” does feature a character engaging in a brave act of rebellion, this act is immediately used to strengthen the very forces against which the character is rebelling – and thus the episode repeats the refrain “don’t bother resisting, it’s too late anyway.” This is not to suggest that one should focus all one’s hopes upon a farfetched utopian notion, or put faith in a sense of “hope” that is not linked to reality, nor does it mean that one should don sackcloth and begin mourning. Dystopias are cheap these days, but so are the fake utopian dreams that promise a world in which somehow technology will solve all of our problems. And yet, it is worth bearing in mind another comment from Mumford regarding the possibility of utopia:

    “we cannot ignore our utopias. They exist in the same way that north and south exist; if we are not familiar with their classical statements we at least know them as they spring to life each day in our minds. We can never reach the points of the compass; and so no doubt we shall never live in utopia; but without the magnetic needle we should not be able to travel intelligently at all.” (Mumford, 28-29)

    Black Mirror provides a stark portrait of the fake utopian lure that can lead us to a world to which we do not want to go – a world in which the “bungled reception of technology” continues to rule – but in staring horror-struck at where we do not want to go we should not forget to ask where it is that we do want to go. The worlds of Black Mirror are steps in the wrong direction – so ask yourself: what would the steps in the right direction look like?

    [Final Interrogation – Permission to Panic]

    During “The Entire History of You” several characters enjoy a dinner party at which the topic of discussion eventually turns to the benefits and drawbacks of the memory-recording “grains.” Many attitudes towards the “grains” are voiced – ranging from individuals who cannot imagine doing without the “grain” to a woman who has had hers violently removed and who has managed to adjust. While “The Entire History of You” focuses on an obsessed individual who cannot cope with a world in which everything can be remembered, what the dinner party demonstrates is that the same world contains many people who can handle the “grains” just fine. The failed comedian who voices the cartoon bear in “The Waldo Moment” cannot understand why people are drawn to vote for the character he voices – but this does not stop many people from voting for the animated animal. Perhaps most disturbingly, the woman at the center of “White Bear” cannot understand why she is followed by crowds filming her on their smart phones while she is hunted by masked assailants – but this does not stop those filming her from playing an active role in her torture. And so on…and so on…Black Mirror shows that in these horrific worlds there are many people who are quite content with the new status quo. But that not everybody is despairing simply attests to Theodor Adorno and Max Horkheimer’s observation that:

    “A happy life in a world of horror is ignominiously refuted by the mere existence of that world. The latter therefore becomes the essence, the former negligible.” (Adorno and Horkheimer, 93)

    Black Mirror is a complex program, made all the more difficult to consider as the anthology character of the show makes each episode quite different in terms of the issues that it dwells upon. The attitudes towards technology and society that are subtly suggested in the various episodes are in line with the despairing aura that surrounds the various protagonists and antagonists of the episodes. Yet, insofar as Black Mirror advances an ethos it is one of inured acceptance – it is a satire that is both tragedy and comedy. The first episode of the program, “The National Anthem,” is an indictment of a society that cannot tear itself away from the horrors being depicted on screens in a television show that owes its success to keeping people transfixed to horrors being depicted on their screens. The show holds up a “black mirror” to society but what it shows is a world in which the tables are rigged and the audience has already lost – it is a magnificently troubling cultural product that attests to the way the culture industry can (to return to Ellul) provide the antidote even as it distills the poison. Or, to quote Adorno and Horkheimer again (swap out the word “filmgoers” with “tv viewers”):

    “The permanently hopeless situations which grind down filmgoers in daily life are transformed by their reproduction, in some unknown way, into a promise that they may continue to exist. The one needs only to become aware of one’s nullity, to subscribe to one’s own defeat, and one is already a party to it. Society is made up of the desperate and thus falls prey to rackets.” (Adorno and Horkheimer, 123)

    This is the danger of Black Mirror: that it may accustom and inure its viewers to the ugly present it displays while preparing them to fall prey to the “bungled reception” of tomorrow – it inculcates the ethos of “one’s own defeat.” By showing worlds in which people are helpless to do much of anything to challenge the technological society in which they have become cogs, Black Mirror risks perpetuating the sense that the viewers are themselves cogs, that the viewers are themselves helpless. There is an uncomfortable kinship between the tv-viewing characters of “The National Anthem” and the real-world viewer of the episode “The National Anthem” – neither party can look away. Or, to put it more starkly: if you are unable to alter the future, why not simply prepare yourself for it by watching more episodes of Black Mirror? At least that way you will know which characters not to imitate.

    And yet, despite these critiques, it would be unwise to fully disregard the program. It is easy to pull out comments from the likes of Ellul, Adorno, Horkheimer and Mumford that eviscerate a program such as Black Mirror, but it may be more important to ask: given Black Mirror’s shortcomings, what value can the show still have? Here it is useful to recall a comment from Günther Anders (whose pessimism was on par with, or exceeded, that of any of the aforementioned thinkers) – he was referring to the works of Kafka, but the comment is still useful:

    “from great warnings we should be able to learn, and they should help us to teach others.” (Anders, 98)

    This is where Black Mirror can be useful, not as a series that people sit and watch, but as a piece of culture that leads people to put forth the questions that the show jumps over. At its best, what Black Mirror provides is a space in which people can discuss their fears and anxieties about technology without worrying that somebody will, farcically, call them a “Luddite” for daring to have such concerns – and for this reason alone the show may be worthwhile. By highlighting the questions that go unanswered in Black Mirror we may be able to put forth the very queries that are rarely made about technology today. It is true that the reflections seen by staring into Black Mirror are dark, warped and unappealing – but such reflections are only worth something if they compel audiences to rethink their relationships to the black-mirrored surfaces in their lives today, and to those that may be in their lives tomorrow. After all, one can look into a mirror in order to see the dirt on one’s face, or one can look into a mirror out of a narcissistic urge. The program certainly has the potential to provide a useful reflection, but as with the technology depicted in the show, it is all too easy for such a potential reception to be “bungled.”

    If we are spending too much time gazing at black mirrors, is the solution really to stare at Black Mirror?

    The show may be a satire, but if all people do is watch, then the joke is on the audience.

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck. He is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay
    _____

    Works Cited

    • Adorno, Theodor and Horkheimer, Max. Dialectic of Enlightenment: Philosophical Fragments. Stanford: Stanford University Press, 2002.
    • Anders, Günther. Franz Kafka. New York: Hilary House Publishers LTD, 1960.
    • Benjamin, Walter. Walter Benjamin: Selected Writings. Volume 3, 1935-1938. Cambridge: The Belknap Press, 2002.
    • Ellul, Jacques. The Technological Society. New York: Vintage Books, 1964.
    • Mumford, Lewis. The Story of Utopias. Bibliobazaar, 2008.
  • The Automatic Teacher

    The Automatic Teacher

    by Audrey Watters
    ~

    “For a number of years the writer has had it in mind that a simple machine for automatic testing of intelligence or information was entirely within the realm of possibility. The modern objective test, with its definite systemization of procedure and objectivity of scoring, naturally suggests such a development. Further, even with the modern objective test the burden of scoring (with the present very extensive use of such tests) is nevertheless great enough to make insistent the need for labor-saving devices in such work” – Sidney Pressey, “A Simple Apparatus Which Gives Tests and Scores – And Teaches,” School and Society, 1926

    Ohio State University professor Sidney Pressey first displayed the prototype of his “automatic intelligence testing machine” at the 1924 American Psychological Association meeting. Two years later, he submitted a patent for the device and spent the next decade or so trying to market it (to manufacturers and investors, as well as to schools).

    It wasn’t Pressey’s first commercial move. In 1922 he and his wife Luella Cole published Introduction to the Use of Standard Tests, a “practical” and “non-technical” guide meant “as an introductory handbook in the use of tests” aimed to meet the needs of “the busy teacher, principal or superintendent.” By the mid-1920s, the two had over a dozen different proprietary standardized tests on the market, selling a couple of hundred thousand copies a year, along with some two million test blanks.

    Although standardized tests had become commonplace in the classroom by the 1920s, they were already placing a significant burden upon the teachers and clerks tasked with scoring them. Hoping to capitalize yet again on the test-taking industry, Pressey argued that automation could “free the teacher from much of the present-day drudgery of paper-grading drill, and information-fixing – should free her for real teaching of the inspirational.”


    The Automatic Teacher

    Here’s how Pressey described the machine, which he branded as the Automatic Teacher in his 1926 School and Society article:

    The apparatus is about the size of an ordinary portable typewriter – though much simpler. …The person who is using the machine finds presented to him in a little window a typewritten or mimeographed question of the ordinary selective-answer type – for instance:

    To help the poor debtors of England, James Oglethorpe founded the colony of (1) Connecticut, (2) Delaware, (3) Maryland, (4) Georgia.

    To one side of the apparatus are four keys. Suppose now that the person taking the test considers Answer 4 to be the correct answer. He then presses Key 4 and so indicates his reply to the question. The pressing of the key operates to turn up a new question, to which the subject responds in the same fashion. The apparatus counts the number of his correct responses on a little counter to the back of the machine…. All the person taking the test has to do, then, is to read each question as it appears and press a key to indicate his answer. And the labor of the person giving and scoring the test is confined simply to slipping the test sheet into the device at the beginning (this is done exactly as one slips a sheet of paper into a typewriter), and noting on the counter the total score, after the subject has finished.

    The above paragraph describes the operation of the apparatus if it is being used simply to test. If it is to be used also to teach then a little lever to the back is raised. This automatically shifts the mechanism so that a new question is not rolled up until the correct answer to the question to which the subject is responding is found. However, the counter counts all tries.

    It should be emphasized that, for most purposes, this second set is by all odds the most valuable and interesting. With this second set the device is exceptionally valuable for testing, since it is possible for the subject to make more than one mistake on a question – a feature which is, so far as the writer knows, entirely unique and which appears decidedly to increase the significance of the score. However, in the way in which it functions at the same time as an ‘automatic teacher’ the device is still more unusual. It tells the subject at once when he makes a mistake (there is no waiting several days, until a corrected paper is returned, before he knows where he is right and where wrong). It keeps each question on which he makes an error before him until he finds the right answer; he must get the correct answer to each question before he can go on to the next. When he does give the right answer, the apparatus informs him immediately to that effect. If he runs the material through the little machine again, it measures for him his progress in mastery of the topics dealt with. In short the apparatus provides in very interesting ways for efficient learning.

    A video from 1964 shows Pressey demonstrating his “teaching machine,” including the “reward dial” feature that could be set to dispense a candy once a certain number of correct answers were given:

    [youtube https://www.youtube.com/watch?v=n7OfEXWuulg?rel=0]

    Market Failure

    UBC’s Stephen Petrina documents the commercial failure of the Automatic Teacher in his 2004 article “Sidney Pressey and the Automation of Education, 1924–1934.” According to Petrina, Pressey started looking for investors for his machine in December 1925, “first among publishers and manufacturers of typewriters, adding machines, and mimeograph machines, and later, in the spring of 1926, extending his search to scientific instrument makers.” He approached at least six Midwestern manufacturers in 1926, but no one was interested.

    In 1929, Pressey finally signed a contract with the W. M. Welch Manufacturing Company, a Chicago-based company that produced scientific instruments.

    Petrina writes:

    After so many disappointments, Pressey was impatient: he offered to forgo royalties on two hundred machines if Welch could keep the price per copy at five dollars, and he himself submitted an order for thirty machines to be used in a summer course he taught school administrators. A few months later he offered to put up twelve hundred dollars to cover tooling costs. Medard W. Welch, sales manager of Welch Manufacturing, however, advised a “slower, more conservative approach.” Fifteen dollars per machine was a more realistic price, he thought, and he offered to refund Pressey fifteen dollars per machine sold until Pressey recouped his twelve-hundred-dollar investment. Drawing on nearly fifty years experience selling to schools, Welch was reluctant to rush into any project that depended on classroom reforms. He preferred to send out circulars advertising the Automatic Teacher, solicit orders, and then proceed with production if a demand materialized.


    The demand never really materialized, and even if it had, the manufacturing process – getting the device to market – was plagued with problems, caused in part by Pressey’s constant demands to redefine and retool the machines.

    The stress from the development of the Automatic Teacher took an enormous toll on Pressey’s health, and he had a breakdown in late 1929. (He was still teaching, supervising courses, and advising graduate students at Ohio State University.)

    The devices did finally ship in April 1930, but the sales price proved cost-prohibitive: $15 was, as Petrina notes, “more than half the annual cost ($29.27) of educating a student in the United States in 1930.” Welch could not sell the machines, and ceased production with 69 of the original run of 250 devices still in stock.

    Pressey admitted defeat. In a 1932 School and Society article, he wrote: “The writer is regretfully dropping further work on these problems. But he hopes that enough has been done to stimulate other workers.”

    But Pressey didn’t really abandon the teaching machine. He continued to present his research at APA meetings, and he did write in a 1964 article, “Teaching Machines (And Learning Theory) Crisis,” that “Much seems very wrong about current attempts at auto-instruction.”

    Indeed.

    Automation and Individualization

    In his article “Toward the Coming ‘Industrial Revolution’ in Education” (1932), Pressey wrote:

    “Education is the one major activity in this country which is still in a crude handicraft stage. But the economic depression may here work beneficially, in that it may force the consideration of efficiency and the need for laborsaving devices in education. Education is a large-scale industry; it should use quantity production methods. This does not mean, in any unfortunate sense, the mechanization of education. It does mean freeing the teacher from the drudgeries of her work so that she may do more real teaching, giving the pupil more adequate guidance in his learning. There may well be an ‘industrial revolution’ in education. The ultimate results should be highly beneficial. Perhaps only by such means can universal education be made effective.”

    Pressey intended his automated teaching and testing machines to individualize education. It’s an argument that’s made about teaching machines today too. These devices will allow students to move at their own pace through the curriculum. They will free up teachers’ time to work more closely with individual students.

    But as Petrina argues, “the effect of automation was control and standardization.”

    The Automatic Teacher was a technology of normalization, but it was at the same time a product of liberality. The Automatic Teacher provided for self-instruction and self-regulated, therapeutic treatment. It was designed to provide the right kind and amount of treatment for individual, scholastic deficiencies; thus, it was individualizing. Pressey articulated this liberal rationale during the 1920s and 1930s, and again in the 1950s and 1960s. Although intended as an act of freedom, the self-instruction provided by an Automatic Teacher also habituated learners to the authoritative norms underwriting self-regulation and self-governance. They not only learned to think in and about school subjects (arithmetic, geography, history), but also how to discipline themselves within this imposed structure. They were regulated not only through the knowledge and power embedded in the school subjects but also through the self-governance of their moral conduct. Both knowledge and personality were normalized in the minutiae of individualization and in the machinations of mass education. Freedom from the confines of mass education proved to be a contradictory project and, if Pressey’s case is representative, one more easily automated than commercialized.

    The massive influx of venture capital into today’s teaching machines, of course, would like to see otherwise…
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this review first appeared.

    Back to the essay