b2o

Reviews and analysis of scholarly books about digital technology and culture, as well as of articles, legal proceedings, videos, social media, digital humanities projects, and other emerging digital forms, offered from a humanist perspective, in which our primary intellectual commitment is to the deeply embedded texts, figures, themes, and politics that constitute human culture, regardless of the medium in which they occur.

  • Something About the Digital

    Something About the Digital

    By Alexander R. Galloway
    ~

    (This catalog essay was written in 2011 for the exhibition “Chaos as Usual,” curated by Hanne Mugaas at the Bergen Kunsthall in Norway. Artists in the exhibition included Philip Kwame Apagya, Ann Craven, Liz Deschenes, Thomas Julier [in collaboration with Cédric Eisenring and Kaspar Mueller], Olia Lialina and Dragan Espenschied, Takeshi Murata, Seth Price, and Antek Walczak.)

    There is something about the digital. Most people aren’t quite sure what it is. Or what they feel about it. But something.

    In 2001 Lev Manovich said it was a language. For Steven Shaviro, the issue is being connected. Others talk about “cyber” this and “cyber” that. Is the Internet about the search (John Battelle)? Or is it rather, even more primordially, about the information (James Gleick)? Whatever it is, something is afoot.

    What is this something? Given the times in which we live, it is ironic that this term is so rarely defined and even more rarely defined correctly. But the definition is simple: the digital means the one divides into two.

    Digital doesn’t mean machine. It doesn’t mean virtual reality. It doesn’t even mean the computer – there are analog computers after all, like grandfather clocks or slide rules. Digital means the digits: the fingers and toes. And since most of us have a discrete number of fingers and toes, the digital has come to mean, by extension, any mode of representation rooted in individually separate and distinct units. So the natural numbers (1, 2, 3, …) are aptly labeled “digital” because they are separate and distinct, but the arc of a bird in flight is not because it is smooth and continuous. A reel of celluloid film is correctly called “digital” because it contains distinct breaks between each frame, but the photographic frames themselves are not because they record continuously variable chromatic intensities.

    We must stop believing the myth, then, about the digital future versus the analog past. For the digital died its first death in the continuous calculus of Newton and Leibniz, and the curvilinear revolution of the Baroque that came with it. And the digital has suffered a thousand blows since, from the swirling vortexes of nineteenth-century thermodynamics, to the chaos theory of recent decades. The switch from analog computing to digital computing in the middle twentieth century is but a single battle in the multi-millennial skirmish within western culture between the unary and the binary, proportion and distinction, curves and jumps, integration and division – in short, over when and how the one divides into two.

    What would it mean to say that a work of art divides into two? Or to put it another way, what would art look like if it began to meditate on the one dividing into two? I think this is the only way we can truly begin to think about “digital art.” And because of this we shall leave Photoshop, and iMovie, and the Internet and all the digital tools behind us, because interrogating them will not nearly begin to address these questions. Instead look to Ann Craven’s paintings. Or look to the delightful conversation sparked here between Philip Kwame Apagya and Liz Deschenes. Or look to the work of Thomas Julier, even to a piece of his not included in the show, “Architecture Reflecting in Architecture” (2010, made with Cedric Eisenring), which depicts a rectilinear cityscape reflected inside the mirror skins of skyscrapers, just like Saul Bass’s famous title sequence in North By Northwest (1959).

    Liz Deschenes, “Green Screen #4” (2001)

    All of these works deal with the question of twoness. But it is twoness only in a very particular sense. This is not the twoness of the doppelganger of the romantic period, or the twoness of the “split mind” of the schizophrenic, and neither is it the twoness of the self/other distinction that so forcefully animated culture and philosophy during the twentieth century, particularly in cultural anthropology and then later in poststructuralism. Rather we see here a twoness of the material, a digitization at the level of the aesthetic regime itself.

    Consider the call and response heard across the works featured here by Apagya and Deschenes. At the most superficial level, one might observe that these are works about superimposition, about compositing. Apagya’s photographs exploit one of the oldest and most useful tricks of picture making: superimpose one layer on top of another layer in order to produce a picture. Painters do this all the time of course, and very early on it became a mainstay of photographic technique (even if it often remained relegated to mere “trick” photography), evident in photomontage, spirit photography, and even the side-by-side compositing techniques of the carte de visite popularized by André-Adolphe-Eugène Disdéri in the 1850s. Recall too that the cinema has made productive use of superimposition, adopting the technique with great facility from the theater and its painted scrims and moving backdrops. (Perhaps the best illustration of this comes at the end of A Night at the Opera [1935], when Harpo Marx goes on a lunatic rampage through the flyloft during the opera’s performance, raising and lowering painted backdrops to great comic effect.) So the more “modern” cinematic techniques of, first, rear screen projection, and then later chromakey (known commonly as the “green screen” or “blue screen” effect), are but a reiteration of the much longer legacy of compositing in image making.

    Deschenes’ “Green Screen #4” points to this broad aesthetic history, as it empties out the content of the image, forcing us to acknowledge the suppressed color itself – in this case green, but any color will work. Hence Deschenes gives us nothing but a pure background, a pure something.

    Allowed to curve gracefully off the wall onto the floor, the green color field resembles the “sweep wall” used commonly in portraiture or fashion photography whenever an artist wishes to erase the lines and shadows of the studio environment. “Green Screen #4” is thus the antithesis of what has remained for many years the signal art work about video chromakey, Peter Campus’ “Three Transitions” (1973). Whereas Campus attempted to draw attention to the visual and spatial paradoxes made possible by chromakey, and even in so doing was forced to hide the effect inside the jittery gaps between images, Deschenes by contrast feels no such anxiety, presenting us with the medium itself, minus any “content” necessary to fuel it, minus the powerful mise en abyme of the Campus video, and so too minus Campus’ mirthless autobiographical staging. If Campus ultimately resolves the relationship between images through a version of montage, Deschenes offers something more like a “divorced digitality” in which no two images are brought into relation at all, only the minimal substrate remains, without input or output.

    The sweep wall is evident too in Apagya’s images, only of a different sort, as the artifice of the various backgrounds – in a nod not so much to fantasy as to kitsch – both fuses with and separates from the foreground subject. Yet what might ultimately unite the works by Apagya and Deschenes is not so much the compositing technique, but a more general reference, albeit oblique but nevertheless crucial, to the fact that such techniques are today entirely quotidian, entirely usual. These are everyday folk techniques through and through. One needs only a web cam and simple software to perform chromakey compositing on a computer, just as one might go to the county fair and have one’s portrait superimposed on the body of a cartoon character.
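
    To make the point about “simple software” concrete, the sketch below is a minimal illustration in Python with NumPy, not the method of any artist or program discussed here, of what chromakey compositing amounts to; the function name, the green-dominance test, and the threshold value are illustrative assumptions only.

        import numpy as np

        def chroma_key(foreground, background, threshold=1.3):
            # foreground, background: H x W x 3 arrays of floats in [0, 1].
            # Wherever the foreground pixel is predominantly green, show the
            # background layer instead: one image divides into two layers.
            r = foreground[..., 0]
            g = foreground[..., 1]
            b = foreground[..., 2]
            is_screen = g > threshold * np.maximum(r, b)  # the keyed-out region
            return np.where(is_screen[..., None], background, foreground)

        # Illustrative use: a gray "subject" posed before a pure green sweep wall.
        fg = np.zeros((64, 64, 3))
        fg[..., 1] = 1.0                 # the green field
        fg[16:48, 16:48, :] = 0.5        # the sitter
        bg = np.random.rand(64, 64, 3)   # any painted backdrop will do
        composite = chroma_key(fg, bg)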

    What I’m trying to stress here is that there is nothing particularly “technological” about digitality. All that is required is a division from one to two – and by extension from two to three and beyond to the multiple. This is why I see layering as so important, for it spotlights an internal separation within the image. Apagya’s settings are digital, therefore, simply by virtue of the fact that he addresses our eye toward two incompatible aesthetic zones existing within the image. The artifice of a painted backdrop, and the pose of a person in a portrait.

    Certainly the digital computer is “digital” by virtue of being binary, which is to say by virtue of encoding and processing numbers at the lowest levels using base-two mathematics. But that is only the most prosaic and obvious exhibit of its digitality. For the computer is “digital” too in its atomization of the universe, into, for example, a million Facebook profiles, all equally separate and discrete. Or likewise “digital” too in the computer interface itself which splits things irretrievably into cursor and content, window and file, or even, as we see commonly in video games, into heads-up-display and playable world. The one divides into two.

    So when clusters of repetition appear across Ann Craven’s paintings, or the iterative layers of the “copy” of the “reconstruction” in the video here by Thomas Julier and Cédric Eisenring, or the accumulations of images that proliferate in Olia Lialina and Dragan Espenschied’s “Comparative History of Classic Animated GIFs and Glitter Graphics” [2007] (a small snapshot of what they have assembled in their spectacular book from 2009 titled Digital Folklore), or elsewhere in works like Oliver Laric’s clipart videos (“787 Cliparts” [2006] and “2000 Cliparts” [2010]), we should not simply recall the famous meditations on copies and repetitions, from Walter Benjamin in 1936 to Gilles Deleuze in 1968, but also a larger backdrop that evokes the very cleavages emanating from western metaphysics itself from Plato onward. For this same metaphysics of division is always already a digital metaphysics as it forever differentiates between subject and object, Being and being, essence and instance, or original and repetition. It shouldn’t come as a surprise that we see here such vivid aesthetic meditations on that same cleavage, whether or not a computer was involved.

    Another perspective on the same question would be to think about appropriation. There is a common way of talking about Internet art that goes roughly as follows: the beginning of net art in the middle to late 1990s was mostly “modernist” in that it tended to reflect back on the possibilities of the new medium, building an aesthetic from the material affordances of code, screen, browser, and jpeg, just as modernists in painting or literature built their own aesthetic style from a reflection on the specific affordances of line, color, tone, or timbre; whereas the second phase of net art, coinciding with “Web 2.0” technologies like blogging and video sharing sites, is altogether more “postmodern” in that it tends to co-opt existing material into recombinant appropriations and remixes. If something like the “WebStalker” web browser or the Jodi.org homepage are emblematic of the first period, then John Michael Boling’s “Guitar Solo Threeway,” Brody Condon’s “Without Sun,” or the Nasty Nets web surfing club, now sadly defunct, are emblematic of the second period.

    I’m not entirely unsatisfied by such a periodization, even if it tends to confuse as many things as it clarifies – not entirely unsatisfied because it indicates that appropriation too is a technique of digitality. As Martin Heidegger signals, by way of his notoriously enigmatic concept Ereignis, western thought and culture was always a process in which a proper relationship of belonging is established in a world, and so too appropriation establishes new relationships of belonging between objects and their contexts, between artists and materials, and between viewers and works of art. (Such is the definition of appropriation after all: to establish a belonging.) This is what I mean when I say that appropriation is a technique of digitality: it calls out a distinction in the object from “where it was prior” to “where it is now,” simply by removing that object from one context of belonging and separating it out into another. That these two contexts are merely different – that something has changed – is evidence enough of the digitality of appropriation. Even when the act of appropriation does not reduplicate the object or rely on multiple sources, as with the artistic ready-made, it still inaugurates a “twoness” in the appropriated object, an asterisk appended to the art work denoting that something is different.

    Takeshi Murata, “Cyborg” (2011)

    Perhaps this is why Takeshi Murata continues his exploration of the multiplicities at the core of digital aesthetics by returning to that age old format, the still life. Is not the still life itself a kind of appropriation, in that it brings together various objects into a relationship of belonging: fig and fowl in the Dutch masters, or here the various detritus of contemporary cyber culture, from cult films to iPhones?

    Because appropriation brings things together it must grapple with a fundamental question. Whatever is brought together must form a relation. These various things must sit side-by-side with each other. Hence one might speak of any grouping of objects in terms of their “parallel” nature, that is to say, in terms of the way in which they maintain their multiple identities in parallel.

    But let us dwell for a moment longer on these agglomerations of things, and in particular their “parallel” composition. By parallel I mean the way in which digital media tend to segregate and divide art into multiple, separate channels. These parallel channels may be quite manifest, as in the separate video feeds that make up the aforementioned “Guitar Solo Threeway,” or they may issue from the lowest levels of the medium, as when video compression codecs divide the moving image into small blocks of pixels that move and morph semi-autonomously within the frame. In fact I have found it useful to speak of this in terms of the “parallel image” in order to differentiate today’s media making from that of a century ago, which Friedrich Kittler and others have chosen to label “serial” after the serial sequences of the film strip, or the rat-ta-tat-tat of a typewriter.

    Thus films like Tatjana Marusic’s “The Memory of a Landscape” (2004) or Takeshi Murata’s “Monster Movie” (2005) are genuinely digital films, for they show parallelity in inscription. Each individual block in the video compression scheme has its own autonomy and is able to write to the screen in parallel with all the other blocks. These are quite literally, then, “multichannel” videos – we might even take a cue from online gaming circles and label them “massively multichannel” videos. They are multichannel not because they require multiple monitors, but because each individual block or “channel” within the image acts as an individual micro video feed. Each color block is its own channel. Thus, the video compression scheme illustrates, through metonymy, how pixel images work in general, and, as I suggest, it also illustrates the larger currents of digitality, for it shows that these images, in order to create “an” image must first proliferate the division of sub-images, which themselves ultimately coalesce into something resembling a whole. In other words, in order to create a “one” they must first bifurcate the single image source into two or more separate images.
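
    By way of illustration only, and assuming nothing about any actual codec or about the works discussed, the “massively multichannel” image can be sketched in Python with NumPy as a grid of blocks, each deciding for itself whether to rewrite its patch of the frame; the block size and change threshold here are arbitrary stand-ins.

        import numpy as np

        BLOCK = 8  # side length of each block in the grid

        def update_blocks(prev_frame, new_frame, change_threshold=0.05):
            # prev_frame, new_frame: H x W grayscale arrays, H and W multiples of BLOCK.
            # Each block acts as its own channel: it overwrites its region only when
            # that region has changed enough, independently of its neighbors.
            out = prev_frame.copy()
            height, width = new_frame.shape
            for y in range(0, height, BLOCK):
                for x in range(0, width, BLOCK):
                    old = prev_frame[y:y + BLOCK, x:x + BLOCK]
                    new = new_frame[y:y + BLOCK, x:x + BLOCK]
                    if np.mean(np.abs(new - old)) > change_threshold:
                        out[y:y + BLOCK, x:x + BLOCK] = new
            return out

        # Illustrative use: motion confined to one corner updates only those blocks,
        # while every other block persists from the previous frame.
        prev = np.random.rand(64, 64)
        nxt = prev.copy()
        nxt[:16, :16] = np.random.rand(16, 16)
        frame = update_blocks(prev, nxt)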

    The digital image is thus a cellular and discrete image, consisting of separate channels multiplexed in tandem or triplicate or, greater, into nine, twelve, twenty-four, one hundred, or indeed into a massively parallel image of a virtually infinite visuality.

    For me this generates a more appealing explanation for why art and culture has, over the last several decades, developed a growing anxiety over copies, repetitions, simulations, appropriations, reenactments – you name it. It is common to attribute such anxiety to a generalized disenchantment permeating modern life: our culture has lost its aura and can no longer discern an original from a copy due to endless proliferations of simulation. Such an assessment is only partially correct. I say only partially because I am skeptical of the romantic nostalgia that often fuels such pronouncements. For who can demonstrate with certainty that the past carried with it a greater sense of aesthetic integrity, a greater unity in art? Yet the assessment begins to adopt a modicum of sense if we consider it from a different point of view, from the perspective of a generalized digitality. For if we define the digital as “the one dividing into two,” then it would be fitting to witness works of art that proliferate these same dualities and multiplicities. In other words, even if there was a “pure” aesthetic origin it was a digital origin to begin with. And thus one needn’t fret over it having infected our so-called contemporary sensibilities.

    Instead it is important not to be blinded by the technology, but rather to determine that, within a generalized digitality, there must be some kind of differential at play. There must be something different, and without such a differential it is impossible to say that something is something (rather than something else, or indeed rather than nothing). The one must divide into something else. Nothing less and nothing more is required, only a generic difference. And this is our first insight into the “something” of the digital.

    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2004), Gaming: Essays on Algorithmic Culture (University of Minnesota, 2006), The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.

  • Cultivating Reform and Revolution

    Cultivating Reform and Revolution

    a review of William E. Connolly, The Fragility of Things: Self-Organizing Processes, Neoliberal Fantasies, and Democratic Activism (Duke University Press, 2013)
    by Zachary Loeb
    ~

    Mountains and rivers, skyscrapers and dams – the world is filled with objects and structures that appear sturdy. Glancing upwards at a skyscraper, or mountain, a person may know that these obelisks will not remain eternally unchanged, but in the moment of the glance we maintain a certain casual confidence that they are not about to crumble suddenly. Yet skyscrapers collapse, mountains erode, rivers run dry or change course, and dams crack under the pressure of the waters they hold. Even equipped with this knowledge it is still tempting to view such structures as enduringly solid. Perhaps the residents of Lisbon, in November of 1755, had a similar faith in the sturdiness of the city they had built, a faith that was shattered in an earthquake – and aftershocks – that demonstrated all too terribly the fragility at the core of all physical things.

    The Lisbon earthquake, along with its cultural reverberations, provides the point of entry for William E. Connolly’s discussion of neoliberalism, ecology, activism, and the deceptive solidness of the world in his book The Fragility of Things. Beyond its relevance as an example of the natural tremors that can reduce the built world to rubble, the Lisbon earthquake provides Connolly (the Krieger-Eisenhower Professor of Political Science at the Johns Hopkins University) a vantage point from which to mark out and critique a Panglossian worldview he sees as prominent in contemporary society. No doubt, were Voltaire’s Pangloss alive today, he could find ready employment as an apologist for neoliberalism (perhaps as one of Silicon Valley’s evangelists). Like Panglossian philosophy, neoliberalism “acknowledges many evils and treats them as necessary effects” (6).

    Though the world has changed significantly since the mid-18th century during which Voltaire wrote, humanity remains assaulted by events that demonstrate the world’s fragility. Connolly counsels against the withdrawal to which the protagonists of Candide finally consign themselves while taking up the famous trope Voltaire develops for that withdrawal; today we “cultivate our gardens” in a world in which the future of all gardens is uncertain. Under the specter of climate catastrophe, “to cultivate our gardens today means to engage the multiform relations late capitalism bears to the entire planet” (6). Connolly argues for an “ethic of cultivation” that can show “both how fragile the ethical life is and how important it is to cultivate it” (17). “Cultivation,” as developed in The Fragility of Things, stands in opposition to withdrawal. Instead it entails serious, ethically guided, activist engagement with the world – for us to recognize the fragility of natural, and human-made, systems (Connolly uses the term “force-fields”) and to act to protect this “fragility” instead of celebrating neoliberal risks that render the already precarious all the more tenuous.

    Connolly argues that when natural disasters strike, and often in their wake set off rippling cascades of additional catastrophes, they exemplify the “spontaneous order” so beloved by neoliberal economics. Under neoliberalism, the market is treated as though it embodies a uniquely omniscient, self-organizing and self-guiding principle. Yet the economic system is not the only one that can be described this way: “open systems periodically interact in ways that support, amplify, or destabilize one another” (25). Even in the so-called Anthropocene era the ecosystem, much to humanity’s chagrin, can still demonstrate creative and unpredictable potentialities. Nevertheless, the ideological core of neoliberalism relies upon celebrating the market’s self-organizing capabilities whilst ignoring the similar capabilities of governments, the public sphere, or the natural world. The ascendancy of neoliberalism runs parallel with an increase in fragility as economic inequality widens and as neoliberalism treats the ecosystem as just another profit source. Fragility is everywhere today, and though the cracks are becoming increasingly visible, it is still given – in Connolly’s estimation – less attention than is its due, even in “radical theory.” On this issue Connolly wonders if perhaps “radical theorists,” and conceivably radical activists, “fear that coming to terms with fragility would undercut the political militancy needed to respond to it?” (32). Yet Connolly sees no choice but to “respond,” envisioning a revitalized Left that can take action with a mixture of advocacy for immediate reforms while simultaneously building towards systemic solutions.

    Critically engaging with the thought of core neoliberal thinker and “spontaneous order” advocate Friedrich Hayek, Connolly demonstrates the way in which neoliberal ideology has been inculcated throughout society, even and especially amongst those whose lives have been made more fragile by neoliberalism: “a neoliberal economy cannot sustain itself unless it is supported by a self-conscious ideology internalized by most participants that celebrates the virtues of market individualism, market autonomy and a minimal state” (58). An army of Panglossian commentators must be deployed to remind the wary watchers that everything is for the best. That a high level of state intervention may be required to bolster and disseminate this ideology, and prop up neoliberalism, is wholly justified in a system that recognizes only neoliberalism as a source for creative self-organizing processes; indeed, “sometimes you get the impression that ‘entrepreneurs’ are the sole paradigms of creativity in the Hayekian world” (66). Resisting neoliberalism, for Connolly, requires remembering the sources of creativity that occur outside of a market context and seeing how these other systems demonstrate self-organizing capacities.

    Within neoliberalism the market is treated as the ethical good, but Connolly works to counter this with “an ethic of cultivation” which works not only against neoliberalism but against certain elements of Kant’s philosophy. In Connolly’s estimation Kantian ethics provide some of the ideological shoring up for neoliberalism, as at times “Kant both prefigures some existential demands unconsciously folded into contemporary neoliberalism and reveals how precarious they in fact are. For he makes them postulates” (117). Connolly sees a certain similarity between the social conditioning that Kant saw as necessary for preparing the young to “obey moral law” and the ideological conditioning that trains people for life under neoliberalism – what is shared is a process by which a self-organizing system must counter people’s own self-organizing potential by organizing their reactions. Furthermore “the intensity of cultural desires to invest hopes in the images of self-regulating interest within markets and/or divine providence wards off acknowledgment of the fragility of things” (118). Connolly’s “ethic of cultivation” appears as a corrective to this ethic of inculcation – it features “an element of tragic possibility within it” (133) which is the essential confrontation with the “fragility” that may act as a catalyst for a new radical activism.

    In the face of impending doom neoliberalism will once more have an opportunity to demonstrate its creativity even as this very creativity will have reverberations that will potentially unleash further disasters. Facing the possible catastrophe means that “we may need to recraft the long debate between secular, linear, and deterministic images of the world on the one hand and divinely touched, voluntarist, providential, and/or punitive images on the other” (149). Creativity, and the potential for creativity, is once more essential – as it is the creativity in multiple self-organizing systems that has created the world, for better or worse, around us today. Bringing his earlier discussions of Kant into conversation with the thought of Whitehead and Nietzsche, Connolly further considers the place of creative processes in shaping and reshaping the world. Nietzsche, in particular, provides Connolly with a way to emphasize the dispersion of creativity by removing the province of creativity from the control of God to treat it as something naturally recurring across various “force-fields.” A different demand thus takes shape wherein “we need to slow down and divert human intrusions into various planetary force fields, even as we speed up efforts to reconstitute the identities, spiritualities, consumption practices, market faiths, and state policies entangled with them” (172) though neoliberalism knows but one speed: faster.

    An odd dissonance occurs at present wherein people are confronted with the seeming triumph of neoliberal capitalism (one can hear the echoes of “there is no alternative”) and the warnings pointing to the fragility of things. In this context, for Connolly, withdrawal is irresponsible; it would be to “cultivate a garden” when what is needed is an “ethic of cultivation.” Neoliberal capitalism has trained people to accept the strictures of its ideology, but now is a time when different roles are needed; it is a time to become “role experimentalists” (187). Such experiments may take a variety of forms that run the gamut from “reformist” to “revolutionary” and back again, but the process of such experimentation can break the training of neoliberalism and demonstrate other ways of living, interacting, being and having. Connolly does not put forth a simple solution for the challenges facing humanity; instead he emphasizes how recognizing the “fragility of things” allows for people to come to terms with these challenges. After all, it may be that neoliberalism only appears so solid because we have forgotten that it is not actually a naturally occurring mountain but a human-built pyramid – and our backs are its foundation.

    * * *

    In the “First Interlude,” on page 45, Connolly poses a question that haunts the remainder of The Fragility of Things, the question – asked in the midst of a brief discussion of the 2011 Lars von Trier film Melancholia – is, “How do you prepare for the end of the world?” It is the sort of disarming and discomforting question that in its cold honesty forces readers to face a conclusion they may not want to consider. It is a question that evokes the deceptively simple acronym FRED (Facing the Reality of Extinction and Doom). And yet there is something refreshing in the question – many have heard the recommendations about what must be done to halt climate catastrophe, but how many believe these steps will be taken? Indeed, even though Connolly claims “we need to slow down” there are also those who, to the contrary, insist that what is needed is even greater acceleration. Granted, Connolly does not pose this question on the first page of his book, and had he done so The Fragility of Things could have easily appeared as a dismissible dirge. Wisely, Connolly recognizes that “a therapist, a priest, or a philosopher might stutter over such questions. Even Pangloss might hesitate” (45); one of the core strengths of The Fragility of Things is that it does not “stutter over such questions” but realizes that such questions require an honest reckoning. Which includes being willing to ask “How do you prepare for the end of the world?”

    William Connolly’s The Fragility of Things is both ethically and intellectually rigorous, demanding readers perceive the “fragility” of the world around them even as it lays out the ways in which the world around them derives its stability from making that very fragility invisible. Though it may seem that there are relatively simple concerns at the core of The Fragility of Things, Connolly never succumbs to simplistic argumentation – preferring the fine-toothed complexity that allows moments of fragility to be fully understood. The tone and style of The Fragility of Things feel as though the book assumes its readership will consist primarily of academics, activists, and those who see themselves as both. It is a book that wastes no time trying to convince its reader that “climate change is real” or “neoliberalism is making things worse,” and the book is more easily understood if a reader begins with at least a basic acquaintance with the thought of Hayek, Kant, Whitehead, and Nietzsche. Even if not every reader of The Fragility of Things has dwelled for hours upon the question of “How do you prepare for the end of the world?” the book seems to expect that this question lurks somewhere in the subconscious of the reader.

    Amidst his discussions of ethics, fragility, and neoliberalism, Connolly devotes much of the book to arguing for the need for a revitalized, active, and committed Left – one that would conceivably do more than hold large marches and then disappear. While Connolly cautions against “giving up” on electoral politics, he does evince a distrust of US party politics; to the extent that Connolly appears to be a democrat, it is as a democrat with a lowercase d. Drawing inspiration from the wave of protests in and around 2011, Connolly expresses the need for a multi-issue, broadly supported, international (and internationalist) Left that can organize effectively to win small-scale local reforms while building the power to truly challenge the grip of neoliberalism. The goal, as Connolly envisions it, is to eventually “mobilize enough collective energy to launch a general strike simultaneously in several countries in the near future” even as Connolly remains cognizant of threats that “the emergence of a neofascist or mafia-type capitalism” can pose (39). Connolly’s focus on the often slow, “traditional” activist strategies of organizing should not be overlooked, as his focus on mobilizing large numbers of people acts as a retort to a utopian belief that “technology will fix everything.” The “general strike” as the democratic response once electoral democracy has gone awry is a theme that Connolly concludes with as he calls for his readership to take part in helping to bring together “a set of interacting minorities in several countries for the time when we coalesce around a general strike launched in several states simultaneously” (195). Connolly emphasizes the types of localized activism and action that are also necessary, but “the general strike” is iconic as the way to challenge neoliberalism. In emphasizing “the general strike” Connolly stakes out a position in which people have an obligation to actively challenge existing neoliberalism; waiting for capitalism to collapse due to its own contradictions (and trying to accelerate these contradictions) does not appear to be a viable tactic.

    All of which raises something of a prickly question for The Fragility of Things: which element of the book strikes the reader as more outlandish, the question of how to prepare for the end of the world, or the prospect of a renewed Left launching “a general strike…in the near future”? This question is not asked idly or as provocation; the goal here is in no way to traffic in Leftist apocalyptic romanticism. Yet experience in current activism and organizing does not necessarily imbue one with great confidence in the prospect of a city-wide general strike (in the US), to say nothing of an international one. Activists may be acutely aware of the creative potentials and challenges faced by repressed communities, precarious labor, the ecosystem, and so forth – but these same activists are aware of the solidity of militarized police forces, a reactionary culture industry, and neoliberal dominance. Current, committed activists’ awareness of the challenges they face makes it seem rather odd that Connolly suggests that radical theorists have ignored “fragility.” Indeed many radical thinkers, or at least some (Grace Lee Boggs and Franco “Bifo” Berardi, to name just two), seem to have warned consistently of “fragility” – even if they do not always use that exact term. Nevertheless, here the challenge may not be the Sisyphean work of activism but the rather cynical answer many non-activists give to the question of “How does one prepare for the end of the world?” That answer? Download some new apps, binge-watch a few shows, enjoy the sci-fi cool of the latest gadget, and otherwise eat, drink and be merry because we’ll invent something to solve tomorrow’s problems next week. Neoliberalism has trained people well.

    That answer, however, is the type that Connolly seems to find untenable, and his apparent hope in The Fragility of Things is that most readers will also find this answer unacceptable. Thus Connolly’s “ethic of cultivation” returns and shows its value again. “Our lives are messages” (185) Connolly writes and thus the actions that an individual takes to defend “fragility” and oppose neoliberalism act as a demonstration to others that different ways of being are possible.

    What The Fragility of Things makes clear is that an “ethic of cultivation” is not a one-off event but an ongoing process – cultivating a garden, after all, is something that takes time. Some gardens require years of cultivation before they start to bear fruit.

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck. He is a frequent contributor to The b2 Review Digital Studies section.

  • Trickster Makes This Web: The Ambiguous Politics of Anonymous

    Trickster Makes This Web: The Ambiguous Politics of Anonymous

    a review of Gabriella Coleman, Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous (Verso, 2014)
    by Gavin Mueller
    ~

    Gabriella Coleman’s Hacker, Hoaxer, Whistleblower, Spy (HHWS) tackles a difficult and pressing subject: the amorphous hacker organization Anonymous. The book is not a strictly academic work. Rather, it unfolds as a rather lively history of a subculture of geeks, peppered with snippets of cultural theory and autobiographical portions. As someone interested in a more sustained theoretical exposition of Anonymous’s organizing and politics, I was a bit disappointed, though Coleman has opted for a more readable style. In fact, this is the book’s best asset. However, while containing a number of insights of interest to the general reader, the book ultimately falters as an assessment of Anonymous’s political orientation, or the state of hacker politics in general.

    Coleman begins with a discussion of online trolling, a common antagonistic online cultural practice; many Anons cut their troll teeth at the notorious 4chan message board. Trolling aims to create “lulz,” a kind of digital schadenfreude produced by pranks, insults and misrepresentations. According to Coleman, the lulz are “a form of cultural differentiation and a tool or weapon used to attack, humiliate, and defame” rooted in the use of “inside jokes” of those steeped in the codes of Internet culture (32). Coleman argues that the lulz has a deeper significance: they “puncture the consensus around our politics and ethics, our social lives and our aesthetic sensibilities.” But trolling can be better understood through an offline frame of reference: hazing. Trolling is a means by which geeks have historically policed the boundaries of the subcultural corners of the Internet. If you can survive the epithets and obscene pictures, you might be able to hang. That trolling often takes the form of misogynist, racist and homophobic language is unsurprising: early Net culture was predominantly white and male, a demographic fact which overdetermines the shape of resentment towards “newbies” (or in 4chan’s unapologetically offensive argot, “newfags”). The lulz is joy that builds community, but almost always at someone else’s expense.

    Coleman, drawing upon her background as an anthropologist, conceptualizes the troll as an instantiation of the trickster archetype which recurs throughout mythology and folklore. Tricksters, she argues, like trolls and Anonymous, are liminal figures who defy norms and revel in causing chaos. This kind of application of theory is a common technique in cultural studies, where seemingly apolitical or even anti-social transgressions, like punk rock or skateboarding, can be politicized with a dash of Bakhtin or de Certeau. Here it creates difficulties. There is one major difference between the spider spirit Anansi and Coleman’s main informant on trolling, the white supremacist hacker weev: Anansi is fictional, while weev is a real person who writes op-eds for neo-Nazi websites. The trickster archetype, a concept crafted for comparative structural analysis of mythology, does little to explain the actually existing social practice of trolling. Instead it renders it more complicated, ambiguous, and uncertain. These difficulties are compounded as the analysis moves to Anonymous. Anonymous doesn’t merely enact a submerged politics via style or symbols. It engages in explicitly political projects, complete with manifestos, though Coleman continues to return to transgression as one of its salient features.

    The trolls of 4chan, from which Anonymous emerged, developed a culture of compulsory anonymity. In part, this was technological: unlike other message boards and social media, posting on 4chan requires no lasting profile, no consistent presence. But there was also a cultural element to this. Identifying oneself is strongly discouraged in the community. Fittingly, its major trolling weapon is doxing: revealing personal information to facilitate further harassment offline (prank calls, death threats, embarrassment in front of employers). As Whitney Phillips argues, online trolling often acts as a kind of media critique: by enforcing anonymity and rejecting fame or notoriety, Anons oppose the now-dominant dynamics of social media and personal branding which have colonized much of the web, and threaten their cherished subcultural practices, which are more adequately enshrined in formats such as image boards and IRC. In this way, Anonymous deploys technological means to thwart the dominant social practices of technology, a kind of wired Luddism. Such practices proliferate in the communities of the computer underground, which has been steeped in an omnipresent prelapsarian nostalgia since at least the “eternal September” of the early 1990s.

    HHWS’s overarching narrative is the emergence of Anonymous out of the cesspits of 4chan and into political consciousness: trolling for justice instead of lulz. The compulsory anonymity of 4chan, in part, determined Anonymous’s organizational form: Anonymous lacks formal membership, instead formed from entirely ad hoc affiliations. The brand itself can be selectively deployed or disavowed, leading to much argumentation and confusion. Coleman provides an insider perspective on how actions are launched: there is debate, occasionally a rough consensus, and then activity, though several times individuals opt to begin an action, dragging along a number of other participants of varying degrees of reluctance. Tactics are formalized in an experimental, impromptu way. In this, I recognized the way actions formed in the Occupy encampments. Anonymous, as Coleman shows, was an early Occupy Wall Street booster, and her analysis highlights the connection between the Occupy form and the networked forms of sociality exemplified by Anonymous. After reading Coleman’s account, I am much more convinced of Anonymous’s importance to the movement. Likewise, many criticisms of Occupy could also be levelled at Anonymous; Coleman cites Jo Freeman’s “The Tyranny of Structurelessness” as one candidate.

    If Anonymous can be said to have a coherent political vision, it is one rooted in civil liberties, particularly freedom of speech and opposition to censorship efforts. Indeed, Coleman earns the trust of several hackers by her affiliation with the Electronic Frontier Foundation, nominally the digital equivalent to the ACLU (though some object to this parallel, due in part to EFF’s strong ties to industry). Geek politics, from Anonymous to Wikileaks to the Pirate Bay, are a weaponized form of the mantra “information wants to be free.” Anonymous’s causes seem to fit these concerns perfectly: Scientology’s litigious means of protecting its secrets provoked its wrath, as did the voluntary withdrawal of services to Wikileaks by PayPal and Mastercard, and the Bay Area Rapid Transit police’s blacking out of cell phone signals to scuttle a protest.

    I’ve referred to Anonymous as geeks rather than hackers deliberately. Hackers — skilled individuals who can break into protected systems — participate in Anonymous, but many of the Anons pulled from 4chan are merely pranksters with above-average knowledge of the Internet and computing. This gets the organization in quite a bit of trouble when it engages in the political tactic of most interest to Coleman, the distributed denial of service (DDoS) attack. A DDoS floods a website with requests, overwhelming its servers. This technique has captured the imaginations of a number of scholars, including Coleman, with its resemblance to offline direct action like pickets and occupations. However, the AnonOps organizers falsely claimed that their DDoS app, the Low-Orbit Ion Cannon, ensured user anonymity, leading to a number of Anons facing serious criminal charges. Coleman curiously places the blame for this startling breach of operational security on journalists writing about AnonOps, rather than on the organizers themselves. Furthermore, many DDoS attacks, including those launched by Anonymous, have relied on botnets, which draw power from hundreds of hijacked computers; such attacks bear little resemblance to any kind of democratic initiative. Of course, this isn’t to say that the harsh punishments meted out to Anons under the auspices of the Computer Fraud and Abuse Act are warranted, but that political tactics must be subjected to scrutiny.

    Coleman argues that Anonymous outgrew its narrow civil libertarian agenda with its involvement in the Arab Spring: “No longer was the group bound to Internet-y issues like censorship and file-sharing” (148). However, by her own account, it is opposition to censorship which truly animates the group. The #OpTunisia manifesto (Anonymous names its actions with the prefix “Op,” for operations, along with the ubiquitous Twitter-based hashtag) states plainly, “Any organization involved in censorship will be targeted” (ibid). Anons were especially animated by the complete shut-off of the Internet in Tunisia and Egypt, actions which shattered the notion of the Internet as a space controlled by geeks, not governments. Anonymous operations launched against corporations did not oppose capitalist exploitation but fought corporate restrictions on online conduct. These are laudable goals, but also limited ones, and are often compatible with Silicon Valley companies, as illustrated by the Google-friendly anti-SOPA/PIPA protests.

    Coleman is eager to distance Anonymous from the libertarian philosophies rife in geek and hacker circles, but its politics are rarely incompatible with such a perspective. The most recent Guy Fawkes Day protest I witnessed in Washington, D.C., full of mask-wearing Anons, displayed a number of slogans emerging from the Ron Paul camp, “End the Fed” prominent among them. There is no accounting for this in HHWS. It is clear that political differences among Anons exist, and that any analysis must be nuanced. But Coleman’s description of this nuance ultimately doesn’t delineate the political positions within the group and how they coalesce, opting to elide these differences in favor of a more protean focus on “transgression.” In this way, she is able to provide a conceptual coherence for Anonymous, albeit at the expense of a detailed examination of the actual politics of its members. In the final analysis, “Anonymous became a generalized symbol for dissent, a medium to channel deep disenchantment… basically, with anything” (399).

    As political concerns overtake the lulz, Anonymous wavers as smaller militant hacker crews LulzSec and AntiSec take the fore, doxing white hat security executives, leaking documents, and defacing websites. This frustrates Coleman: “Anonymous had been exciting to me for a specific reason: it was the largest and most populist disruptive grassroots movement the Internet had, up to that time, fomented. But it felt, suddenly like AnonOps/Anonymous was slipping into a more familiar state of hacker-vanguardism” (302). Yet it is at this moment that Coleman offers a revealing account of hacker ideology: its alignment with the philosophy of Friedrich Nietzsche. From 4chan’s trolls scoffing at morality and decency, to hackers disregarding technical and legal restraints on accessing information, to the collective’s general rejection of any standard form of accountability, Anonymous truly seems to posit itself as beyond good and evil. Coleman herself confesses to being “overtly romantic” as she supplies alibis for the group’s moral and strategic failures (it is, after all, incredibly difficult for an ethnographer to criticize her informants). But Nietzsche was a profoundly undemocratic thinker, whose avowed elitism should cast more of a disturbing shadow over the progressive potentials behind hacker groups than it does for Coleman, who embraces the ability of hackers to “cast off — at least momentarily — the shackles of normativity and attain greatness” (275). Coleman’s previous work on free software programmers convincingly makes the case for a Nietzschean current running through hacker culture; I am considerably more skeptical than she is about the liberal democratic viewpoint this engenders.

    Ultimately, Coleman concludes that Anonymous cannot work as a substitute for existing organizations, but that its tactics should be taken up by other political formations: “The urgent question is how to promote cross-pollination” between Anonymous and more formalized structures (374). This may be warranted, but there needs to be a fuller accounting of the drawbacks to Anonymous. Because anyone can fly its flag, and because its actions are guided by talented and charismatic individuals working in secret, Anonymous is ripe for infiltration. Historically, hackers have proven to be easy for law enforcement and corporations to co-opt, not the least because of the ferocious rivalries amongst hackers themselves. Tactics are also ambiguous. A DDoS can be used by anti-corporate activists, or by corporations against their rivals and enemies. Document dumps can ruin a diplomatic initiative, or a woman’s social life. Public square occupations can be used to advocate for democracy, or as a platform for anti-democratic coups. Currently, a lot of the same geek energy behind Anonymous has been devoted to the misogynist vendetta GamerGate (in a Reddit AMA, Coleman adopted a diplomatic tone, referring to GamerGate as “a damn Gordian knot”). Without a steady sense of Anonymous’s actual political commitments, outside of free speech, it is difficult to do much more than marvel at the novelty of their media presence (which wears thinner with each overwrought communique). With Hacker, Hoaxer, Whistleblower, Spy, Coleman has offered a readable account of recent hacker history, but I remain unconvinced of Anonymous’s political potential.

    _____

    Gavin Mueller (@gavinsaywhat) is a PhD candidate in cultural studies at George Mason University, and an editor at Jacobin and Viewpoint Magazine.

  • Is the Network a Brain?

    Is the Network a Brain?

    a review of Andrew Pickering, The Cybernetic Brain: Sketches of Another Future (University of Chicago Press, 2011)
    by Jonathan Goodwin
    ~

    Evgeny Morozov’s recent New Yorker article about Project Cybersyn in Allende’s Chile caused some controversy when critics accused Morozov of not fully acknowledging his sources. One of those sources was sociologist of science Andrew Pickering’s The Cybernetic Brain. Morozov is quoted as finding Pickering’s book “awful.” It’s unlikely that Morozov meant “awful” in the sense of “awe-inspiring,” but that was closer to my reaction after reading Pickering’s 500+ pp. work on the British tradition in cybernetics. This tradition was less militarist and more artistic, among other qualities, in Pickering’s account, than is popularly understood. I found myself greatly intrigued—if not awed—by the alternate future that his subtitle and final chapter announces. Cybernetics is now a largely forgotten dead-end in science. And the British tradition that Pickering describes had relatively little influence within cybernetics itself. So what is important about it now, and what is the nature of this other future that Pickering sketches?

    The major figures of this book, which proceeds with overviews of their careers, views, and accomplishments, are Grey Walter, Ross Ashby, Gregory Bateson, R. D. Laing, Stafford Beer, and Gordon Pask. Stuart Kauffman’s and Stephen Wolfram’s work on complexity theory also makes an appearance.[1] Laing and Bateson’s relevance may not be immediately clear. Pickering’s interest in them derives from their extension of cybernetic ideas to the emerging technologies of the self in the 1960s. Both Bateson and Laing approached schizophrenia as an adaptation to the increasing “double-binds” of Western culture, and both looked to Eastern spiritual traditions and chemical methods of consciousness-alteration as potential treatments. The Bateson and Laing material makes the most direct reference to the connection between the cybernetic tradition and the “Californian Ideology” that animates much Silicon Valley thinking. Stewart Brand was influenced by Bateson’s Steps to an Ecology of Mind (183), for example. Pickering identifies Northern California as the site where cybernetics migrated into the counterculture. It is arguable that cybernetics, as a technology of control, has through this countercultural migration become part of the ruling ideology of the present moment. Pickering recognizes this but seems to concede that the inherent topicality would detract from the focus of his work. It is a facet that would be of interest to the readers of this “Digital Studies” section of The b2 Review, however, and I will thus return to it at the end of this review.

    Pickering’s path to Bateson and Laing originates with Grey Walter’s and Ross Ashby’s pursuit of cybernetic models of the brain. Computational models of the brain, though originally informed by cybernetic research, quickly replaced it in Pickering’s account (62). He asks why computational models of the brain quickly gathered so much cultural interest. Rodney Brooks’s robots, with their more embodied approach, Pickering argues, are in the tradition of Walter’s tortoises and outside the symbolic tradition of artificial intelligence. I find it noteworthy that the neurological underpinnings of early cybernetics were so strongly influenced by behaviorism. Computationalist approaches, associated by Pickering with the establishment or “royal” science, here, were intellectually formed by an attack on behaviorism. Pickering even addresses this point obliquely, when he wonders why literary scholars had not noticed that the octopus in Gravity’s Rainbow was apparently named “Grigori” in homage to Gregory Bateson (439n13).[2] I think one reason this hasn’t been noticed is that it’s much more likely that the name was random but for its Slavic form, which is clearly in the same pattern of references to Russian behaviorist psychology that informs Pynchon’s novel. An offshoot of behaviorism inspiring a countercultural movement devoted to freedom and experimentation seems peculiar.

    One of Pickering’s key insights into this alternate tradition of cybernetics is that its science is performative. Rather than being as theory-laden as are the strictly computationalist approaches, cybernetic science often studied complex systems as assemblages whose interactions generated novel insights. Contrast this epistemology to what critics point to as the frequent invocation of the Duhem-Quine thesis by Noam Chomsky.[3] For Pickering, Ross Ashby’s version of cybernetics was a “supremely general and protean science” (147). As it developed, the brain lost its central place and cybernetics became a “freestanding general science” (147). As I mentioned, the chapter on Ashby closes with a consideration of the complexity science of Stuart Kauffman and Stephen Wolfram. That Kauffman and Wolfram largely have worked outside mainstream academic institutions is important for Pickering.[4] Christopher Alexander’s pattern language in architecture is a third example. Pickering mentions that Alexander’s concept was influential in some areas of computer science; the notion of “design patterns” in object-oriented programming is generally considered to have been influenced by Alexander’s ideas.

    I mention this connection because many of the alternate traditions in cybernetics have become mainstream influences in contemporary digital culture. It is difficult to imagine Laing and Bateson’s alternative therapeutic ideas having any resonance in that culture, however. The doctrine that “selves are endlessly complex and endlessly explorable” (211) is sometimes proposed as something the internet facilitates, but the inevitable result of anonymity and pseudonymity in internet discourse is the enframing of hierarchical relations. I realize this point may sound controversial to those with a more benign or optimistic view of digital culture. That this countercultural strand of cybernetic practice has clear parallels with much digital libertarian rhetoric is hard to dispute. Again, Pickering is not concerned in the book with tracing these contemporary parallels. I mention them because of my own interest and this venue’s presumed interest in the subject.

    The progression that begins with some variety of conventional rationalism, extends through a career in cybernetics, and ends in some variety of mysticism is seen in almost all of the figures that Pickering profiles in The Cybernetic Brain. Perhaps the clearest example—and the most fascinating—is that of Stafford Beer. Philip Mirowski’s review of Pickering’s book refers to Beer as “a slightly wackier Herbert Simon.” Pickering enjoys recounting the adventures of the wizard of Prang, a work that Beer composed after he had moved to a remote Welsh village and renounced many of the world’s pleasures. Beer’s involvement in Project Cybersyn makes him perhaps the best known of the figures profiled in this book.[5] What perhaps fascinates Pickering more than anything else in Beer’s work is the concept of viability. From early in his career, Beer advocated for upwardly viable management strategies. The firm, in his model, would not need a brain: “it would react to changing circumstances; it would grow and evolve like an organism or species, all without any human intervention at all” (225). Mirowski’s review compares Beer to Friedrich Hayek and accuses Pickering of refusing to engage with this seemingly obvious intellectual affinity.[6] Beer’s intuitions in this area led him to experiment with biological and ecological computing; Pickering surmises that Douglas Adams’s superintelligent mice derived from Beer’s murine experiments in this area (241).

    In a review of a recent translation of Stanislaw Lem’s Summa Technologiae, Pickering calls the idea that natural adaptive systems are like brains and can be harnessed for intelligence amplification the most “amazing idea in the history of cybernetics” (247).[7] Despite its association with the dreaded “synergy” (the original “syn” of Project Cybersyn), Beer’s viable system model never became a management fad (256). Alexander Galloway has recently written here about the “reticular fallacy,” the notion that de-centralized forms of organization are necessarily less repressive than centralized or hierarchical forms. Beer’s viable system model proposes an emergent and non-hierarchical management system that would increase “eudemony” (general well-being, another of Beer’s not-quite-original neologisms [272]). Beer’s turn towards Tantric mysticism seems somehow inevitable in Pickering’s narrative of his career. The syntegric icosahedron, one of Beer’s late baroque flourishes, reminded me quite a bit of a Paul Laffoley painting. Syntegration as a concept takes reticularity to a level of mysticism rarely achieved by digital utopians. Pickering concludes the chapter on Beer with a discussion of his influence on Brian Eno’s ambient music.

    Paul Laffoley, “The Orgone Motor” (1981). Image source: paullaffoley.net.

    The discussion of Eno chides him for not reading Gordon Pask’s explicitly aesthetic cybernetics (308). Pask is the final cybernetician of Pickering’s study and perhaps the most eccentric. Pickering describes him as a model for Patrick Troughton’s Doctor Who (475n3), and his synaesthetic work in cybernetics, with projects like the Musicolor, is explicitly theatrical. With Joan Littlewood, Pask planned a theatrical performance that would directly incorporate audience feedback into the production—not just at the level of applause or hiss, but through audience interest in a particular character—a kind of choose-your-own-adventure theater (348-49). Pask’s work in interface design has been identified as an influence on hypertext (464n17). A great deal of the chapter on Pask involves his influence on British countercultural arts and architecture movements in the 1960s. Mirowski’s review curtly notes that even the anti-establishment Gordon Pask was funded by the Office of Naval Research for fifteen years (194). Mirowski also accuses Pickering of ignoring the computer as the emblematic cultural artifact of the cybernetic worldview (195). Pask is the strongest example offered of an alternate future of computation and social organization, but it is difficult to imagine his cybernetic present.

    The final chapter of Pickering’s book is entitled “Sketches of Another Future.” What is called “maker culture” combined with the “internet of things” might lead some prognosticators to imagine an increasingly cybernetic digital future. Cybernetic, that is, not in the sense of increasing what Mirowski refers to as the neoliberal “background noise of modern culture” but as a “challenge to the hegemony of modernity” (393). Before reading Pickering’s book, I would have regarded such a prediction with skepticism. I still do, but Pickering has argued that an alternate—and more optimistic—perspective is worth taking seriously.

    _____

    Jonathan Goodwin is Associate Professor of English at the University of Louisiana, Lafayette. He is working on a book about cultural representations of statistics and probability in the twentieth century.

    Back to the essay

    _____

    [1] Wolfram was born in England, though he has lived in the United States since the 1970s. Pickering taught at the University of Illinois while this book was being written, and he mentions having several interviews with Wolfram, whose company Wolfram Research is based in Champaign, Illinois (457n73). Pickering’s discussion of Wolfram’s A New Kind of Science is largely neutral; for a more skeptical view, see Cosma Shalizi’s review.

    [2] Bateson experimented with octopuses, as Pickering describes. Whether Pynchon knew about this, however, remains doubtful. Pickering’s note may also be somewhat facetious.

    [3] See the interview with George Lakoff in Ideology and Linguistic Theory: Noam Chomsky and the Deep Structure Debates, ed. Geoffrey J. Huck and John A. Goldsmith (New York: Routledge, 1995), p. 115. Lakoff’s account of Chomsky’s philosophical justification for his linguistic theories is tendentious; I mention it here because of the strong contrast, even in caricature, with the performative quality of the cybernetic research Pickering describes.

    [4] Though it is difficult to think of the Santa Fe Institute this way now.

    [5] For a detailed cultural history of Project Cybersyn, see Eden Medina, Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile (MIT Press, 2011). Medina notes that Beer formed the word “algedonic” from two words meaning “pain” and “pleasure,” but the OED notes an example in the same sense from 1894. This citation does not rule out independent coinage, of course. Curiously enough, John Fowles uses the term in The Magus (1966), where it could have easily been derived from Beer.

    [6] Hayek’s name appears neither in the index nor the reference list. It does seem a curious omission in the broader intellectual context of cybernetics.

    [7] Though there is a reference to Lem’s fiction in an endnote (427n25), Summa Technologiae, a visionary exploration of cybernetic philosophy dating from the early 1960s, does not appear in Pickering’s work. A complete English translation only recently appeared, and I know of no evidence that Pickering’s principal figures were influenced by Lem at all. The book, as Pickering’s review acknowledges, is astonishingly prescient and highly recommended for anyone interested in the culture of cybernetics.

  • Network Pessimism

    Network Pessimism

    By Alexander R. Galloway
    ~

    I’ve been thinking a lot about pessimism recently. Eugene Thacker has been deep in this material for some time already. In fact he has a new, lengthy manuscript on pessimism called Infinite Resignation, which is a bit of a departure from his other books in terms of tone and structure. I’ve read it and it’s excellent. Definitely “the worst” he’s ever written! Following the style of other treatises from the history of philosophical pessimism–Leopardi, Cioran, Schopenhauer, Kierkegaard, and others–the bulk of the book is written in short aphorisms. The language is very poetic, and some sections are driven by his own memories and meditations, all in an attempt to plumb the deepest, darkest corners of the worst the universe has to offer.

    Meanwhile, the worst can’t stay hidden. Pessimism has made it to prime time, to NPR, and even right-wing media. Despite all this attention, Eugene seems to have little interest in showing his manuscript to publishers. A true pessimist! Not to worry, I’m sure the book will see the light of day eventually. Or should I say dead of night? When it does, the book is sure to sadden, discourage, and generally worsen the lives of Thacker fans everywhere.

    Interestingly, pessimism also appears in a number of other authors and fields. I’m thinking, for instance, of critical race theory and the concept of Afro-pessimism. The work of Fred Moten and Frank B. Wilderson, III is particularly interesting in that regard. Likewise queer theory has often wrestled with pessimism, be it the “no future” debates around reproductive futurity, or what Anna Conlan has simply labeled “homo-pessimism,” that is, the way in which the “persistent association of homosexuality with death and oppression contributes to a negative stereotype of LGBTQ lives as unhappy and unhealthy.”[1]

    In his review of my new book, Andrew Culp made reference to how some of this material has influenced me. I’ll be posting more on Moten and these other themes in the future, but let me here describe, in very general terms, how the concept of pessimism might apply to contemporary digital media.

    *

    A previous post was devoted to the reticular fallacy, defined as the false assumption that the erosion of hierarchical organization leads to an erosion of organization as such. Here I’d like to address the related question of reticular pessimism or, more simply, network pessimism.

    Network pessimism relies on two basic assumptions: (1) “everything is a network”; (2) “the best response to networks is more networks.”

    Who says everything is a network? Everyone, it seems. In philosophy, Bruno Latour: ontology is a network. In literary studies, Franco Moretti: Hamlet is a network. In the military, Donald Rumsfeld: the battlefield is a network. (But so too our enemies are networks: the terror network.) Art, architecture, managerial literature, computer science, neuroscience, and many other fields–all have shifted prominently in recent years toward a network model. Most important, however, is the contemporary economy and the mode of production. Today’s most advanced companies are essentially network companies. Google monetizes the shape of networks (in part via clustering algorithms). Facebook has rewritten subjectivity and social interaction along the lines of canalized and discretized network services. The list goes on and on. Thus I characterize the first assumption — “everything is a network” — as a kind of network fundamentalism. It claims that whatever exists in the world appears naturally in the form of a system, an ecology, an assemblage, in short, as a network.

    Ladies and gentlemen, behold the good news, postmodernism is definitively over! We have a new grand récit. As metanarrative, the network will guide us into a new Dark Age.

    If the first assumption expresses a positive dogma or creed, the second is more negative or nihilistic. The second assumption — that the best response to networks is more networks — is also evident in all manner of social and political life today. Eugene and I described this phenomenon at greater length in The Exploit, but consider a few different examples from contemporary debates… In military theory: network-centric warfare is the best response to terror networks. In Deleuzian philosophy: the rhizome is the best response to schizophrenic multiplicity. In autonomist Marxism: the multitude is the best response to empire. In the environmental movement: ecologies and systems are the best response to the systemic colonization of nature. In computer science: distributed architectures are the best response to bottlenecks in connectivity. In economics: heterogeneous “economies of scope” are the best response to the distributed nature of the “long tail.”

    To be sure, there are many sites today where networks still confront power centers. The point is not to deny the continuing existence of massified, centralized sovereignty. But at the same time it’s important to contextualize such confrontations within a larger ideological structure, one that inoculates the network form and recasts it as the exclusive site of liberation, deviation, political maturation, complex thinking, and indeed the very living of life itself.

    Why label this a pessimism? For the same reasons that queer theory and critical race theory are grappling with pessimism: Is alterity a death sentence? Is this as good as it gets? Is this all there is? Can we imagine a parallel universe different from this one? (Although the pro-pessimism camp would likely state it in the reverse: We must destabilize and annihilate all normative descriptions of the “good.” This world isn’t good, and hooray for that!)

    So what’s the problem? Why should we be concerned about network pessimism? Let me state clearly, so there’s no misunderstanding: pessimism isn’t the problem here. Likewise, networks are not the problem. (Let no one label me “anti-network” or “anti-pessimism” — in fact I’m not even sure what either of those positions would mean.) The issue, as I see it, is that network pessimism deploys and sustains a specific dogma, confining both networks and pessimism to a single, narrow ideological position. It’s this narrow-mindedness that should be questioned.

    Specifically, I can see three basic problems with network pessimism: the problem of presentism, the problem of ideology, and the problem of the event.

    The problem of presentism refers to the way in which networks and network thinking are, by design, allergic to historicization. This exhibits itself in a number of different ways. Networks arrive on the scene at the proverbial “end of history” (and they do so precisely because they help end this history). Ecological and systems-oriented thinking, while admittedly always temporal by nature, gained popularity as a kind of solution to the problems of diachrony. Space and landscape take the place of time and history. As Fredric Jameson has noted, the “spatial turn” of postmodernity goes hand in hand with a denigration of the “temporal moment” of previous intellectual movements.

    Fritz Kahn, “Der Mensch als Industriepalast (Man as Industrial Palace)” (Stuttgart, 1926). Image source: NIH

    From Hegel’s history to Luhmann’s systems. From Einstein’s general relativity to Riemann’s complex surfaces. From phenomenology to assemblage theory. From the “time image” of cinema to the “database image” of the internet. From the old mantra always historicize to the new mantra always connect.

    During the age of clockwork, the universe was thought to be a huge mechanism, with the heavens rotating according to the music of the spheres. When the steam engine was the source of newfound power, the world suddenly became a dynamo of untold thermodynamic force. After full-fledged industrialization, the body became a factory. Technologies and infrastructures are seductive metaphors. So it’s no surprise (and no coincidence) that today, in the age of the network, a new template imprints itself on everything in sight. In other words, the assumption “everything is a network” gradually falls apart into a kind of tautology of presentism. “Everything right now is a network…because everything right now has been already defined as a network.”

    This leads to the problem of ideology. Again we’re faced with an existential challenge, because network technologies were largely invented as a non-ideological or extra-ideological structure. When writing Protocol, I interviewed some of the computer scientists responsible for the basic internet protocols, and most of them reported that they “have no ideology” when designing networks, that they are merely interested in “code that works” and “systems that are efficient and robust.” In sociology and philosophy of science, figures like Bruno Latour routinely describe their work as “post-critical,” merely focused on the direct mechanisms of network organization. Hence ideology as a problem to be forgotten or subsumed: networks are specifically conceived and designed as things that are both non-ideological in their conception (we just want to “get things done”) and post-ideological in their architecture (in that they acknowledge and co-opt the very terms of previous ideological debates, things like heterogeneity, difference, agency, and subject formation).

    The problem of the event indicates a crisis for the very concept of events themselves. Here the work of Alain Badiou is invaluable. Network architectures are the perfect instantiation of what Badiou derisively labels “democratic materialism,” that is, a world in which there are “only bodies and languages.” In Badiou’s terms, if networks are the natural state of the situation and there is no way to deviate from nature, then there is no event, and hence no possibility for truth. Networks appear, then, as the consummate “being without event.”

    What could be worse? If networks are designed to accommodate massive levels of contingency — as with the famous Robustness Principle — then they are also exceptionally adept at warding off “uncontrollable” change wherever it might arise. If everything is a network, then there’s no escape, there’s no possibility for the event.
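    (For readers who do not know the engineering reference: the Robustness Principle, often called Postel’s law, instructs implementers to be conservative in what they send and liberal in what they accept. The minimal Python sketch below – an illustration of the general pattern, with an invented message format rather than any real protocol – shows how a liberal receiver quietly normalizes malformed input instead of rejecting it, which is one concrete sense in which network architectures absorb contingency rather than registering it as a breakdown.)

```python
# Minimal illustration of the Robustness Principle ("Postel's law"):
# be conservative in what you send, liberal in what you accept.
# The "KEY: value" message format here is invented for the example.

def send(fields: dict[str, str]) -> str:
    """Conservative sender: emits strictly canonical output."""
    return "\r\n".join(f"{k.upper()}: {v.strip()}" for k, v in sorted(fields.items()))

def receive(raw: str) -> dict[str, str]:
    """Liberal receiver: tolerates bad line endings, stray spaces, odd casing."""
    fields = {}
    for line in raw.replace("\r\n", "\n").replace("\r", "\n").split("\n"):
        if ":" not in line:
            continue  # quietly absorb lines that deviate from the format
        key, _, value = line.partition(":")
        fields[key.strip().upper()] = value.strip()
    return fields

if __name__ == "__main__":
    print(send({"Host": "example.org"}))  # HOST: example.org
    messy = "host :  example.org \nAccept:text/html\r\njunk line without colon"
    print(receive(messy))  # {'HOST': 'example.org', 'ACCEPT': 'text/html'}
```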

    Jameson writes as much in The Seeds of Time when he says that it is easier to imagine the end of the earth and the end of nature than it is to imagine the ends of capitalism. Network pessimism, in other words, is really a kind of network defeatism in that it makes networks the alpha and omega of our world. It’s easier to imagine the end of that world than it is to discard the network metaphor and imagine a kind of non-world in which networks are no longer dominant.

    In sum, we shouldn’t give in to network pessimism. We shouldn’t subscribe to the strong claim that everything is a network. (Nor should we subscribe to the softer claim, that networks are merely the most common, popular, or natural architecture for today’s world.) Further, we shouldn’t think that networks are the best response to networks. Instead we must ask the hard questions. What is the political fate of networks? Did heterogeneity and systematicity survive the Twentieth Century? If so, at what cost? What would a non-net look like? And does thinking have a future without the network as guide?

    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2006), Gaming: Essays in Algorithmic Culture (University of Minnesota, 2006), The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.

    Back to the essay
    _____

    Notes

    [1] Anna Conlan, “Representing Possibility: Mourning, Memorial, and Queer Museology,” in Gender, Sexuality and Museums, ed. Amy K. Levin (London: Routledge, 2010), 253-263.

  • Flat Theory

    Flat Theory

    By David M. Berry
    ~

    The world is flat.[1] Or perhaps better, the world is increasingly “layers.” Certainly the augmediated imaginaries of the major technology companies are now structured around a post-retina vision of mediation made possible and informed by the digital transformations ushered in by mobile technologies – whether smartphones, wearables, beacons or nearables – an internet of places and things. These imaginaries provide a sense of place, as well as a means of managing the complex real-time streams of information and data broken into shards and fragments of narrative, visual culture, social media and messaging. Turned into software, they reorder and re-present information, decisions and judgment, amplifying the sense and senses of (neoliberal) individuality whilst reconfiguring what it means to be a node in the network of post-digital capitalism. These new imaginaries serve as abstractions of abstractions, ideologies of ideologies, a prosthesis to create a sense of coherence and intelligibility in highly particulate computational capitalism (Berry 2014). To understand the experimentation of the programming industries in relation to this, it is useful to explore the design thinking and material abstractions that are becoming hegemonic at the level of the interface.

    Two new competing computational interface paradigms are now deployed in the latest versions of Apple’s and Google’s operating systems, but more notably they serve as regulatory structures to guide the design and strategy related to corporate policy. The first is “flat design,” which has been introduced by Apple through iOS 8 and OS X Yosemite as a refresh of the aging operating systems’ human computer interface guidelines, essentially stripping the operating system of historical baggage related to techniques of design that disguised the limitations of a previous generation of technology, in terms of both screen and processor capacity. It is important to note, however, that Apple avoids talking about “flat design” as its design methodology, preferring to talk through its platforms’ specificity, that is, about iOS’ design or OS X’s design. The second is “material design,” which was introduced by Google into its Android L, now Lollipop, operating system and which also sought to bring some sense of coherence to a multiplicity of Android devices, interfaces, OEMs and design strategies. More generally, “flat design” is “the term given to the style of design in which elements lose any type of stylistic characters that make them appear as though they lift off the page” (Turner 2014). As Apple argues, one should “reconsider visual indicators of physicality and realism” and think of the user interface as “play[ing] a supporting role” – that is, techniques of mediation through the user interface should aim to provide a new kind of computational realism that presents “content” as ontologically prior to, or separate from, its container in the interface (Apple 2014). This is in contrast to “rich design,” which has been described as “adding design ornaments such as bevels, reflections, drop shadows, and gradients” (Turner 2014).

    I want to explore these two main paradigms – and to a lesser extent the flat-design methodology represented in Windows Phone 7 and the, since renamed, Metro interface – through the notion of a comprehensive attempt by both Apple and Google to produce a rich and diverse umwelt, or ecology, linked through what Apple calls “aesthetic integrity” (Apple 2014). This is both a response to their growing landscape of devices, platforms, systems, apps and policies, and an attempt to provide some sense of operational strategy in relation to computational imaginaries. Essentially, both share an axiomatic approach to conceptualizing the building of a system of thought, in other words, a primitivist predisposition which draws from both a neo-Euclidian model of geons (for Apple) and a notion of intrinsic value or neo-materialist formulations of essential characteristics (for Google). That is, they encapsulate a version of what I am calling here flat theory. Both of these companies are trying to deal with the problematic of multiplicities in computation, and the requirement that multiple data streams, notifications and practices have to be combined and managed within the limited geography of the screen. In other words, both approaches attempt to create what we might call aggregate interfaces by combining techniques of layout, montage and collage onto computational surfaces (Berry 2014: 70).

    The “flat turn” has not happened in a vacuum, however; it is the result of a new generation of computational hardware, smart silicon design and retina screen technologies. This was driven in large part by the mobile device revolution, which has transformed not only the taken-for-granted assumptions of historical computer interface design paradigms (e.g. WIMP) but also the subject position of the user, particularly as structured through the Xerox/Apple notion of single-click functional design of the interface. Indeed, one of the striking features of the new paradigm of flat design is that it is a design philosophy about multiplicity and multi-event. The flat turn is therefore about modulation, not about enclosure as such; indeed it is a truly processual form that constantly shifts and changes, and in many ways acts as a signpost for the future interfaces of real-time algorithmic and adaptive surfaces and experiences. The structure of control for the flat design interfaces follows that of the control society: it is “short-term and [with] rapid rates of turnover, but also continuous and without limit” (Deleuze 1992). To paraphrase Deleuze: Humans are no longer in enclosures, certainly, but everywhere humans are in layers.


    Apple uses a series of concepts to link its notion of flat design, which include aesthetic integrity, consistency, direct manipulation, feedback, metaphors, and user control (Apple 2014). The haptic experience of this new flat user interface has been described as building on the experience of “touching glass” to develop the “first post-Retina (Display) UI (user interface)” (Cava 2013). This is the notion of layered transparency, or better, layers of glass upon which the interface elements are painted through a logical internal structure of Z-axis layers. This laminate structure enables meaning to be conveyed through the organization of the Z-axis, both in terms of content and in terms of placing content within a process or within the user interface system itself.

    Google, similarly, has reorganized its computational imaginary around a flattened, layered paradigm of representation through the notion of material design. Matias Duarte, Google’s Vice President of Design and a Chilean computer interface designer, declared that this approach uses the notion that it “is a sufficiently advanced form of paper as to be indistinguishable from magic” (Bohn 2014). But this is magic with constraints and affordances built into it: “if there were no constraints, it’s not design — it’s art,” Google claims (see Interactive Material Design) (Bohn 2014). Indeed, Google describes the “material metaphor” as “the unifying theory of a rationalized space and a system of motion,” arguing further:

    The fundamentals of light, surface, and movement are key to conveying how objects move, interact, and exist in space and in relation to each other. Realistic lighting shows seams, divides space, and indicates moving parts… Motion respects and reinforces the user as the prime mover… [and together] They create hierarchy, meaning, and focus (Google 2014).

    This notion of materiality is a weird materiality inasmuch as Google “steadfastly refuse to name the new fictional material, a decision that simultaneously gives them more flexibility and adds a level of metaphysical mysticism to the substance. That’s also important because while this material follows some physical rules, it doesn’t create the ‘trap’ of skeuomorphism. The material isn’t a one-to-one imitation of physical paper, but instead it’s ‘magical’” (Bohn 2014). Google emphasises this connection, arguing that “in material design, every pixel drawn by an application resides on a sheet of paper. Paper has a flat background color and can be sized to serve a variety of purposes. A typical layout is composed of multiple sheets of paper” (Google Layout, 2014). The stress on material affordances – paper for Google and glass for Apple – is crucial to understanding their respective stances in relation to flat design philosophy.[2]

    • Glass (Apple): Translucency, transparency, opaqueness, limpidity and pellucidity.
    • Paper (Google): Opaque, cards, slides, surfaces, tangibility, texture, lighted, casting shadows.
    Paradigmatic Substances for Materiality

    In contrast to the layers of glass that inform the logics of transparency, opaqueness and translucency of Apple’s flat design, Google uses the notion of remediated “paper” as a digital material; that is, this “material environment is a 3D space, which means all objects have x, y, and z dimensions. The z-axis is perpendicularly aligned to the plane of the display, with the positive z-axis extending towards the viewer. Every sheet of material occupies a single position along the z-axis and has a standard 1dp thickness” (Google 2014). One might then think of Apple as painting on layers of glass, and of Google as placing thin paper objects (material) upon background paper. A key difference, however, lies in Google’s use of light and shadow, which enables the light source, located in a position similar to that of the user of the interface, to cast shadows of the material objects onto the objects and sheets of paper that lie beneath them (see Jitkoff 2014). Nonetheless, a laminate structure is key to the representational grammar that constitutes both of these platforms.
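    To make the laminate grammar concrete, the following minimal Python sketch – purely illustrative, and not Apple’s or Google’s actual rendering code – models an interface as a stack of sheets ordered along a z-axis, painted back to front, with (on the “paper” reading) a shadow whose depth depends on the elevation difference between sheets. The class, names and numbers are invented for the example.

```python
# Illustrative model of a "laminate" interface: sheets stacked along a z-axis.
# The classes, values and shadow rule are invented for the example; this is
# not Apple's or Google's actual rendering model.
from dataclasses import dataclass

@dataclass
class Sheet:
    name: str
    z: int          # elevation along the z-axis, in abstract "dp"-like units
    opacity: float  # 1.0 = opaque paper (Google); < 1.0 = translucent glass (Apple)

def paint_order(sheets: list[Sheet]) -> list[str]:
    """Sheets are painted back to front, lowest z first."""
    return [s.name for s in sorted(sheets, key=lambda s: s.z)]

def shadow_depth(upper: Sheet, lower: Sheet) -> int:
    """Crude stand-in for the material-design rule that shadow depth
    reflects the elevation difference between two sheets."""
    return max(0, upper.z - lower.z)

if __name__ == "__main__":
    background = Sheet("background", z=0, opacity=1.0)
    card = Sheet("card", z=2, opacity=1.0)             # paper-like, casts a shadow
    overlay = Sheet("notification", z=8, opacity=0.6)  # glass-like, translucent
    print(paint_order([overlay, background, card]))  # ['background', 'card', 'notification']
    print(shadow_depth(card, background))            # 2
    print(shadow_depth(overlay, card))               # 6
```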

    Armin Hofmann, head of the graphic design department at the Schule für Gestaltung Basel (Basel School of Design), was instrumental in developing the graphic design style known as the Swiss Style. Designs from 1958 and 1959.

    Interestingly, both design strategies emerge from an engagement with, and reconfiguration of, the design principles of the Swiss style (sometimes called the International Typographic Style) (Ashghar 2014, Turner 2014).[3] This approach emerged in the 1940s, and

    mainly focused on the use of grids, sans-serif typography, and clean hierarchy of content and layout. During the 40’s and 50’s, Swiss design often included a combination of a very large photograph with simple and minimal typography (Turner 2014).

    The design grammar of the Swiss style has been combined with minimalism and the principle of “responsive design” – that is, the idea that the interface should be responsive to the materiality and specificity of the device and to the context being displayed. Minimalism is a “term used in the 20th century, in particular from the 1960s, to describe a style characterized by an impersonal austerity, plain geometric configurations and industrially processed materials” (MoMA 2014).

    Robert Morris: Untitled (Scatter Piece), 1968-69, felt, steel, lead, zinc, copper, aluminum, brass, dimensions variable; at Leo Castelli Gallery, New York. Photo Genevieve Hanson. All works © 2010 Robert Morris/Artists Rights Society (ARS), New York.

    Robert Morris, one of the principal artists of Minimalism and author of the influential Notes on Sculpture, used “simple, regular and irregular polyhedrons. Influenced by theories in psychology and phenomenology” which, he argued, “established in the mind of the beholder ‘strong gestalt sensation’, whereby form and shape could be grasped intuitively” (MoMA 2014).[4]

    The implications of these two competing world-views are far-reaching, in that much of the world’s initial contact, or touch points, with data services, real-time streams and computational power is increasingly through the platforms controlled by these two companies. They are also deeply influential across the programming industries, and we see alternatives and multiple reconfigurations in relation to the challenge raised by the “flattened” design paradigms. That is, they both represent, if only in potentia, a power relation and, through this, an ideological veneer on computation more generally. Further, with the proliferation of computational devices – and the screenic imaginary associated with them in the contemporary computational condition – there appears a new logic which lies behind, justifies and legitimates these design methodologies.

    It seems to me that these new flat design philosophies, in the broad sense, produce an order in precepts and concepts that gives meaning and purpose not only to interactions with computational platforms but also, more widely, to everyday life. Flat design and material design are competing philosophies that offer alternative patterns of both creation and interpretation, with implications not only for interface design but more broadly for the ordering of concepts and ideas, and for the practices and experience of computational technologies broadly conceived. Another way to put this could be to think about these moves as a computational founding: the generation of, or argument for, an axial framework for building, reconfiguration and preservation.

    Indeed, flat design provides, and more importantly serves as, a translational or metaphorical heuristic for re-presenting the computational, but it also teaches consumers and users how to use and manipulate new complex computational systems and stacks. In other words, in a striking visual technique flat design communicates the vertical structure of the computational stack, on which the Stack corporations are themselves constituted. It also begins to move beyond the specificity of the device as the privileged site of a computational interface interaction from beginning to end. Interface techniques are abstracted away from the specificity of the device, for example through Apple’s “handoff” continuity framework, which also potentially changes reading and writing practices in interesting ways and creates new use-cases for wearables and nearables.

    These new interface paradigms, introduced by the flat turn, open very interesting possibilities for interface criticism, through unpacking and exploring the major trends and practices of the Stacks, that is, the major technology companies. Further, I think that the notion of layers is instrumental in mediating the experience of an increasingly algorithmic society (think of dashboards, personal information systems, the quantified self, and so on), and as such provides an interpretative frame for a world of computational patterns as well as a constituting grammar for building these systems in the first place. The notion of the postdigital may also be a useful way into thinking about the link between art, computation and design raised here (see Berry and Dieter, forthcoming), as may the importance of notions of materiality for the conceptualizations deployed by designers working within both the flat design and material design paradigms – whether of paper, glass, or some other “material” substance.[5]
    _____

    David M. Berry is Reader in the School of Media, Film and Music at the University of Sussex. He writes widely on computation and the digital and blogs at Stunlaw. He is the author of Critical Theory and the Digital, The Philosophy of Software: Code and Mediation in the Digital Age, Copy, Rip, Burn: The Politics of Copyleft and Open Source, editor of Understanding Digital Humanities and co-editor of Postdigital Aesthetics: Art, Computation And Design. He is also a Director of the Sussex Humanities Lab.

    Back to the essay
    _____

    Notes

    [1] Many thanks to Michael Dieter and Søren Pold for the discussion which inspired this post.

    [2] The choice of paper and glass as the founding metaphors for the flat design philosophies of Google and Apple raise interesting questions for the way in which these companies articulate the remediation of other media forms, such as books, magazines, newspapers, music, television and film, etc. Indeed, the very idea of “publication” and the material carrier for the notion of publication is informed by the materiality, even if only a notional affordance given by this conceptualization. It would be interesting to see how the book is remediated through each of the design philosophies that inform both companies, for example.

    [3] One is struck by the posters produced in the Swiss style which date to the 1950s and 60s but which today remind one of the mobile device screens of the 21st Century.

    [4] There are also some interesting links to be explored between flat theory and Superflat, the postmodern art movement founded by the artist Takashi Murakami, which is influenced by manga and anime, both in terms of the aesthetic and in relation to the cultural moment in which “flatness” is linked to “shallow emptiness.”

    [5] There is some interesting work to be done in thinking about the non-visual aspects of flat theory, such as the increasing use of APIs (for example, RESTful APIs), but also sound interfaces that use “flat” sound to indicate spatiality in terms of interface or interaction design. There are also interesting implications for the design thinking implicit in the Apple Watch, and the Virtual Reality and Augmented Reality platforms of Oculus Rift, Microsoft HoloLens, Meta and Magic Leap.

    Bibliography
  • What Drives Automation?

    What Drives Automation?

    a review of Nicholas Carr, The Glass Cage: Automation and Us (W.W. Norton, 2014)
    by Mike Bulajewski
    ~

    Debates about digital technology are often presented in terms of stark polar opposites: on one side, cyber-utopians who champion the new and the cutting edge, and on the other, cyber-skeptics who hold on to obsolete technology. The framing is one-dimensional in the general sense that it is superficial, but also in a more precise and mathematical sense that it implicitly treats the development of technology as linear. Relative to the present, there are only two possible positions and two possible directions to move; one can be either for or against, ahead or behind.[1]

    Although often invoked as a prelude to transcending the division and offering a balanced assessment, in describing the dispute in these pro or con terms one has already betrayed one’s orientation, tilting the field against the critical voice by assigning it an untenable position. Criticizing a new technology is misconstrued as a simple defense of the old technology or of no technology, which turns legitimate criticism into mere conservative fustiness, a refusal to adapt and a failure to accept change.

    Few critics of technology match these descriptions, and those who do, like the anarcho-primitivists who claim to be horrified by contemporary technology, nonetheless accede to the basic framework set by technological apologists. The two sides disagree only on the preferred direction of travel, making this brand of criticism more pro-technology than it first appears. One should not forget that the high-tech futurism of Silicon Valley is supplemented by the balancing counterweight of countercultural primitivism, with Burning Man expeditions, technology-free Waldorf schools for children of tech workers, spouses who embrace premodern New Age beliefs, romantic agrarianism, and restorative digital detox retreats featuring yoga and meditation. The diametric opposition between pro- and anti-technology is internal to the technology industry, perhaps a symptom of the repression of genuine debate about the merits of its products.

    ***

    Nicholas Carr’s most recent book, The Glass Cage: Automation and Us, is a critique of the use of automation and a warning of its human consequences, but to conclude, as some reviewers have, that he is against automation or against technology as such is to fall prey to this one-dimensional fallacy.[2]

    The book considers the use of automation in areas like medicine, architecture, finance, manufacturing and law, but it begins with an example that’s closer to home for most of us: driving a car. Transportation and wayfinding are minor themes throughout the book, and with Google and large automobile manufacturers promising to put self-driving cars on the street within a decade, the impact of automation in this area may soon be felt in our daily lives like never before. Early in the book, we are introduced to problems that human factors engineers working with airline autopilot systems have discovered, problems that may be forewarnings of a future of unchecked automation in transportation.

    Carr discusses automation bias—the tendency for operators to assume the system is correct and external signals that contradict it are wrong—and the closely related problem of automation complacency, which occurs when operators assume the system is infallible and so abandon their supervisory role. These problems have been linked to major air disasters and are behind less-catastrophic events like oblivious drivers blindly following their navigation systems into nearby lakes or down flights of stairs.

    The chapter dedicated to deskilling is certain to raise the ire of skeptical readers because it begins with an account of the negative impact of GPS technology on Inuit hunters who live in the remote northern reaches of Canada. As GPS devices proliferated, hunters lost what a tribal elder describes as “the wisdom and knowledge of the Inuit”: premodern wayfinding methods that rely on natural phenomena like wind, stars, tides and snowdrifts to navigate. Inuit wayfinding skills are truly impressive. The anthropologist Claudio Aporta reports traveling with a hunter across twenty square kilometers of flat, featureless land as the hunter located seven fox traps that he had never seen before, set by his uncle twenty-five years prior. These talents have been eroded as Inuit hunters have adopted GPS devices that seem to do the job equally well, but have the unexpected side effect of increasing injuries and deaths as hunters succumb to equipment malfunctions and the twin perils of automation complacency and bias.

    Laboring under the misconceptions of the one-dimensional fallacy, it would be natural to take this as a smoking gun of Carr’s alleged anti-technology perspective and privileging of the premodern, but the closing sentences of the chapter point us away from that conclusion:

    We ignore the ways that software programs and automated systems might be reconfigured so as not to weaken our grasp on the world but to strengthen it. For, as human factors researchers and other experts on automation have found, there are ways to break the glass cage without losing the many benefits computers grant us. (151)

    These words segue into the following chapter, where Carr identifies the dominant philosophy behind automation technologies that inadvertently produce the problems he identified earlier: technology-centered automation. This approach to design is distrustful of humans, perhaps even misanthropic. It views us as weak, inefficient, unreliable and error-prone, and seeks to minimize our involvement in the work to be done. It institutes a division of labor between human and machine that gives the bulk of the work over to the machine, only seeking human input in anomalous situations. This philosophy is behind modern autopilot systems that hand off control to human pilots for only a few minutes in a flight.

    The fundamental argument of the book is that this design philosophy can lead to undesirable consequences. Carr seeks an alternative he calls human-centered automation, an approach that ensures the human operator remains engaged and alert. Autopilot systems designed with this philosophy might return manual control to the pilots at irregular intervals to ensure they remain vigilant and practice their flying skills. They could provide tactile feedback of their operations so that pilots are involved in a visceral way rather than passively monitoring screens. Decision support systems like those used in healthcare could take the secondary role of reviewing and critiquing a decision made by a doctor, rather than the other way around.
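    To illustrate the contrast Carr draws, here is a minimal, hypothetical Python sketch of the two divisions of labor for a clinical decision-support tool: a technology-centered flow in which the system decides and the clinician merely confirms, and a human-centered flow in which the clinician decides first and the system takes the secondary role of reviewer and critic. The function names and diagnostic rules are invented for illustration and are not drawn from Carr’s book.

```python
# Hypothetical contrast between technology-centered and human-centered
# decision support, following Carr's distinction; the diagnostic "rules"
# here are invented toys, not anything from the book or a real system.

def system_recommendation(symptoms: set[str]) -> str:
    """Toy rule base standing in for the automated component."""
    if {"fever", "cough"} <= symptoms:
        return "influenza"
    return "common cold"

def technology_centered(symptoms: set[str], doctor_confirms) -> str:
    # The machine decides first; the human is reduced to a confirming click.
    decision = system_recommendation(symptoms)
    return decision if doctor_confirms(decision) else "escalate for review"

def human_centered(symptoms: set[str], doctor_diagnosis: str) -> dict:
    # The human decides first; the machine takes the secondary role of critic.
    second_opinion = system_recommendation(symptoms)
    critique = (None if doctor_diagnosis == second_opinion
                else f"system would have suggested: {second_opinion}")
    return {"diagnosis": doctor_diagnosis, "critique": critique}

if __name__ == "__main__":
    symptoms = {"fever", "cough"}
    print(technology_centered(symptoms, doctor_confirms=lambda d: True))
    print(human_centered(symptoms, doctor_diagnosis="bronchitis"))
```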

    The Glass Cage calls for a fundamental shift in how we understand error. Under the current regime, an error is an inefficiency or an inconvenience, to be avoided at all costs. As defined by Carr, a human-centered approach to design treats error differently, viewing it as an opportunity for learning. He illustrates this with a personal experience of repeatedly failing a difficult mission in the video game Red Dead Redemption, and points to the satisfaction of finally winning a difficult game as an example of what is lost when technology is designed to be too easy. He offers video games as a model for the kinds of technologies he would like to see: tools that engage us in difficult challenges, that encourage us to develop expertise and to experience flow states.

    But Carr has an idiosyncratic definition of human-centered design, which becomes apparent when he counterposes his position against the prominent design consultant Peter Merholz. Echoing premises almost universally adopted by designers, Merholz calls for simple, frictionless interfaces and devices that don’t require a great deal of skill, memorization or effort to operate. Carr objects that this eliminates learning, skill building and mental engagement—perhaps a valid criticism, but it’s strange to suggest that this reflects a misanthropic technology-centered approach.

    A frequently invoked maxim of human-centered design is that technology should adapt to people, rather than people adapting to technology. In practice, the primary consideration is helping the user achieve his or her goal as efficiently and effectively as possible, removing unnecessary obstacles and delays that stand in the way. Carr argues for the value of challenges, difficulties and demands placed on users to learn and hone skills, all of which fall under the prohibited category of people adapting to technology.

    In his example of playing Red Dead Redemption, Carr prizes the repeated failure and frustration before finally succeeding at the game. Through the lens of human-centered design, that kind of experience is seen as a very serious problem that should be eliminated quickly, which is probably why this kind of design is rarely employed at game studios. In fact, it doesn’t really make sense to think of a game player as having a goal, at least not from the traditional human-centered standpoint. The driver of a car has a goal: to get from point A to point B; a Facebook user wants to share pictures with friends; the user of a word processor wants to write a document; and so on. As designers, we want to make these tasks easy, efficient and frictionless. The most obvious way of framing game play is to say that the player’s goal is to complete the game. We would then proceed to remove all obstacles, frustrations, challenges and opportunities for error that stand in the way so that they may accomplish this goal more efficiently, and then there would be nothing left for them to do. We would have ruined the game.

    This is not necessarily the result of a misanthropic preference for technology over humanity, though it may be. It is also the likely outcome of a perfectly sincere and humanistic belief that we shouldn’t inconvenience the user with difficulties that stand in the way of their goal. As human factors researcher David Woods puts it, “The road to technology-centered systems is paved with human-centered intentions,”[3] a phrasing which suggests that these two philosophies aren’t quite as distinct as Carr would have us believe.

    Carr’s vision of human-centered design differs markedly from contemporary design practice, which stresses convenience, simplicity, efficiency for the user and ease of use. In calling for less simplicity and convenience, he is in effect critical of really existing human-centeredness, and that troubles any reading of The Glass Cage that views it as a book about restoring our humanity in a world driven mad by machines.

    It might be better described as a book about restoring one conception of humanity in a world driven mad by another. It is possible to argue that the difference between the two appears in psychoanalytic theory as the difference between drive and desire. The user engages with a technology in order to achieve a goal because they perceive themselves as lacking something. Through the use of this tool, they believe they can regain it and fill in this lack. It follows that designers ought to help the user achieve their goal—to reach their desire—as quickly and efficiently as possible because this will satisfy them and make them happy.

    But the insight of psychoanalysis is that lack is ontological and irreducible, it cannot be filled in any permanent way because any concrete lack we experience is in fact metonymic for a constitutive lack of being. As a result, as desiring subjects we are caught in an endless loop of seeking out that object of desire, feeling disappointed when we find it because it didn’t fulfill our fantasies and then finding a new object to chase. The alternative is to shift from desire to drive, turning this failure into a triumph. Slavoj Žižek describes drive as follows: “the very failure to reach its goal, the repetition of this failure, the endless circulation around the object, generates a satisfaction of its own.”[4]

    This satisfaction is perhaps what Carr aims at when he celebrates the frustrations and challenges of video games and of work in general. That video games can’t be made more efficient without ruining them indicates that what players really want is for their goal to be thwarted, evoking the psychoanalytic maxim that summarizes the difference between desire and drive: from the missing/lost object, to loss itself as an object. This point is by no means tangential. Early on, Carr introduces the concept of miswanting, defined as the tendency to desire what we don’t really want and won’t make us happy—in this case, leisure and ease over work and challenge. Psychoanalysts holds that all human wanting (within the register of desire) is miswanting. Through fantasy, we imagine an illusory fullness or completeness of which actual experience always falls short.[5]

    Carr’s revaluation of challenge, effort and, ultimately, dissatisfaction cannot represent a correction of the error of miswanting – of rediscovering the true source of pleasure and happiness in work. Instead, it radicalizes the error: we should learn to derive a kind of satisfaction from our failure to enjoy. Or, in the final chapter, as Carr says of the farmer in Robert Frost’s poem Mowing, who is hard at work and yet far from the demands of productivity: “He’s not seeking some greater truth beyond the work. The work is the truth.”

    ***

    Nicholas Carr has a track record of provoking designers to rethink their assumptions. With The Shallows, Carr – along with other authors making related arguments – influenced software developers to create a new class of tools that cut off the internet, eliminate notifications or block social media web sites to help us concentrate. Starting with OS X Lion in 2011, Apple began offering a full screen mode that hides distracting interface elements and background windows from inactive applications.

    What transformative effects could The Glass Cage have on the way software is designed? The book certainly offers compelling reasons to question whether ease of use should always be paramount. Advocates for simplicity are rarely challenged, but they may now find themselves facing unexpected objections. Software could become more challenging and difficult to use—not in the sense of a recalcitrant WiFi router that emits incomprehensible error codes, but more like a game. Designers might draw inspiration from video games, perhaps looking to classics like the first level of Super Mario Brothers, a masterpiece of level design that teaches the fundamental rules of the game without ever requiring the player to read the manual or step through a tutorial.

    Everywhere that automation now reigns, new possibilities announce themselves. A spell checker might stop to teach spelling rules, or make a game out of letting the user take a shot at correcting mistakes it has detected. What if there were a GPS navigation device that enhanced our sense of spatial awareness rather than eroding it, that engaged our attention on the road rather than letting us tune out? Could we build an app that helps drivers maintain their skills by challenging them to adopt safer and more fuel-efficient driving techniques?
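    As a sketch of what the spell-checker idea might look like in practice – hypothetical Python using a toy word list rather than a real spell-checking library – the tool below flags a likely mistake and gives the writer a chance to correct it before falling back on its own suggestion.

```python
# Hypothetical "spell checker as learning game": flag a likely error, let the
# writer attempt a correction, and only then fall back on the tool's suggestion.
# The word list and interaction are toy stand-ins, not a real spelling library.
import difflib

DICTIONARY = {"separate", "definitely", "necessary", "their", "there"}

def suggest(word: str) -> str | None:
    matches = difflib.get_close_matches(word.lower(), sorted(DICTIONARY), n=1)
    return matches[0] if matches else None

def check(word: str, ask) -> str:
    """ask(prompt) is any callable that returns the writer's attempted fix."""
    if word.lower() in DICTIONARY:
        return word
    suggestion = suggest(word)
    attempt = ask(f"'{word}' looks misspelled - your correction? ")
    if suggestion and attempt.lower() == suggestion:
        return attempt              # the writer earned the fix themselves
    return suggestion or word       # otherwise fall back to the suggestion

if __name__ == "__main__":
    # Simulate a writer who gets the first word right and the second wrong.
    answers = iter(["separate", "definately"])
    fixed = [check(w, ask=lambda _: next(answers)) for w in ["seperate", "definitly"]]
    print(fixed)  # ['separate', 'definitely']
```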

    Carr points out that the preference for easy-to-use technologies that reduce users’ engagement is partly a consequence of economic interests and cost reduction policies that profit from the deskilling and reduction of the workforce, and these aren’t dislodged simply by pressing for new design philosophies. But to his credit, Carr has written two best-selling books aimed at the general interest reader on the fairly obscure topic of human-computer interaction. User experience designers working in the technology industry often face an uphill battle in trying to build human-centered products (however that is defined). When these matters attract public attention and debate, it makes their job a little easier.

    _____

    Mike Bulajewski (@mrteacup) is a user experience designer with a Master’s degree from University of Washington’s Human Centered Design and Engineering program. He writes about technology, psychoanalysis, philosophy, design, ideology & Slavoj Žižek at MrTeacup.org. He has previously written about the Spike Jonze film Her for The b2 Review Digital Studies section.

    Back to the essay

    _____

    [1] Differences between individual technologies are ignored and replaced by the monolithic master category of Technology. Jonah Lehrer’s review of Nicholas Carr’s 2010 book The Shallows in the New York Times exemplifies such thinking. Lehrer finds contradictory evidence against Carr’s argument that the internet is weakening our mental faculties in scientific studies that attribute cognitive improvements to playing video games, a non sequitur which gains meaning only by subsuming these two very different technologies under a single general category of Technology. Evgeny Morozov is one of the sharpest critics of this tendency. Here one is reminded of his retort in his article “Ghosts in the Machine” (2013): “That dentistry has been beneficial to civilization tells us very little about the wonders of data-mining.”

    [2] There are a range of possible causes for this constrictive linear geometry: a tendency to see a progressive narrative of history; a consumerist notion of agency which only allows shoppers to either upgrade or stick with what they have; or the oft-cited binary logic of digital technology. One may speculate about the influence of the popular technology marketing book by Geoffrey A. Moore, Crossing the Chasm (2014) whose titular chasm is the gap between the elite group of innovators and early adopters—the avant-garde—and the recalcitrant masses bringing up the rear who must be persuaded to sign on to their vision.

    [3] David D. Woods and David Tinapple, “W3: Watching Human Factors Watch People at Work,” Proceedings of the 43rd Annual Meeting of the Human Factors and Ergonomics Society (1999).

    [4] Slavoj Žižek, The Parallax View (2006), 63.

    [5] The cultural and political implications of this shift are explored at length in Todd McGowan’s two books The End of Dissatisfaction: Jacques Lacan and the Emerging Society of Enjoyment (2003) and Enjoying What We Don’t Have: The Political Project of Psychoanalysis (2013).

  • Frank Pasquale — To Replace or Respect: Futurology as if People Mattered

    Frank Pasquale — To Replace or Respect: Futurology as if People Mattered

    a review of Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (W.W. Norton, 2014)

    by Frank Pasquale

    ~

    Business futurism is a grim discipline. Workers must either adapt to the new economic realities, or be replaced by software. There is a “race between education and technology,” as two of Harvard’s most liberal economists insist. Managers should replace labor with machines that require neither breaks nor sick leave. Superstar talents can win outsize rewards in the new digital economy, as they now enjoy global reach, but they will replace thousands or millions of also-rans. Whatever can be automated, will be, as competitive pressures make fairly paid labor a luxury.

    Thankfully, Erik Brynjolfsson and Andrew McAfee’s The Second Machine Age (2MA) downplays these zero-sum tropes. Brynjolfsson & McAfee (B&M) argue that the question of distribution of the gains from automation is just as important as the competitions for dominance it accelerates. 2MA invites readers to consider how societies will decide what type of bounty from automation they want, and what is wanted first. The standard, supposedly neutral economic response (“whatever the people demand, via consumer sovereignty”) is unconvincing. As inequality accelerates, the top 5% (of income earners) do 35% of the consumption. The top 1% is responsible for an even more disproportionate share of investment. Its richest members can just as easily decide to accelerate the automation of the wealth defense industry as they can allocate money to robotic construction, transportation, or mining.

    A humane agenda for automation would prioritize innovations that complement (jobs that ought to be) fulfilling vocations, and substitute machines for dangerous or degrading work. Robotic meat-cutters make sense; robot day care is something to be far more cautious about. Most importantly, retarding automation that controls, stigmatizes, and cheats innocent people, or sets up arms races with zero productive gains, should be a much bigger part of public discussions of the role of machines and software in ordering human affairs.

    2MA may set the stage for such a human-centered automation agenda. Its diagnosis of the problem of rapid automation (described in Part I below) is compelling. Its normative principles (II) are eclectic and often humane. But its policy vision (III) is not up to the challenge of channeling and sequencing automation. This review offers an alternative, while acknowledging the prescience and insight of B&M’s work.

    I. Automation’s Discontents

    For B&M, the acceleration of automation ranks with the development of agriculture, or the industrial revolution, as one of the “big stories” of human history (10-12). They offer an account of the “bounty and spread” to come from automation. “Bounty” refers to the increasing “volume, variety, and velocity” of any imaginable service or good, thanks to its digital reproduction or simulation (via, say, 3-D printing or robots). “Spread” is “ever-bigger differences among people in economic success” that they believe to be just as much an “economic consequence” of automation as bounty.[1]

    2MA briskly describes various human workers recently replaced by computers.  The poor souls who once penned corporate earnings reports for newspapers? Some are now replaced by Narrative Science, which seamlessly integrates new data into ready-made templates (35). Concierges should watch out for Siri (65). Forecasters of all kinds (weather, home sales, stock prices) are being shoved aside by the verdicts of “big data” (68). “Quirky,” a startup, raised $90 million by splitting the work of making products between a “crowd” that “votes on submissions, conducts research, suggest improvements, names and brands products, and drives sales” (87), and Quirky itself, which “handles engineering, manufacturing, and distribution.” 3D printing might even disintermediate firms like Quirky (36).

    In short, 2MA presents a kaleidoscope of automation realities and opportunities. B&M skillfully describe the many ways automation both increases the “size of the pie,” economically, and concentrates the resulting bounty among the talented, the lucky, and the ruthless. B&M emphasize that automation is creeping up the value chain, potentially substituting machines for workers paid better than the average.

    What’s missing from the book is the new wave of conflicts that would arise if those at the very top of the value chain (or, less charitably, the rent and tribute chain) were to be replaced by robots and algorithms. When BART workers went on strike, Silicon Valley worthies threatened to replace them with robots. But one could just as easily call for the venture capitalists to be replaced with algorithms. Indeed, one venture capital firm added an algorithm to its board in 2013. Travis Kalanick, the CEO of Uber, responded to a question on driver wage demands by bringing up the prospect of robotic drivers. But given Uber’s multiple legal and PR fails in 2014, a robot probably would have done a better job running the company than Kalanick.

    That’s not “crazy talk” of communistic visions along the lines of Marx’s “expropriate the expropriators,” or Chile’s failed Cybersyn.[2]  Thiel Fellow and computer programming prodigy Vitaly Bukherin has stated that automation of the top management functions at firms like Uber and AirBnB would be “trivially easy.”[3] Automating the automators may sound like a fantasy, but it is a natural outgrowth of mantras (e.g., “maximize shareholder value”) that are commonplaces among the corporate elite. To attract and retain the support of investors, a firm must obtain certain results, and the short-run paths to attaining them (such as cutting wages, or financial engineering) are increasingly narrow.  And in today’s investment environment of rampant short-termism, the short is often the only term there is.

    In the long run, a secure firm can tolerate experiments. Little wonder, then, that the largest firm at the cutting edge of automation—Google—has a secure near-monopoly in search advertising in numerous markets. As Peter Thiel points out in his recent Zero to One, today’s capitalism rewards the best monopolist, not the best competitor. Indeed, even the Department of Justice’s Antitrust Division appeared to agree with Thiel in its 1995 guidelines on antitrust enforcement in innovation markets. It viewed intellectual property as a good monopoly, the rightful reward to innovators for developing a uniquely effective process or product. And its partner in federal antitrust enforcement, the Federal Trade Commission, has been remarkably quiescent in response to emerging data monopolies.

    II. Propertizing Data

    For B&M, intellectual property—or, at least, the returns accruing to intellectual insight or labor—plays a critical role in legitimating inequalities arising out of advanced technologies. They argue that “in the future, ideas will be the real scarce inputs in the world—scarcer than both labor and capital—and the few who provide good ideas will reap huge rewards.”[4] But many of the leading examples of profitable automation are not “ideas” per se, or even particularly ingenious algorithms. They are brute force feats of pattern recognition: for example, Google studying past patterns of clicks to see which search results, and which ads, will delight and persuade each of its hundreds of millions of users. The critical advantage there is the data, not the skill in working with it.[5] Google will demur, but if they were really confident, they’d license the data to other firms, secure in the knowledge that others couldn’t best their algorithmic prowess. They don’t, because the data is their critical, self-reinforcing advantage. It is a commonplace in big data literatures to say that the more data one has, the more valuable any piece of it becomes—something Googlers would agree with, as long as antitrust authorities aren’t within earshot.

    As sensors become more powerful and ubiquitous, feats of automated service provision and manufacture become more easily imaginable.  The Baxter robot, for example, merely needs to have a trainer show it how to move in order to ape the trainer’s own job. (One is reminded of the stories of US workers flying to India to train their replacements how to do their job, back in the day when outsourcing was the threat du jour to U.S. living standards.)

    How to train a Baxter robot. Image source: Inc.

    From direct physical interaction with a robot, it is a short step to, say, holographic or data-driven programming. For example, a surveillance camera on a worker could, after a period of days, months, or years, potentially record every movement or statement of the worker, and replicate it in response to whatever stimuli led to those prior movements or statements.

    B&M appear to assume that such data will be owned by the corporations that monitor their own workers.  For example, McDonalds could train a camera on every cook and cashier, then download the contents into robotic replicas. But it’s just as easy to imagine a legal regime where, say, workers’ rights to the data describing their movements would be their property, and firms would need to negotiate to purchase the rights to it.  If dance movements can be copyrighted, so too can the sweeps and wipes of a janitor. Consider, too, that the extraordinary advances in translation accomplished by programs like Google Translate are in part based on translations by humans of United Nations’ documents released into the public domain.[6] Had the translators’ work not been covered by “work-made-for-hire” or similar doctrines, they might well have kept their copyrights, and shared in the bounty now enjoyed by Google.[7]

    Of course, the creativity of translation may be greater than that displayed by a janitor or cashier. Copyright purists might thus reason that the merger doctrine denies copyrightability to the one best way (or small suite of ways) of doing something, since the idea of the movement and its expression cannot be separated. Grant that, and one could still imagine privacy laws giving workers the right to negotiate over how, and how pervasively, they are watched. There are myriad legal regimes governing, in minute detail, how information flows and who has control over it.

    I do not mean to appropriate here Jaron Lanier’s ideas about micropayments, promising as they may be in areas like music or journalism. A CEO could find some critical mass of stockers or cooks or cashiers to mimic even if those at 99% of stores demanded royalties for the work (of) being watched. But the flexibility of legal regimes of credit, control, and compensation is under-recognized. Living in a world where employers can simply record everything their employees do, or Google can simply copy every website that fails to adopt “robots.txt” protection, is not inevitable. Indeed, according to renowned intellectual property scholar Oren Bracha, Google had to “stand copyright on its head” to win that default.[8]

    Thus B&M are wise to acknowledge the contestability of value in the contemporary economy.  For example, they build on the work of MIT economists Daron Acemoglu and David Autor to demonstrate that “skill biased technical change” is a misleading moniker for trends in wage levels.  The “tasks that machines can do better than humans” are not always “low-skill” ones (139). There is a fair amount of play in the joints in the sequencing of automation: sometimes highly skilled workers get replaced before those with a less complex and difficult-to-learn repertoire of abilities.  B&M also show that the bounty predictably achieved via automation could compensate the “losers” (of jobs or other functions in society) in the transition to a more fully computerized society. By seriously considering the possibility of a basic income (232), they evince a moral sensibility light years ahead of the “devil-take-the-hindmost” school of cyberlibertarianism.

    III. Proposals for Reform

    Unfortunately, some of B&M’s other ideas for addressing the possibility of mass unemployment in the wake of automation are less than convincing.  They praise platforms like Lyft for providing new opportunities for work (244), perhaps forgetting that, earlier in the book, they described the imminent arrival of the self-driving car (14-15). Of course, one can imagine decades of tiered driving, where the wealthy get self-driving cars first, and car-less masses turn to the scrambling drivers of Uber and Lyft to catch rides. But such a future seems more likely to end in a deflationary spiral than  sustainable growth and equitable distribution of purchasing power. Like the generation traumatized by the Great Depression, millions subjected to reverse auctions for their labor power, forced to price themselves ever lower to beat back the bids of the technologically unemployed, are not going to be in a mood to spend. Learned helplessness, retrenchment, and miserliness are just as likely a consequence as buoyant “re-skilling” and self-reinvention.

    Thus B&M’s optimism about what they call the “peer economy” of platform-arranged production is unconvincing.  A premier platform of digital labor matching—Amazon’s Mechanical Turk—has occasionally driven down the wage for “human intelligence tasks” to a penny each. Scholars like Trebor Scholz and Miriam Cherry have discussed the sociological and legal implications of platforms that try to disclaim all responsibility for labor law or other regulations. Lilly Irani’s important review of 2MA shows just how corrosive platform capitalism has become. “With workers hidden in the technology, programmers can treat [them] like bits of code and continue to think of themselves as builders, not managers,” she observes in a cutting aside on the self-image of many “maker” enthusiasts.

    The “sharing economy” is a glidepath to precarity, accelerating the same fate for labor in general as “music sharing services” sealed for most musicians. The lived experience of many “TaskRabbits,” which B&M boast about using to make charts for their book, cautions against reliance on disintermediation as a key to opportunity in the new digital economy. Sarah Kessler describes making $1.94 an hour labeling images for a researcher who put the task for bid on Mturk.  The median active TaskRabbit in her neighborhood made $120 a week; Kessler cleared $11 an hour on her best day.

    Resistance is building, and may create fairer terms online.  For example, Irani has helped develop a “Turkopticon” to help Turkers rate and rank employers on the site. Both Scholz and Mike Konczal have proposed worker cooperatives as feasible alternatives to Uber, offering drivers both a fairer share of revenues, and more say in their conditions of work. But for now, the peer economy, as organized by Silicon Valley and start-ups, is not an encouraging alternative to traditional employment. It may, in fact, be worse.

    Therefore, I hope B&M are serious when they say “Wild Ideas [are] Welcomed” (245), and mention the following:

    • Provide vouchers for basic necessities. . . .
    • Create a national mutual fund distributing the ownership of capital widely and perhaps inalienably, providing a dividend stream to all citizens and assuring the capital returns do not become too highly concentrated.
    • Depression-era Civilian Conservation Corps to clean up the environment, build infrastructure.

    Speaking of the non-automatable, we could add the Works Progress Administration (WPA) to the CCC suggestion above.  Revalue the arts properly, and the transition may even add to GDP.

    Moses Soyer, “Artists on WPA” (1935). Image source: Smithsonian American Art Museum

    Unfortunately, B&M distance themselves from the ideas, saying, “we include them not necessarily to endorse them, but instead to spur further thinking about what kinds of interventions will be necessary as machines continue to race ahead” (246).  That is problematic, on at least two levels.

    First, a sophisticated discussion of capital should be at the core of an account of automation,  not its periphery. The authors are right to call for greater investment in education, infrastructure, and basic services, but they need a more sophisticated account of how that is to be arranged in an era when capital is extraordinarily concentrated, its owners have power over the political process, and most show little to no interest in long-term investment in the skills and abilities of the 99%. Even the purchasing power of the vast majority of consumers is of little import to those who can live off lightly taxed capital gains.

    Second, assuming that “machines continue to race ahead” is a dodge, a refusal to name the responsible parties running the machines.  Someone is designing and purchasing algorithms and robots. Illah Reza Nourbaksh’s Robot Futures suggests another metaphor:

    Today most nonspecialists have little say in charting the role that robots will play in our lives.  We are simply watching a new version of Star Wars scripted by research and business interests in real time, except that this script will become our actual world. . . . Familiar devices will become more aware, more interactive and more proactive; and entirely new robot creatures will share our spaces, public and private, physical and digital. . . .Eventually, we will need to read what they write, we will have to interact with them to conduct our business transactions, and we will often mediate our friendships through them.  We will even compete with them in sports, at jobs, and in business. [9]

    Nourbaksh nudges us closer to the truth, focusing on the competitive angle. But the “we” he describes is also inaccurate. There is a group that will never have to “compete” with robots at jobs or in business—rentiers. Too many of them are narrowly focused on how quickly they can replace needy workers with undemanding machines.

    For the rest of us, another question concerning automation is more appropriate: how much of it can we be stuck with? A black-card-toting bigshot will get the white glove treatment from AmEx; the rest are shunted into automated phone trees. An algorithm determines the shifts of retail and restaurant workers, oblivious to their needs for rest, a living wage, or time with their families. Automated security guards, police, and prison guards are on the horizon. And for many of the “expelled,” the homines sacri, automation is a matter of life and death: drone technology can keep small planes on their tracks for hours, days, months—as long as it takes to execute orders.

    B&M focus on “brilliant technologies,” rather than the brutal or bumbling instances of automation.  It is fun to imagine a souped-up Roomba making the drudgery of housecleaning a thing of the past.  But domestic robots have been around since 2000, and the median wage-earner in the U.S. does not appear to be on a fast track to a Jetsons-style life of ease.[10] They are just as likely to be targeted by the algorithms of the everyday, as they are to be helped by them. Mysterious scoring systems routinely stigmatize persons, without them even knowing. They reflect the dark side of automation—and we are in the dark about them, given the protections that trade secrecy law affords their developers.

    IV. Conclusion

    Debates about robots and the workers “struggling to keep up” with them are becoming stereotyped and stale. There is the standard economic narrative of “skill-biased technical change,” which acts more as a tautological, post hoc, retrodictive, just-so story than a coherent explanation of how wages are actually shifting. There is cyberlibertarian cornucopianism, as Google’s Ray Kurzweil and Eric Schmidt promise there is nothing to fear from an automated future. There is dystopianism, whether intended as a self-preventing prophecy, or entertainment. Each side tends to talk past the other, taking for granted assumptions and values that its putative interlocutors reject out of hand.

    Set amidst this grim field, 2MA is a clear advance. B&M are attuned to possibilities for the near and far future, and write about each in accessible and insightful ways.  The authors of The Second Machine Age claim even more for it, billing it as a guide to epochal change in our economy. But it is better understood as the kind of “big idea” book that can name a social problem, underscore its magnitude, and still dodge the elaboration of solutions controversial enough to scare off celebrity blurbers.

    One of 2MA’s blurbers, Clayton Christensen, offers a backhanded compliment that exposes the core weakness of the book. “[L]earners and teachers alike are in a perpetual mode of catching up with what is possible. [The Second Machine Age] frames a future that is genuinely exciting!” gushes Christensen, eager to fold automation into his grand theory of disruption. Such a future may be exciting for someone like Christensen, a millionaire many times over who won’t lack for food, medical care, or housing if his forays fail. But most people do not want to be in “perpetually catching up” mode. They want secure and stable employment, a roof over their heads, decent health care and schooling, and some other accoutrements of middle class life. Meaning is found outside the economic sphere.

    Automation could help stabilize and cheapen the supply of necessities, giving more persons the time and space to enjoy pursuits of their own choosing. Or it could accelerate arms races of various kinds: for money, political power, armaments, spying, stock trading. As long as purchasing power alone—whether of persons or corporations—drives the scope and pace of automation, there is little hope that the “brilliant technologies” B&M describe will reliably lighten burdens that the average person experiences. They may just as easily entrench already great divides.

    All too often, the automation literature is focused on replacing humans, rather than respecting their hopes, duties, and aspirations. A central task of educators, managers, and business leaders should be finding ways to complement a workforce’s existing skills, rather than sweeping that workforce aside. That does not simply mean creating workers with skill sets that better “plug into” the needs of machines, but also, doing the opposite: creating machines that better enhance and respect the abilities and needs of workers.  That would be a “machine age” welcoming for all, rather than one calibrated to reflect and extend the power of machine owners.

    _____

    Frank Pasquale (@FrankPasquale) is a Professor of Law at the University of Maryland Carey School of Law. His recent book, The Black Box Society: The Secret Algorithms that Control Money and Information (Harvard University Press, 2015), develops a social theory of reputation, search, and finance.  He blogs regularly at Concurring Opinions. He has received a commission from Triple Canopy to write and present on the political economy of automation. He is a member of the Council for Big Data, Ethics, and Society, and an Affiliate Fellow of Yale Law School’s Information Society Project. He is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay
    _____

    [1] One can quibble with the idea of automation as necessarily entailing “bounty”—as Yves Smith has repeatedly demonstrated, computer systems can just as easily “crapify” a process once managed well by humans. Nor is “spread” a necessary consequence of automation; well-distributed tools could well counteract it. It is merely a predictable consequence, given current finance and business norms and laws.

    [2] For a definition of “crazy talk,” see Neil Postman, Stupid Talk, Crazy Talk: How We Defeat Ourselves by the Way We Talk and What to Do About It (Delacorte, 1976). For Postman, “stupid talk” can be corrected via facts, whereas “crazy talk” “establishes different purposes and functions than the ones we normally expect.” If we accept the premise of labor as a cost to be minimized, what better to cut than the compensation of the highest paid persons?

    [3] Conversation with Sam Frank at the Swiss Institute, Dec. 16, 2014, sponsored by Triple Canopy.

    [4] In Brynjolfsson, McAfee, and Michael Spence, “New World Order: Labor, Capital, and Ideas in the Power Law Economy,” an article promoting the book. Unfortunately, as with most statements in this vein, B&M&S give us little idea how to identify a “good idea” other than one that “reap[s] huge rewards”—a tautology all too common in economic and business writing.

    [5] Frank Pasquale, The Black Box Society (Harvard University Press, 2015).

    [6] Programs, both in the sense of particular software regimes, and the program of human and technical efforts to collect and analyze the translations that were the critical data enabling the writing of the software programs behind Google Translate.

    [9] Illah Reza Nourbaksh, Robot Futures (MIT Press, 2013), pp. xix-xx.

    [10] Erwin Prassler and Kazuhiro Kosuge, “Domestic Robotics,” in Bruno Siciliano and Oussama Khatib, eds., Springer Handbook of Robotics (Springer, 2008), p. 1258.

  • Warding Off General Ludd: The Absurdity of “The Luddite Awards”

    Warding Off General Ludd: The Absurdity of “The Luddite Awards”

    By Zachary Loeb
    ~

    Of all the dangers looming over humanity no threat is greater than that posed by the Luddites.

    If the previous sentence seems absurdly hyperbolic, know that it only seems that way because it is, in fact, quite ludicrous. It has been over two hundred years since the historic Luddites rose up against “machinery hurtful to commonality,” but as their leader, the myth-enrobed General Ludd, was never apprehended, there are always those who fear that General Ludd is still out there, waiting with sledgehammer at the ready. True, there have been some activist attempts to revive the spirit of the Luddites (such as the neo-Luddites of the late 1980s and 1990s) – but in the midst of a society enthralled by (and in thrall to) smart phones, start-ups, and large tech companies, to see Luddites lurking in every shadow is a sign of ideology, paranoia, or both.

    Yet such an amusing mixture of unabashed pro-technology ideology and anxiety at the possibility of any criticism of technology is on full display in the inaugural “Luddite Awards” presented by The Information Technology and Innovation Foundation (ITIF). Whereas the historic Luddites needed sturdy hammers, and other such implements, to engage in machine breaking, the ITIF seems to believe that the technology of today is much more fragile – it can be smashed into nothingness simply by criticism or even skepticism. As its name suggests, the ITIF is a think tank committed to the celebration of, and advocacy for, technological innovation in its many forms. Thus it should not be surprising that a group committed to technological innovation would be wary of what it perceives as a growing chorus of “neo-Ludditism” that it imagines is planning to pull the plug on innovation. Therefore the ITIF has seen fit to present dishonorable “Luddite Awards” to groups it has deemed insufficiently enamored with innovation; these groups include (amongst others): the Vermont Legislature, the French government, the organization Free Press, the National Rifle Association, and the Electronic Frontier Foundation. The ITIF “Luddite Awards” may mark the first time that any group has accused the Electronic Frontier Foundation of being a secret harbor for neo-Ludditism.

    Unknown artist, “The Leader of the Luddites,” engraving, 1812 (image source: Wikipedia)

    The full report on “The 2014 ITIF Luddite Awards,” written by the ITIF’s president Robert D. Atkinson, presents the current state of technological innovation as dangerously precarious. Though technological innovation is currently supplying people with all manner of devices, the ITIF warns against a growing movement born of neo-Ludditism that will aim to put a stop to further innovation. Today’s neo-Ludditism, in the estimation of the ITIF, is distinct from the historic Luddites, and yet the goal of “ideological Ludditism” is still “to ‘smash’ today’s technology.” Granted, adherents of neo-Ludditism are not raiding factories with hammers; instead they are to be found teaching at universities, writing columns in major newspapers, disparaging technology in the media, and otherwise attempting to block the forward movement of progress. According to the ITIF (note the word “all”):

    “what is behind all ideological Ludditism is the general longing for a simpler life from the past—a life with fewer electronics, chemicals, molecules, machines, etc.” (ITIF, 3)

    Though the chorus of Ludditism has, in the ITIF’s reckoning, grown to an unacceptable volume of late, the foundation is quick to emphasize that Ludditism is nothing new. What is new, as the ITIF puts it, is that these nefarious Luddite views have, apparently, moved from the margins and infected the larger public discourse around technology. A diverse array of figures and groups – from environmentalist Bill McKibben, conservative thinker James Pethokoukis, economist Paul Krugman, and writers for Smithsonian Magazine to organizations like Free Press, the EFF, and the NRA – are all tarred with the epithet “Luddite.” The neo-Luddites, according to the ITIF, issue warnings against unmitigated acceptance of innovation when they bring up environmental concerns, mention the possibility of jobs being displaced by technology, write somewhat approvingly of the historic Luddites, or advocate for Net Neutrality.

    While the ITIF holds to the popular, if historically inaccurate, definition of Luddite as “one who resists technological change,” its awards make clear that the ITIF would like to add to this definition the words “or even mildly opposes any technological innovation.” The ten groups awarded “Luddite Awards” are a mixture of non-profit public advocacy organizations and various governments – though the ITIF report seems to revel in attacking Bill McKibben, he was not deemed worthy of an award (maybe next year). The awardees include: the NRA for opposing smart guns, the Vermont legislature for requiring the labeling of GMOs, Free Press for supporting net neutrality (deemed an affront to “smarter broadband networks”), news reports that “claim that ‘robots are killing jobs,’” the EFF for “oppos[ing] Health IT,” and various governments in several states for “cracking down” on companies like Airbnb, Uber, and Lyft. The ten recipients of Luddite Awards may be quite surprised to find that they have been deemed adherents of neo-Ludditism, but in the view of the ITIF the actions these groups have taken indicate that General Ludd is slyly guiding their moves. Though the Luddite Awards may have a somewhat silly feeling, the ITIF cautions that the threat is serious, as the report ominously concludes:

    “But while we can’t stop the Luddites from engaging in their anti-progress, anti-innovation activities, we can recognize them for what they are: actions and ideas that are profoundly anti-progress, that if followed would mean a our children [sic] will live lives as adults nowhere near as good as the lives they could live if we instead embraced, rather than fought innovation.” (ITIF, 19)

    Credit is due to the ITIF for its ideological consistency. In putting together its list of recipients for the inaugural “Luddite Awards,” the foundation demonstrates that it is fully committed to technological innovation and unflagging in its support of that cause. Nevertheless, while the awards (and in particular the report accompanying the awards) may be internally ideologically consistent, the report is also a work of dubious historical scholarship and comical neoliberal paranoia, and it evinces a profound anti-democratic tendency. Though the ITIF awards aim to target what the foundation perceives as “neo-Ludditism,” even a cursory glance at the awardees makes it abundantly clear that what the organization actually opposes is any attempt to regulate technology undertaken by a government, or advocated for by a public interest group. Even in a country as regulation-averse as the contemporary United States, it is still safer to defame Luddites than to simply state that you reject regulation. The ITIF carefully cloaks its ideology in the aura of terms with positive connotations such as “innovation,” “progress,” and “freedom,” but these terms are only so much fresh paint over the same “free market” ideology that only values innovation, progress, and freedom when they are in the service of neoliberal economic policies. Nowhere does the ITIF engage seriously with the questions “who profits from this innovation?” “who benefits from this progress?” “is this ‘freedom’ equally distributed or does it reinforce existing inequities?” – the terms are used as ideological sledgehammers far blunter than any tool the Luddites ever used. This raw ideology is on perfect display in the very opening line of the award announcement, which reads:

    “Technological innovation is the wellspring of human progress, bringing higher standards of living, improved health, a cleaner environment, increased access to information and many other benefits.” (ITIF, 1)

    One can only applaud the ITIF for so clearly laying out its ideology at the outset, and one can only raise a skeptical eyebrow at this obvious case of the logical fallacy of begging the question. To claim that “technological innovation is the wellspring of human progress” is an assumption that demands proof; it is not a conclusion in and of itself. While arguments can certainly be made to support this assumption, there is little in the ITIF report to suggest that the ITIF is willing to engage in the type of critical reflection that would be necessary to successfully support this argument (though, to be fair, the ITIF has published many other reports, some of which may better lay out this claim). The further conclusions that such innovation brings “higher standards of living, improved health, a cleaner environment,” and so forth are additional assumptions that require proof – and in the process of demonstrating this proof one is forced (if engaging in honest argumentation) to recognize the validity of competing claims, particularly as many of the “benefits” the ITIF seeks to celebrate do not accrue evenly. True, an argument can be made that technological innovation has an important role to play in ushering in a “cleaner environment” – but tell that to somebody who lives next to an e-waste dump where mountains of the now obsolete detritus of “technological innovation” leach toxins into the soil. The ITIF report is filled with such pleasant-sounding “common sense” technological assumptions that have been, at the very least, rendered highly problematic by serious works of inquiry and scholarship in the history of technology. As classic works in the scholarly literature of the Science and Technology Studies field, such as Ruth Schwartz Cowan’s More Work for Mother, make clear, “technological innovation” does not always live up to its claims. Granted, it is easy to imagine that the ITIF would offer a retort that simply dismisses all such scholarship as tainted by neo-Ludditism. Yet recognizing that not all “innovation” is a pure blessing does not represent a rejection of “innovation” as such – it just recognizes that “innovation” is only one amongst many competing values a society must try to balance.

    Instead of engaging with critics of “technological innovation” in good faith, the ITIF jumps from one logical fallacy to another, trading circular reasoning for attacking the advocate. The ITIF report seems to delight in pillorying Bill McKibben, but it also aims its barbs at scholars like David Noble and Neil Postman for exposing impressionable college-aged minds to their “neo-Luddite” biases. That the ITIF seems unconcerned with business schools, start-up culture, and a “culture industry” that inculcates an adoration for “technological innovation” in the same “impressionable minds” is, obviously, not commented upon. However, if a foundation is attempting to argue that universities are currently a hotbed of “neo-Ludditism,” then it is questionable why the ITIF should choose to single out for special invective two professors who are both deceased – Postman died in 2003 and David Noble died in 2010.

    It almost seems as if the ITIF report cites serious humanistic critics of “technological innovation” as a way to make it seem as though it has actually wrestled with the thought of such individuals. After all, the ITIF report deigns to mention two of the most prominent thinkers in the theoretical legacy of the critique of technology, Lewis Mumford and Jacques Ellul, but it only mentions them in order to dismiss them out of hand. The irony, naturally, is that thinkers like Mumford and Ellul (to say nothing of Postman and Noble) would not have been surprised in the least by the ITIF report, as their critiques of technology also included a recognition of the ways that the dominant forces in technological society (be it in the form of Ellul’s “Technique” or Mumford’s “megamachine”) depended upon the ideological fealty of those who saw their own best interests as aligning with those of the new technological regimes of power. Indeed, the ideological celebrants of technology have become a sort of new priesthood for the religion of technology, though as Mumford quipped in Art and Technics:

    “If you fall in love with a machine there is something wrong with your love-life. If you worship a machine there is something wrong with your religion.” (Art and Technics, 81)

    Trade out the word “machine” in the above quotation with “technological innovation” and it applies perfectly to the ITIF awards document. And yet, playful gibes aside, there are many more (many, many more) barbs that one can imagine Mumford directing at the ITIF. As Mumford wrote in The Pentagon of Power:

    “Consistently the agents of the megamachine act as if their only responsibility were to the power system itself. The interests and demands of the populations subjected to the megamachine are not only unheeded but deliberately flouted.” (The Pentagon of Power, 271)

    The ITIF “Luddite Awards” are a pure demonstration of this deliberate flouting of “the interests and demands of the populations” who find themselves always on the receiving end of “technological innovation.” For the ITIF report shows an almost startling disregard for the concerns of “everyday people,” and though the ITIF is a proudly nonpartisan organization, the report demonstrates a disturbingly anti-democratic tendency. That the group does not lean heavily toward Democrats or Republicans only demonstrates the degree to which both parties eat from the same neoliberal trough – routinely filled with fresh ideological slop by think tanks like the ITIF. Groups that advocate in the interest of their supporters in the public sphere (such as Free Press, the EFF, and the NRA {yes, even them}) are treated as interlopers worthy of mockery for having the audacity to raise concerns; similarly, elected governmental bodies are berated for daring to pass timid regulations. The shape of the “ideal society” that one detects in the ITIF report is one wherein “technological innovation” knows no limits and encounters no opposition, even if those limits are relatively weak regulations or simply citizens daring to voice a contrary opinion – consequences be damned! On the high-speed societal train of “technological innovation,” the ITIF confuses a few groups asking for a slight reduction of speed with groups threatening to derail the train.

    Thus the key problem of the ITIF “Luddite Awards” emerges – and it is not simply that the ITIF continues to use Luddite as an epithet – it is that the ITIF seems willfully ignorant of any ethical imperatives other than a broadly defined love of “technological innovation.” In handing out “Luddite Awards” the ITIF reveals that it recognizes “technological innovation” as the crowning example of “the good.” It is not simply one “good” amongst many that must carefully compromise with other values (such as privacy, environmental concerns, labor issues, and so forth); rather, it is the definitive and ultimate case of “the good.” This is not to claim that “technological innovation” is not amongst the values that represent “the good,” but it is not the only value – treating it as such leads to confusing (to borrow a formulation from Lewis Mumford) “the goods life with the good life.” By fully privileging “technological innovation” the ITIF treats other values and ethical claims as if they are to be discarded – the philosopher Hans Jonas’s The Imperative of Responsibility (which advocated for a cautious approach to technological innovation that emphasized the potential risks inherent in new technologies) is therefore tossed out the window, to be replaced by “the imperative of innovation” along with a stack of business books and perhaps an Ayn Rand novel, or two, for good measure.

    Indeed, responsibility for the negative impacts of innovation is shrugged off in the ITIF awards, even as many of the awardees (such as the various governments) wrestle with the responsibility that tech companies seem to so happily flout. The disrupters hate being disrupted. Furthermore, as should come as no surprise, the ITIF report maintains an aura that smells strongly of colonialism and disregard for the difficulties faced by those who are “disrupted” by “technological innovation.” The ITIF may want to reprimand organizations for trying to gently slow (which is not the same as stopping) certain forms of “technological innovation,” but the report has nothing to say about those who work mining the coltan that powers so many innovative devices, no concern for the factory workers who assemble these devices, and – of course – nothing to say about e-waste. Evidently, to think such things worthy of concern, or even to raise the issue of consequences, is a sign of Ludditism. The ITIF holds out the promise of “better days ahead” and shows no concern for those whose lives must be trampled upon in the process. Granted, it is easy to ignore such issues when you work for a think tank in Washington DC and not as a coltan miner, a device assembler, a resident near an e-waste dump, or an individual whose job has just been automated.

    The ITIF “Luddite Awards” are yet another installment of the tech world/business press game of “Who’s Afraid of General Ludd” in which the group shouting the word “Luddite” at all opponents reveals that it has a less nuanced understanding of technology than was had by the historic Luddites. After all, the Luddites were not opposed to technology as such, nor were they opposed to “technological innovation,” rather, as E.P. Thompson describes in The Making of the English Working Class:

    “What was at issue was the ‘freedom’ of the capitalist to destroy the customs of the trade, whether by new machinery, by the factory-system, or by unrestricted competition, beating-down wages, undercutting his rivals, and undermining standards of craftsmanship… They saw laissez faire, not as freedom but as ‘foul Imposition’. They could see no ‘natural law’ by which one man, or a few men, could engage in practices which brought manifest injury to their fellows.” (Thompson, 548)

    What is at issue in the “Luddite Awards” is the “freedom” of “technological innovators” (the same-old “capitalists”) to force their priorities upon everybody else – and while the ITIF may want to applaud such “freedom,” it is clear that it does not intend to extend such freedom to the rest of the population. The fear that can be detected in the ITIF “Luddite Awards” is not ultimately directed at the award recipients, but at an aspect of the historic Luddites that the report seems keen on forgetting: namely, that the Luddites organized a mass movement that enjoyed incredible popular support – which was why it was ultimately the military (not “seeing the light” of “technological innovation”) that was required to bring the Luddite uprisings to a halt. While it is questionable whether many of the recipients of “Luddite Awards” will view the award as an honor, the term “Luddite” can only be seen as a fantastic compliment when it is used as a synonym for a person (or group) that dares to be concerned with ethical and democratic values other than a simple fanatical allegiance to “technological innovation.” Indeed, what the ITIF “Luddite Awards” demonstrate is the continuing veracity of the philosopher Günther Anders’s statement, in the second volume of The Obsolescence of Man, that:

    “In this situation, it is no use to brandish scornful words like ‘Luddites’. If there is anything that deserves scorn it is, to the contrary, today’s scornful use of the term, ‘Luddite’ since this scorn…is currently more obsolete than the allegedly obsolete Luddism.” (Anders, Introduction – Section 7)

    After all, as Anders might have reminded the people at ITIF: gas chambers, depleted uranium shells, and nuclear weapons are also “technological innovations.”

    Works Cited

    • Anders, Günther. The Obsolescence of Man: Volume II – On the Destruction of Life in the Epoch of the Third Industrial Revolution. (translated by Josep Monter Pérez, Pre-Textos, Valencia, 2011). Available online: here.
    • Atkinson, Robert D. The 2014 Luddite Awards. January 2015.
    • Mumford, Lewis. The Myth of the Machine, volume 2 – The Pentagon of Power. New York: Harvest/Harcourt Brace Jovanovich, 1970.
    • Mumford, Lewis. Art and Technics. New York: Columbia University Press, 2000.
    • Thompson, E.P. The Making of the English Working Class. New York: Vintage Books, 1966.
    • Not cited but worth a look – Eric Hobsbawm’s classic article “The Machine Breakers.”


    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian,” Loeb writes at the blog LibrarianShipwreck, where this post first appeared. He is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay

  • The Reticular Fallacy

    The Reticular Fallacy

    By Alexander R. Galloway
    ~

    We live in an age of heterogeneous anarchism. Contingency is king. Fluidity and flux win over solidity and stasis. Becoming has replaced being. Rhizomes are better than trees. To be political today, one must laud horizontality. Anti-essentialism and anti-foundationalism are the order of the day. Call it “vulgar ’68-ism.” The principles of social upheaval, so associated with the new social movements in and around 1968, have succeeded in becoming the very bedrock of society at the new millennium.

    But there’s a flaw in this narrative, or at least a part of the story that strategically remains untold. The “reticular fallacy” can be broken down into two key assumptions. The first is an assumption about the nature of sovereignty and power. The second is an assumption about history and historical change. Consider them both in turn.

    (1) First, under the reticular fallacy, sovereignty and power are defined in terms of verticality, centralization, essence, foundation, or rigid creeds of whatever kind (viz. dogma, be it sacred or secular). Thus the sovereign is the one who is centralized, who stands at the top of a vertical order of command, who rests on an essentialist ideology in order to retain command, who asserts, dogmatically, unchangeable facts about his own essence and the essence of nature. This is the model of kings and queens, but also egos and individuals. It is what Barthes means by author in his influential essay “Death of the Author,” or Foucault in his “What is an Author?” This is the model of the Prince, so often invoked in political theory, or the Father invoked in psycho-analytic theory. In Derrida, the model appears as logos, that is, the special way or order of word, speech, and reason. Likewise, arkhe: a term that means both beginning and command. The arkhe is the thing that begins, and in so doing issues an order or command to guide whatever issues from such a beginning. Or as Rancière so succinctly put it in his Hatred of Democracy, the arkhe is both “commandment and commencement.” These are some of the many aspects of sovereignty and power as defined in terms of verticality, centralization, essence, and foundation.

    (2) The second assumption of the reticular fallacy is that, given the elimination of such dogmatic verticality, there will follow an elimination of sovereignty as such. In other words, if the aforementioned sovereign power should crumble or fall, for whatever reason, the very nature of command and organization will also vanish. Under this second assumption, the structure of sovereignty and the structure of organization become coterminous, superimposed in such a way that the shape of organization assumes the identical shape of sovereignty. Sovereign power is vertical, hence organization is vertical; sovereign power is centralized, hence organization is centralized; sovereign power is essentialist, hence organization, and so on. Here we see the claims of, let’s call it, “naïve” anarchism (the non-arkhe, or non-foundation), which assumes that repressive force lies in the hands of the bosses, the rulers, or the hierarchy per se, and thus that after the elimination of such hierarchy, life will revert to a more direct form of social interaction. (I say this not to smear anarchism in general, and will often wish to defend a form of anarcho-syndicalism.) At the same time, consider the case of bourgeois liberalism, which asserts the rule of law and constitutional right as a way to mitigate the excesses of both royal fiat and popular caprice.

    Reticular connective tissue (image source: imgkid.com)

    We name this the “reticular” fallacy because, during the late Twentieth Century and accelerating at the turn of the millennium with new media technologies, the chief agent driving the kind of historical change described in the above two assumptions was the network or rhizome, the structure of horizontal distribution described so well in Deleuze and Guattari. The change is evident in many different corners of society and culture. Consider mass media: the uni-directional broadcast media of the 1920s or ’30s gradually gave way to multi-directional distributed media of the 1990s. Or consider the mode of production, and the shift from a Fordist model rooted in massification, centralization, and standardization, to a post-Fordist model reliant more on horizontality, distribution, and heterogeneous customization. Consider even the changes in theories of the subject, shifting as they have from a more essentialist model of the integral ego, however fraught by the volatility of the unconscious, to an anti-essentialist model of the distributed subject, be it postmodernism’s “schizophrenic” subject or the kind of networked brain described by today’s most advanced medical researchers.

    Why is this a fallacy? What is wrong about the above scenario? The problem isn’t so much with the historical narrative. The problem lies in an unwillingness to derive an alternative form of sovereignty appropriate for the new rhizomatic societies. Opponents of the reticular fallacy claim, in other words, that horizontality, distributed networks, anti-essentialism, etc., have their own forms of organization and control, and indeed should be analyzed accordingly. In the past I’ve used the concept of “protocol” to describe such a scenario as it exists in digital media infrastructure. Others have used different concepts to describe it in different contexts. On the whole, though, opponents of the reticular fallacy have not effectively made their case, myself included. The notion that rhizomatic structures are corrosive of power and sovereignty is still the dominant narrative today, evident across both popular and academic discourses. From talk of the “Twitter revolution” during the Arab Spring, to the ideologies of “disruption” and “flexibility” common in corporate management speak, to the putative egalitarianism of blog-based journalism, to the growing popularity of the Deleuzian and Latourian schools in philosophy and theory: all of these reveal the contemporary assumption that networks are somehow different from sovereignty, organization, and control.

    To summarize, the reticular fallacy refers to the following argument: since power and organization are defined in terms of verticality, centralization, essence, and foundation, the elimination of such things will prompt a general mollification if not elimination of power and organization as such. Such an argument is false because it doesn’t take into account the fact that power and organization may inhabit any number of structural forms. Centralized verticality is only one form of organization. The distributed network is simply a different form of organization, one with its own special brand of management and control.

    Consider the kind of methods and concepts still popular in critical theory today: contingency, heterogeneity, anti-essentialism, anti-foundationalism, anarchism, chaos, plasticity, flux, fluidity, horizontality, flexibility. Such concepts are often praised and deployed in theories of the subject, analyses of society and culture, even descriptions of ontology and metaphysics. The reticular fallacy does not invalidate such concepts. But it does put them in question. We cannot assume that such concepts are merely descriptive or neutrally empirical. Given the way in which horizontality, flexibility, and contingency are sewn into the mode of production, such “descriptive” claims are at best mirrors of the economic infrastructure and at worst ideologically suspect. At the same time, we cannot simply assume that such concepts are, by nature, politically or ethically desirable in themselves. Rather, we ought to reverse the line of inquiry. The many qualities of rhizomatic systems should be understood not as the pure and innocent laws of a newer and more just society, but as the basic tendencies and conventional rules of protocological control.


    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2006), Gaming: Essays in Algorithmic Culture (University of Minnesota, 2006), The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here earlier in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.

    Back to the essay

  • Teacher Wars and Teaching Machines

    Teacher Wars and Teaching Machines

    a review of Dana Goldstein, The Teacher Wars: A History of America’s Most Embattled Profession (Doubleday, 2014)
    by Audrey Watters
    ~

    Teaching is, according to the subtitle of education journalist Dana Goldstein’s new book, “America’s Most Embattled Profession.” “No other profession,” she argues, “operates under this level of political scrutiny, not even those, like policing or social work, that are also tasked with public welfare and are paid for with public funds.”

    That political scrutiny is not new. Goldstein’s book The Teacher Wars chronicles the history of teaching at (what has become) the K–12 level, from the early nineteenth century and “common schools” — that is, before compulsory education and public school as we know it today — through the latest Obama Administration education policies. It’s an incredibly well-researched book that moves from the feminization of the teaching profession to the recent push for more data-driven teacher evaluation, observing how all along the way, teachers have been deemed ineffectual in some way or another — failing to fulfill whatever (political) goals the public education system has demanded be met, be those goals economic, civic, or academic.

    As Goldstein describes it, public education is a labor issue; and it has been, it’s important to note, since well before the advent of teacher unions.

    The Teacher Wars and Teaching Machines

    To frame education this way — around teachers and by extension, around labor — has important implications for ed-tech. What happens if we examine the history of teaching alongside the history of teaching machines? As I’ve argued before, the history of public education in the US, particularly in the 20th century, is deeply intertwined with various education technologies – film, TV, radio, computers, the Internet – devices that are often promoted as improving access or as making an outmoded system more “modern.” But ed-tech is frequently touted too as “labor-saving” and as a corrective to teachers’ inadequacies and inefficiencies.

    It’s hardly surprising, in this light, that teachers have long looked with suspicion at new education technologies. With their profession constantly under attack, many teachers are no doubt worried that new tools are poised to replace them. Much is said to quiet these fears, with education reformers and technologists insisting again and again that replacing teachers with tech is not the intention.

    And yet the sentiment of science fiction writer Arthur C. Clarke probably does resonate with a lot of people, as a line from his 1980 Omni Magazine article on computer-assisted instruction is echoed by all sorts of pundits and politicians: “Any teacher who can be replaced by a machine should be.”

    Of course, you do find people like former Washington DC mayor Adrian Fenty – best known arguably via his school chancellor Michelle Rhee – who’ll come right out and say to a crowd of entrepreneurs and investors, “If we fire more teachers, we can use that money for more technology.”

    So it’s hard to ignore the role that technology increasingly plays in contemporary education (labor) policies – as Goldstein describes them, the weakening of teachers’ tenure protections alongside an expansion of standardized testing to measure “student learning,” all in the service of finding and firing “bad teachers.” The growing data collection and analysis enabled by schools’ adoption of ed-tech feeds into the politics and practices of employee surveillance.

    Just as Goldstein discovered in the course of writing her book that the current “teacher wars” have a lengthy history, so too does ed-tech’s role in the fight.

    As Sidney Pressey, the man often credited with developing the first teaching machine, wrote in 1933 (from a period Goldstein links to “patriotic moral panics” and concerns about teachers’ political leanings),

    There must be an “industrial revolution” in education, in which educational science and the ingenuity of educational technology combine to modernize the grossly inefficient and clumsy procedures of conventional education. Work in the schools of the future will be marvelously though simply organized, so as to adjust almost automatically to individual differences and the characteristics of the learning process. There will be many labor-saving schemes and devices, and even machines — not at all for the mechanizing of education but for the freeing of teacher and pupil from the educational drudgery and incompetence.

    Or as B. F. Skinner, the man most associated with the development of teaching machines, wrote in 1953 (one year before the landmark Brown v Board of Education),

    Will machines replace teachers? On the contrary, they are capital equipment to be used by teachers to save time and labor. In assigning certain mechanizable functions to machines, the teacher emerges in his proper role as an indispensable human being. He may teach more students than heretofore — this is probably inevitable if the world-wide demand for education is to be satisfied — but he will do so in fewer hours and with fewer burdensome chores.

    These quotations highlight the longstanding hopes and fears about teaching labor and teaching machines; they hint too at some of the ways in which the work of Pressey and Skinner and others coincides with what Goldstein’s book describes: the ongoing concerns about teachers’ politics and competencies.

    The Drudgery of School

    One of the things that’s striking about Skinner and Pressey’s remarks on teaching machines, I think, is that they recognize the “drudgery” of much of teachers’ work. But rather than fundamentally change school – rather than ask why so much of the job of teaching entails “burdensome chores” – education technology seems more likely to offload that drudgery to machines. (One of the best contemporary examples of this perhaps: automated essay grading.)

    This has powerful implications for students, who – let’s be honest – suffer through this drudgery as well.

    Goldstein’s book doesn’t really address students’ experiences. Her history of public education is focused on teacher labor more than on student learning. As a result, student labor is missing from her analysis. This isn’t a criticism of the book; and it’s not just Goldstein that does this. Student labor in the history of public education remains largely under-theorized and certainly underrepresented. Cue AFT president Al Shanker’s famous statement: “Listen, I don’t represent children. I represent the teachers.”

    But this question of student labor seems to be incredibly important to consider, particularly with the growing adoption of education technologies. Students’ labor – students’ test results, students’ content, students’ data – feeds the measurements used to reward or punish teachers. Students’ labor feeds the algorithms – algorithms that further this larger narrative about teacher inadequacies, sure, and that serve to financially benefit technology, testing, and textbook companies, the makers of today’s “teaching machines.”

    Teaching Machines and the Future of Collective Action

    The promise of teaching machines has long been to allow students to move “at their own pace” through the curriculum. “Personalized learning,” it’s often called today (although the phrase often refers only to “personalization” in terms of the pace, not in terms of the topics of inquiry). This means, supposedly, that instead of whole class instruction, the “work” of teaching changes: in the words of one education reformer, “with the software taking up chores like grading math quizzes and flagging bad grammar, teachers are freed to do what they do best: guide, engage, and inspire.”
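    To see how little of this “work” the software actually has to model, consider a minimal, purely hypothetical sketch of the grading-and-pacing logic such tools rely on. The quiz items, function names, and mastery threshold below are illustrative assumptions, not any vendor’s actual system:

        # A hypothetical sketch of teaching-machine logic: auto-grade a short
        # arithmetic quiz, then "personalize" the pace from the score alone.
        QUIZ = [
            {"prompt": "7 x 8", "answer": 56},
            {"prompt": "12 - 5", "answer": 7},
            {"prompt": "9 + 6", "answer": 15},
        ]

        def grade(responses, quiz=QUIZ):
            """Return the fraction of items answered correctly."""
            correct = sum(1 for item, given in zip(quiz, responses) if given == item["answer"])
            return correct / len(quiz)

        def next_step(score, mastery_threshold=0.8):
            """Advance or repeat, based on a single number."""
            return "advance to next unit" if score >= mastery_threshold else "repeat unit"

        responses = [56, 7, 14]  # the third answer is wrong
        score = grade(responses)
        print(f"score: {score:.0%} -> {next_step(score)}")  # score: 67% -> repeat unit

    What the sketch makes plain is that the “personalization” is a single number compared against a threshold; everything else about the work of teaching, and of learning, falls outside the loop.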

    Again, it’s not clear how this changes the work of students.

    So what are the implications – not just pedagogically but politically – of students, headphones on, staring at their individual computer screens, working alone through various exercises? Because let’s remember: teaching machines and all education technologies are ideological. What are the implications – not just pedagogically but politically – of these technologies’ emphasis on individualism, self-management, personal responsibility, and autonomy?

    What happens to discussion and debate, for example, in a classroom of teaching machines and “personalized learning”? What happens, in a world of schools catering to individual student achievement, to the community development that schools (at their best, at least) are also tasked to support?

    What happens to organizing? What happens to collective action? And by collectivity here, let’s be clear, I don’t mean simply “what happens to teachers’ unions”? If we think about The Teacher Wars and teaching machines side by side, we should recognize that our analysis of (and our actions surrounding) the labor issues of school needs to go much deeper and much farther than that.

    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this review first appeared.

    Back to the essay

  • "Internet Freedom": Digital Empire?

    "Internet Freedom": Digital Empire?

    a review of Dan Schiller, Digital Depression: Information Technology and Economic Crisis (University of Illinois Press, 2014)
    by Richard Hill
    ~
    Disclosure: the author of this review is mentioned in the Acknowledgements section of the reviewed book.

    Computers and telecommunications have revolutionized and disrupted all aspects of human activity, and even behavior. The impacts are broad and profound, with important consequences for governments, businesses, non-profit activities, and individuals. Networks of interconnected computer systems are driving many disruptive changes in business practices, information flows, and financial flows. Foremost amongst those networks is the Internet, much of which is global, or at least trans-national.

    According to some, the current governance arrangement for the Internet is nearly ideal. In particular, its global multi-stakeholder model of governance has resulted in a free and open Internet, which has enabled innovation and driven economic growth and well-being around the world. Others are of the view that things have not worked out that well. In particular, the Internet has resulted in mass surveillance by governments and by private companies, in monopolization, commodification, and monetization of information and knowledge, in inequitable flows of finances between poor and rich countries, and in erosion of cultural diversity. Further, those with central positions of influence have used it to consolidate power and to establish a new global regime of control and exploitation, under the guise of favoring liberalization, while in reality reinforcing the dominance and profitability of major corporations at the expense of the public interest, and the overarching position of certain national interests at the expense of global interests and well-being. [1]

    Dan Schiller’s book helps us to understand how rational and well-informed people can hold such diametrically opposing views. Schiller dissects the history of the growth of recent telecommunications networks and shows how they have significantly (indeed, dramatically) affected economic and political power relations around the world. He also shows how, at the same time, US policies have consistently favored capital over labor, and have resulted in transfers of vast sums from developing countries to developed countries (in particular through interest on loans).

    Participants wearing Edward Snowden and Chelsea Manning masks at 2013 Berlin protests against the NSA PRISM program (image source: Wikipedia)

    Schiller documents in some detail how US policies that ostensibly promote the free flow of information around the world, the right of all people to connect to the Internet, and free speech, are in reality policies that have, by design, furthered the geo-economic and geo-political goals of the US, including its military goals, its imperialist tendencies, and the interests of large private companies based (if not always headquartered, at least for tax purposes) in the US. For example, strict copyright protection is held to be consistent with the free flow of information, as is mass surveillance. Cookies and exploitation of users’ personal data by Internet companies are held to be consistent with privacy rights (indeed, as Schiller shows, the US essentially denies the existence of the right to personal privacy for anything related to the Internet). There should be no requirements that data be stored locally, lest it escape the jurisdiction of the US surveillance apparatus. And very high profits and dominant positions in key Internet markets do not spark anti-trust or competition law investigations, as they might in any other industry.

    As Schiller notes, great powers have historically used communication systems to further their economic and strategic interests, so why should the US not so use the Internet? Thus stated, the matter seems obvious. But the matter is rarely thus stated. On the contrary, the Internet is often touted as a generous gift to the world’s people, able to lift them out of poverty and oppression, and to bring them the benefits of democracy and (or) free markets. Schiller’s carefully researched analysis is thus an important contribution.

    Schiller provides context by tracing the origins of the current financial and economic crises, pointing out that it is paradoxical that growing investments in Information and Communication Technologies (ICTs), and the supposed resultant productivity gains, did not prevent a major global economic crisis. Schiller explains how transnational corporations demanded liberalization of the terms on which they could use their private networks, and received it, resulting in profound changes in commodity chains, that is, in the flow of production of goods and services. In particular, there has been an increase in transnational production, and this has reinforced the importance of transnational corporations. Further, ICTs have changed the nature of labor’s contribution to production, enabling many tasks to be shifted to unskilled workers (or even to consumers themselves: automatic teller machines (ATMs), for example, turn each of us into a bank clerk). However, the growth of the Internet did not transcend the regular economy: on the contrary, it was wrapped into the economy’s crisis tendencies and even exacerbated them.

    Schiller gives detailed accounts of these transformations in the automotive and financial industries, and in the military. The study of the effect of ICTs on the military is of particular interest considering that the Internet was originally developed as a military project, and that it is currently used by US intelligence agencies as a prime medium for the collection of information.

    Schiller then turns to telecommunications, explaining the very significant changes that took place in the USA starting in the late 1970s. Those changes resulted in a major restructuring of the dominant telecommunications playing field in the US and ultimately led to the growth of the Internet, a development which had world-wide effects. Schiller carefully describes the various US government actions that initiated and nurtured those changes, and that were instrumental in exporting similar changes to the rest of the world.

    Next, he analyzes how those changes affected and enabled the production of the networks themselves, the hardware used to build the networks and to use them (e.g. smartphones), and the software and applications that we all use today.

    Moving further up the value chain, Schiller explains how data-mining, coupled with advertising, fuels the growth of the dominant Internet companies, and how this data-mining is made possible only by denying data privacy, and how states use the very same techniques to implement mass surveillance.

    Having described the situation, Schiller proceeds to analyze it from economic and political perspectives. Given that the US was an early adopter of the Internet, it is not surprising that, because of economies of scale and network effects, US companies dominate the field (except in China, as Schiller explains in detail). Schiller describes how, given the influence of US companies on US politics, US policies, both domestic and foreign, are geared to allowing, or in fact favoring, ever-increasing concentration in key Internet markets, which is to the advantage of the US and its private companies–and despite the easy cant about decentralization and democratization.

    The book describes how the US views the Internet as an extraterritorial domain, subject to no authority except that of the US government and that of the dominant US companies. Each dictates its own law in specific spheres (for example, the US government has supervised, up to now, the management of Internet domain names and addresses; while US companies dictate unilateral terms and conditions to their users, terms and conditions that imply that users give up essentially all rights to their private data).

    Schiller describes how this state of affairs has become a foreign policy objective, with the US being willing to incur significant criticism and to pay a significant political price in order to maintain the status quo. That status quo is referred to as “the multi-stakeholder model”, in which private companies are essentially given veto power over government decisions (or at least over the decisions of any government other than the US government), a system that can be referred to as “corporatism”. Not only does the US staunchly defend that model for the Internet, it even tries to export it to other fields of human activity. And this despite, or perhaps because of, the fact that the system allows companies to make profits when possible (in particular by exploiting state-built infrastructure or guarantees), and to transfer losses to states when necessary (as happened, for example, with the banking crisis).

    Schiller carefully documents how code words such as “freedom of access” and “freedom of speech” are used to justify and promote policies that in fact merely serve the interests of major US companies and, at the same time, the interests of the US surveillance apparatus, which morphed from a cottage industry into a major component of the military-industrial complex thanks to the Internet. He shows how the supposed open participation in key bodies (such as the Internet Engineering Task Force) is actually a screen to mask the fact that decisions are heavily influenced by insiders affiliated with US companies and/or the US government, and by agencies bound to the US as a state.

    As Schiller explains, this increasing dominance of US business and US political imperialism have not gone unchallenged, even if the challenges to date have mostly been rhetorical (again, except for China). Conflicts over Internet governance are related to rivalries between competing geo-political and geo-economic blocs, rivalries which will likely increase if economic growth continues to be weak. The rivalries are both between nations and within nations, and some are only emerging right now (for example, how to tax the digital economy, or the apparent emerging divergence of views between key US companies and the US government regarding mass surveillance).

    Indeed, the book explains how the challenges to US dominance have become more serious in the wake of the Snowden revelations, which have resulted in a significant loss of market share for some of the key US players, in particular with respect to cloud computing services. Those losses may have begun to drive the tip of a wedge between the so-far congruent goals of US companies and the US government.

    In a nutshell, one can sum up what Schiller describes by paraphrasing Marx: “Capitalists of the world, unite! You have nothing to lose but the chains of government regulation.” But, as Schiller hints in his closing chapter, the story is still unfolding, and just as things did not work out as Marx thought they would, so things may not work out as the forces that currently dominate the Internet wish they will. So the slogan for the future might well be “Internet users of the world, unite! You have nothing to lose but the chains of exploitation of your personal data.”

    This book, and its extensive references, will be a valuable reference work for all future research in this area. And surely there will be much future research, and many more historical analyses of what may well be some of the key turning points in the history of mankind: the transition from the industrial era to the information era and the disruptions induced by that transition.

    _____

    Richard Hill, an independent consultant based in Geneva, Switzerland, was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). An earlier version of this review first appeared on Newsclick.

    Back to the essay
    _____

    1. From item 11 of document WSIS+10/4/6 of the preparatory process for the WSIS+10 High Level Event, which provided “a special platform for high-ranking officials of WSIS (World Summit on the Information Society) stakeholders, government, private sector, civil society and international organizations to express their views on the achievements, challenges and recommendations on the implementation” of various earlier internet governance initiatives backed by the International Telecommunications Union (ITU), the United Nations specialized agency for information and communications technologies, and other participants in the global internet governance sphere.

    Back to the essay

  • The Man Who Loved His Laptop

    The Man Who Loved His Laptop

    a review of Spike Jonze (dir.), Her (2013)
    by Mike Bulajewski
    ~
    I’m told by my sister, who is married to a French man, that the French don’t say “I love you”—or at least they don’t say it often. Perhaps they think the words are superfluous, and that the behavior of the person you are in a relationship with tells you everything. Americans, on the other hand, say it to everyone—lovers, spouses, friends, parents, grandparents, children, pets—and as often as possible, as if quantity matters most. The declaration is also an event. For two people beginning a relationship, it marks a turning point and a new stage in the relationship.

    If you aren’t American, you may not have realized that relationships have stages. In America, they do. It’s complicated. First there are the three main thresholds of commitment: Dating, Exclusive Dating, then of course Marriage. There are three lesser pre-Dating stages: Just Talking, Hooking Up and Friends with Benefits; and one minor stage between Dating and Exclusive called Pretty Much Exclusive. Within Dating, there are several minor substages: number of dates (often counted up to the third date) and increments of physical intimacy denoted according to the well-known baseball metaphor of first, second, third and home base.

    There are also a number of rituals that indicate progress: updating of Facebook relationship statuses; leaving a toothbrush at each other’s houses; the aforementioned exchange of I-love-you’s; taking a vacation together; meeting the parents; exchange of house keys; and so on. When people, especially unmarried people, talk about relationships, often the first questions are about these stages and rituals. In France the system is apparently much less codified. One convention not present in the United States is that romantic interest is signaled when a man invites a woman to go for a walk with him.

    The point is two-fold: first, although Americans admire and often think of French culture as holding up a standard for what romance ought to be, Americans act nothing like the French in relationships and in fact know very little about how they work in France. Second and more importantly, in American culture love is widely understood as spontaneous and unpredictable, and yet there is also an opposite and often unacknowledged expectation that relationships follow well-defined rules and rituals.

    This contradiction might explain the great public clamor over romance apps like Romantimatic and BroApp that automatically send your significant other romantic messages, either predefined or your own creation, at regular intervals—what philosopher of technology Evan Selinger calls (and not without justification) apps that outsource our humanity.

    Reviewers of these apps were unanimous in their disapproval, disagreeing only on where to locate them on a spectrum between pretty bad and sociopathic. Among all the labor-saving apps and devices, why should this one in particular be singled out for opprobrium?

    Perhaps one reason for the outcry is that they expose an uncomfortable truth about how easily romance can be automated. Something we believe is so intimate is revealed as routine and predictable. What does it say about our relationship needs that the right time to send a loving message to your significant other can be reduced to an algorithm?
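    The point about reducibility is easy to make concrete. Stripped of interface polish, the core of such an app is a timer and a list of stock phrases; the sketch below is a hypothetical illustration in Python (the phrases, the eight-hour interval, and the function names are assumptions, not the actual apps’ code):

        # A deliberately bare sketch of what a Romantimatic-style app reduces to.
        import random
        import sched
        import time

        PHRASES = ["Thinking of you.", "I love you.", "Can't wait to see you tonight."]

        def send_text(number, message):
            # Stand-in for a real SMS gateway call.
            print(f"to {number}: {message}")

        def send_sweet_nothing(scheduler, number, interval=8 * 60 * 60):
            """Send one stock phrase, then queue the next one."""
            send_text(number, random.choice(PHRASES))
            scheduler.enter(interval, 1, send_sweet_nothing, argument=(scheduler, number, interval))

        s = sched.scheduler(time.time, time.sleep)
        s.enter(0, 1, send_sweet_nothing, argument=(s, "+1-555-0100"))
        # s.run()  # left commented out: run, it would text every eight hours, indefinitely

    Nothing in the sketch needs to know anything about the recipient; the schedule does all the work, which is precisely the unsettling part.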

    The routinization of American relationships first struck me in the context of this little-known fact about how seldom French people say “I love you.” If you had to launch one of these romance apps in France, it wouldn’t be enough to just translate the prewritten phrases into French. You’d have to research French romantic relationships and discover what are the most common phrases—if there are any—and how frequently text messages are used for this purpose. It’s possible that French people are too unpredictable, or never use text messages for romantic purposes, so the app is just not feasible in France.

    Romance is culturally determined. That American romance can be so easily automated reveals how standardized and even scheduled relationships already are. Selinger’s argument that automated romance undermines our humanity has some merit, but why stop with apps? Why not address the problem at a more fundamental level and critique the standardized courtship system that regulates romance? Doesn’t this also outsource our humanity?

    The best-selling relationship advice book The 5 Love Languages claims that everyone understands one of five love “languages,” and that the key to a happy relationship is for each partner to learn to express love in the correct language. Should we be surprised if the more technically minded among us conclude that the problem of love can be solved with technology? Why not try to determine the precise syntax and semantics of these love languages, and attempt to express them rigorously and unambiguously in the same way that computer languages and communications protocols are? Can love be reduced to grammar?

    Spike Jonze’s Her (2013) tells the story of Theodore Twombly, a soon-to-be divorced writer who falls in love with Samantha, an AI operating system who far exceeds the abilities of today’s natural language assistants like Apple’s Siri or Microsoft’s Cortana. Samantha is not only hyper-intelligent, she’s also capable of laughter, telling jokes, picking up on subtle unspoken interpersonal cues, feeling and communicating her own emotions, and so on. Theodore falls in love with her, but there is no sense that their relationship is deficient because she’s not human. She is as emotionally expressive as any human partner, at least on film.

    Theodore works for a company called BeautifulHandwrittenLetters.com as a professional Cyrano de Bergerac (or perhaps a human Romantimatic), ghostwriting heartfelt “handwritten” letters on behalf of his clients. It’s an ironic twist: Samantha is his simulated girlfriend, a role which he himself adopts at work by simulating the feelings of his clients. The film opens with Theodore at his desk at work, narrating a letter from a wife to her husband on the occasion of their 50th wedding anniversary. He is a master of the conventions of the love letter. Later in the film, his work is discovered by a literary agent, and he gets an offer to have a book of his best work published.

    [youtube https://www.youtube.com/watch?v=CxahbnUCZxY&w=560&h=315]

    But for all his (alleged) expertise as a romantic writer, Theodore is lonely, emotionally stunted, ambivalent towards the women in his life, and—at least before meeting Samantha—apparently incapable of maintaining relationships since he separated from his ex-wife Catherine. Highly sensitive, he is disturbed by encounters with women that go off the script: a phone sex encounter goes awry when the woman demands that he enact her bizarre fantasy of being choked with a dead cat; and on a date with a woman one night, she exposes a little too much vulnerability and drunkenly expresses her fear that he won’t call her. He abruptly and awkwardly ends the date.

    Theodore wanders aimlessly through the high tech city as if it is empty. With headphones always on, he’s withdrawn, cocooned in a private sonic bubble. He interacts with his device through voice, asking it to play melancholy songs and skipping angry messages from his attorney demanding that he sign the divorce papers already. At times, he daydreams of happier times when he and his ex-wife were together and tells Samantha how much he liked being married. At first it seems that Catherine left him. We wonder if he withdrew from the pain of his heartbreak. But soon a different picture emerges. When they finally meet to sign the divorce papers over lunch, Catherine accuses him of not being able to handle her emotions and reveals that he tried to get her on Prozac. She says to him “I always felt like you wished I could just be a happy, light, everything’s great, bouncy L.A. wife. But that’s not me.”

    So Theodore’s avoidance of real challenges and emotions in relationships turns out to be an ongoing problem—the cause, not the consequence, of his divorce. Starting a relationship with his operating system Samantha is his latest retreat from reality—not from physical reality, but from the virtual reality of authentic intersubjective contact.

    Unlike his other relationships, Samantha is perfectly customized to his needs. She speaks his “love language.” Today we personalize our operating systems and fill out online dating profiles specifying exactly what kind of person we’re looking for. When Theodore installs Samantha on his computer for the first time, the two operations are combined with a single question. The system asks him how he would describe his relationship with his mother. He begins to reply with psychological banalities about how she is insufficiently attuned to his needs, and it quickly stops him, already knowing what he’s about. And so do we.

    That Theodore is selfish doesn’t mean that he is unfeeling, unkind, insensitive, conceited or uninterested in his new partner’s thoughts, feelings and goals. His selfishness is the kind that’s approved and even encouraged today, the ethically consistent selfishness that respects the right of others to be equally selfish. What he wants most of all is to be comfortable, to feel good, and that requires a partner who speaks his love language and nothing else, someone who says nothing that would veer off-script and reveal too many disturbing details. More precisely, Theodore wants someone who speaks what Lacan called empty speech: speech that obstructs the revelation of the subject’s traumatic desire.

    Objectification is a traditional problem between men and women. Men reduce women to mere bodies or body parts that exist only for sexual gratification, treating them as sex objects rather than people. The dichotomy is between the physical as the domain of materiality, animality and sex on one hand, and the spiritual realm of subjectivity, personality, agency and the soul on the other. If objectification eliminates the soul, then Theodore engages in something like the opposite, a subjectification which eradicates the body. Samantha is just a personality.

    Technology writer Nicholas Carr’s new book The Glass Cage: Automation and Us (Norton, 2014) investigates the ways that automation and artificial intelligence dull our cognitive capacities. Her can be read as a speculative treatment of the same idea as it relates to emotion. What if the difficulty of relationships could be automated away? The film’s brilliant provocation is that it shows us a lonely, hollow world mediated through technology but nonetheless awash in sentimentality. It thwarts our expectations that algorithmically-generated emotion would be as stilted and artificial as today’s speech synthesizers. Samantha’s voice is warm, soulful, relatable and expressive. She’s real, and the feelings she triggers in Theodore are real.

    But real feelings with real sensations can also be shallow. As Maria Bustillos notes, Theodore is an awful writer, at least by today’s standards. Here’s the kind of prose that wins him accolades from everyone around him:

    I remember when I first started to fall in love with you like it was last night. Lying naked beside you in that tiny apartment, it suddenly hit me that I was part of this whole larger thing, just like our parents, and our parents’ parents. Before that I was just living my life like I knew everything, and suddenly this bright light hit me and woke me up. That light was you.

    In spite of this, we’re led to believe that Theodore is some kind of literary genius. Various people in his life compliment him on his skill and the editor of the publishing company who wants to publish his work emails to tell him how moved he and his wife were when they read them. What kind of society would treat such pedestrian writing as unusual, profound or impressive? And what is the average person’s writing like if Theodore’s services are worth paying for?

    Recall the cult favorite Idiocracy (2006) directed by Mike Judge, a science fiction satire set in a futuristic dystopia where anti-intellectualism is rampant and society has descended into stupidity. We can’t help but conclude that Her offers a glimpse into a society that has undergone a similar devolution into both emotional and literary idiocy.

    _____

    Mike Bulajewski (@mrteacup) is a user experience designer with a Master’s degree from University of Washington’s Human Centered Design and Engineering program. He writes about technology, psychoanalysis, philosophy, design, ideology & Slavoj Žižek at MrTeacup.org, where an earlier version of this review first appeared.

    Back to the essay

  • Our Very Own Francis Bacon

    Our Very Own Francis Bacon

    a review of Peter Thiel, Zero to One: Notes on Startups, or How to Build the Future
    by LM Sacasas
    ~

    Few individuals have done as much to chart the course of science and technology in the modern world as the Elizabethan statesman and intellectual, Francis Bacon. But Bacon’s defining achievement was not, strictly speaking, scientific or technological. Rather, Bacon’s achievement lay in the realm of human affairs we would today refer to as “public relations.” Bacon’s genius was Draper-esque: he wove together a compelling story about the place of techno-science in human affairs from the loose threads of post-Reformation religious and political culture and the scientific breakthroughs we loosely group together as the Scientific Revolution.

    In the story he told, knowledge mattered only insofar as it yielded power (the well-known formulation, “knowledge is power,” is Bacon’s), and that power mattered only insofar as it was directed toward “the relief of man’s estate.” To put that less archaically, we might say “the improvement of our quality of life.” But putting it that way obscures the theological overtones of Bacon’s formulation and its allusion to the curse under which humanity labored as a consequence of the Fall in the Christian understanding of the human condition. Our problem was both spiritual and material, and Bacon believed that in his day both facets of that problem were being solved. The improvement of humanity’s physical condition went hand in hand with the restoration of true religion occasioned by the English Reformation, and together they would lead straight to the full restoration of creation.

    Bacon’s significance, then, lay in merging science and technology into one techno-scientific project and synthesizing this emerging project with the dominant world picture, thus charting its course and securing its prestige. It is just this sort of expansive vision driving technological development that I’ve had in mind in my recent Frailest Thing posts (here and here) regarding culture, technology, and innovation.

    My recent posts have also mentioned the entrepreneur Peter Thiel, who is increasingly assuming the role of Silicon Valley’s leading public intellectual–the Sage of Silicon Valley, if you will. This morning, I was reaffirmed in that evaluation of Thiel’s position by a pair of posts by the political philosopher Peter Lawler. In the first of these posts, Lawler comments on Thiel’s seeming ubiquity in certain circles, and he rehearses some of the by-now familiar aspects of Thiel’s intellectual affinities, notably for the sociologist cum philosopher and Stanford professor René Girard (Thiel expounds on Girard in this video) and the right-wing political theorist Leo Strauss (whom Thiel praises in this interview in the National Review). Chiefly, Lawler discusses Thiel’s flirtations with transhumanism, particularly in his recently released Zero to One: Notes on Startups, or How to Build the Future, a distilled version of Thiel’s 2012 lecture course on start-ups at Stanford University.

    (The book was prepared with Blake Masters, who had previously made available detailed notes on Thiel’s course. I’ll mention in passing that the tag line on Masters’ website runs as follows: “Your mind is software. Program it. Your body is a shell. Change it. Death is a disease. Cure it. Extinction is approaching. Fight it.”)

    Francis Bacon

    As it turns out, Francis Bacon makes a notable appearance in Thiel’s work. Here is Lawler summarizing that portion of the book:

    “In the chapter entitled ‘You Are Not a Lottery Ticket,’ Thiel writes of Francis Bacon’s modern project, which places “prolongation of life” as the noblest branch of medicine, as well as the main point of the techno-development of science. That prolongation is at the core of the definite optimism that should drive ‘the intelligent design’ at the foundation of technological development. We (especially we founders) should do everything we can “to prioritize design over chance.” We should do everything we can to remove contingency from existence, especially, of course, each of our personal existences.”

    The “intelligent design” in view has nothing to do, so far as I can tell, with the theory of human origins that is the most common referent for that phrase. Rather, it is Thiel’s way of labeling the forces of consciously deployed thought and work striving to bring order out of the chaos of contingency. Intelligent design is how human beings assert control and achieve mastery over their world and their lives, and that is an explicitly Baconian chord to strike.

    Thiel, worried by the technological stagnation he believes has set in over the last forty or so years, is seeking to reanimate the technological project by once again infusing it with an expansive, dare we say mythic, vision of its place in human affairs. It may not be too much of a stretch to say that he is seeking to play the role of Francis Bacon for our age.

    Like Bacon, Thiel is attempting to fuse the disparate strands of emerging technologies together into a coherent narrative of grandiose scale. And his story, like Bacon’s, features distinctly theological undertones. The chief difference may be this: whereas the defining institution of the early modern period was the nation-state, itself a powerful innovation of the period, the defining institution in Thiel’s vision is the start-up. As Lawler puts it, “the startup has replaced the country as the object of the highest human ambition. And that’s the foundation of the future that comes from being ruled by the intelligent designers who are Silicon Valley founders.”

    Lawler is right to conclude that “Peter Thiel has emerged as the most resolute and most imaginative defender of the distinctively modern part of Western civilization.” Bacon was, after all, one of the intellectual founders of modernity, on par, I would say, with the likes of Descartes and Locke. But, Lawler adds,

    “that doesn’t mean that, when it comes to the libertarian displacement of the nation by the startup and the abolition of all contingency from particular personal lives, his imagination and his self-importance don’t trump his astuteness. They do. His theology of liberation is that we, made in the image of God, can do for ourselves what the Biblical Creator promised—free ourselves from the misery of being self-conscious mortals dependent on forces beyond our control.”

    And that is, as Lawler notes in his follow-up post, a rather ancient aspiration. Indeed, Thiel, who professes an admittedly heterodox variety of Christianity, may do well to remember that to say we are made in the image of God is one way of saying we are not, the Whole Earth Catalog notwithstanding, gods ourselves. This, it would seem, is a hard lesson to learn.

    _______________________________

    Update: On Twitter, I was made aware of a talk by Thiel at SXSW in 2013 on the topic of the chapter discussed above. Here it is (via @carlamomo).

    [youtube https://www.youtube.com/watch?v=iZM_JmZdqCw?version=3&rel=1&fs=1&showsearch=0&showinfo=1&iv_load_policy=1&wmode=transparent]

    _____

    LM Sacasas (@frailesthing) is a PhD student in the Texts and Technology program at the University of Central Florida. He maintains the blog “The Frailest Thing,” on which this post first appeared. He is the author of the ebook The Tourist and The Pilgrim: Essays on Life and Technology in the Digital Age (Amazon Kindle, 2013).

    Back to the essay

  • Program and Be Programmed

    Program and Be Programmed

    a review of Wendy Chun, Programmed Visions: Software and Memory (MIT Press, 2013)
    by Zachary Loeb
    ~

    Type a letter on a keyboard and the letter appears on the screen, double-click on a program’s icon and it opens, use the mouse in an art program to draw a line and it appears. Yet knowing how to make a program work is not the same as knowing how or why it works. Even a level of skill approaching mastery of a complicated program does not necessarily mean that the user understands how the software works at a programmatic level. This is captured in the canonical distinctions between users and “power users,” on the one hand, and between users and programmers on the other. Whether being a power user or being a programmer gives one meaningful power over machines themselves should be a more open question than injunctions like Douglas Rushkoff’s “program or be programmed” or the general opinion that every child must learn to code appear to allow.

    Sophisticated computer programs give users a fantastical set of abilities and possibilities. But to what extent does this sense of empowerment depend on faith in the unseen and even unknown codes at work in a given program? We press a key on a keyboard and a letter appears on the screen—but do we really know why? These are some of the questions that Wendy Hui Kyong Chun poses in Programmed Visions: Software and Memory, which provides a useful history of early computing alongside a careful analysis of the ways in which computers are used—and use their users—today. Central to Chun’s analysis is her insistence “that a rigorous engagement with software makes new media studies more, rather than less, vapory” (21), and her book succeeds admirably in this regard.

    The central point of Chun’s argument is that computers (and media in general) rely upon a notion of programmability that has become part of the underlying societal logic of neoliberal capitalism. In a society where computers are tied ever more closely to power, Chun argues that canny manipulation of software restores a sense of control or sovereignty to individual users, even as their very reliance upon this software constitutes a type of disempowerment. Computers are the driving force and grounding metaphor behind an ideology that seeks to determine the future—a future that “can be bought and sold” and which “depends on programmable visions that extrapolate the future—or more precisely, a future—based on the past” (9).

    Yet one of the pleasures of contemporary computer usage is that one need not fully understand much of what is going on to be able to enjoy the benefits of the computer. Though we may use computer technology to answer critical questions, this does not necessarily mean we are asking critical questions about computer technology. As Chun explains, echoing Michel Foucault, “software, free or not, is embodied and participates in structures of knowledge-power” (21); users become tangled in these structures once they start using a given device or program. Much of this “knowledge-power” is bound up in the layers of code which make software function; the code is that which gives the machine its directions—that which ensures that the tapping of the letter “r” on the keyboard leads to that letter appearing on the screen. Nevertheless, this code typically goes unseen, especially as it becomes source code, and winds up being buried ever deeper, even though this source code is what “embodies the power of the executive, the power of enforcement” (27). Importantly, the ability to write code, the programmer’s skill, does not in and of itself provide systematic power: there is always “a set of rules that programmers must follow” (28). A sense of power over certain aspects of a computer is still incumbent upon submitting to the control of other elements of the computer.
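    The review’s recurring example, the tapped “r” that appears on screen, can be given concrete form. The sketch below is only an illustration, written against Python’s standard tkinter toolkit rather than drawn from Chun’s book: the handful of lines a programmer writes and “controls” simply hands the keystroke back to layers (event loop, widget toolkit, window system, driver, firmware) that never appear in the source.

        # An illustrative sketch: the visible "rule" is a few lines long; everything
        # between the keystroke and the glyph on screen happens in code we never see.
        import tkinter as tk

        root = tk.Tk()
        label = tk.Label(root, text="", font=("Courier", 32))
        label.pack(padx=40, pady=40)

        def on_key(event):
            # The only behavior this program adds: echo the pressed character.
            label.config(text=event.char)

        root.bind("<Key>", on_key)  # everything invoked beyond this point is someone else's code
        root.mainloop()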

    Contemporary computers, and our many computer-esque devices (such as smart phones and tablets), are the primary sites in which most of us encounter the codes and programming about which Chun writes, but she takes pains to introduce the reader to the history of programming. For it is against the historical backdrop of military research during the Second World War that one can clearly see the ways in which notions of control, the unquestioning following of orders, and hierarchies have long been at work within computation and programming. Beyond providing an enlightening aside into the vital role that women played in programming history, analyzing the early history of computing demonstrates how structured programming emerged as a means of cutting down on repetitive work, an approach that “limits the logical procedures coders can use, and insists that the program consist of small modular units, which can be called from the main program” (36). Gradually this emphasis on structured programming allows for more and more processes to be left to the machine, and thus processes and codes become hidden from view even as future programmers are taught to conform to the demands that will allow new programs to successfully make use of these earlier ones. The processes that were once the result of expertise thus come to be assumed aspects of the software—they become automated—and it is this very automation (“automatic programming”) that “allows the production of computer-enabled human-readable code” (41).
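    A schematic example may help fix the idea of “small modular units, which can be called from the main program.” The sketch below is entirely illustrative (not drawn from Chun or from any historical system); what matters for her argument is that the main routine no longer shows how any of the work it names is actually carried out:

        # Structured programming in miniature: repetitive work is folded into small
        # units, and the "main program" merely sequences calls to them.

        def read_record(line):
            """Small unit: parse one comma-separated line of input."""
            name, value = line.split(",")
            return name.strip(), float(value)

        def total(records):
            """Small unit: sum the parsed values."""
            return sum(value for _, value in records)

        def main(lines):
            """The main program only names the steps; the detail lives elsewhere."""
            records = [read_record(line) for line in lines]
            return total(records)

        print(main(["rent, 900", "food, 250.5"]))  # 1150.5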

    As the codes and programs become hidden by ever more layers of abstraction, the computer simultaneously and paradoxically appears to make more of itself visible (through graphic user interfaces, for example), while the code itself recedes ever further into the background. This transition is central to the computer’s rapid expansion into ever more societal spheres, and it is an expansion that Chun links to the influence of neoliberal ideology. The computer with its easy-to-use interfaces creates users who feel as though they are free and empowered to manipulate the machine even as they rely on the codes and programs that they do not see. Freedom to act becomes couched in code that predetermines the range and type of actions that the users are actually free to take. What transpires, as Chun writes, is that “interfaces and operating systems produce ‘users’—one and all” (67).

    Without fully comprehending the codes that lead from a given action (a user presses a button) to a given result, the user is positioned to believe ever more in the power of the software/hardware hybrid, especially as increased storage capabilities allow for computers to access vast informational troves. In so doing, the technologically-empowered user has been conditioned to expect a programmable world akin to the programmed devices they use to navigate that world—it has “fostered our belief in the world as neoliberal: as an economic game that follows certain rules” (92). And this takes place whether or not we understand who wrote those rules, or how they can be altered.

    This logic of programmability may be linked to inorganic machines, but Chun also demonstrates the ways in which this logic has been applied to the organic world as well. In truth, the idea that the organic can be programmed predates the computer; as Chun explains “breeding encapsulates an early logic of programmability… Eugenics, in other words, was not simply a factor driving the development of high-speed mass calculation at the level of content… but also at the level of operationality” (124). In considering the idea that the organic can be programmed, what emerges is a sense of the way that programming has long been associated with a certain will to exert control over things be they organic or inorganic. Far from being a digression, Chun’s discussion of eugenics provides for a fascinating historic comparison given the way in which its decline in acceptance seems to dovetail with the steady ascendance of the programmable machine.

    The intersection of software and memory (or “software as memory”) is an essential matter to consider given the informational explosion that has occurred with the spread of computers. Yet, as Chun writes eloquently: “information is ‘undead’; neither alive nor dead, neither quite present nor absent” (134), since computers simultaneously promise to make ever more information available while making the future of much of this information precarious (insofar as access may rely upon software and hardware that no longer functions). Chun elucidates the ways in which the shift from analog to digital has permitted a wider number of users to enjoy the benefits of computers while this shift has likewise made much that goes on inside a computer (software and hardware) less transparent. While the machine’s memory may seem ephemeral and (to humans) illegible, accessing information in “storage” involves codes that read by re-writing elsewhere. This “battle of diligence between the passing and the repetitive” characterizing machine memory, Chun argues, “also characterizes content today” (170). Users rely upon a belief that the information they seek will be available and that they will be able to call upon it with a few simple actions, even though they do not see (and usually cannot see) the processes that make this information present and which do or do not allow it to be presented.

    When people make use of computers today they find themselves looking—quite literally—at what the software presents to them, yet in allowing this act of seeing the programming also has determined much of what the user does not see. Programmed Visions is an argument for recognizing that sometimes the power structures that most shape our lives go unseen—even if we are staring right at them.

    * * *

    With Programmed Visions, Chun has crafted a nuanced, insightful, and dense, if highly readable, contribution to discussions about technology, media, and the digital humanities. It is a book that demonstrates Chun’s impressive command of a variety of topics and the way in which she can engagingly shift from history to philosophy to explanations of a more technical sort. Throughout the book Chun deftly draws upon a range of classic and contemporary thinkers, whilst raising and framing new questions and lines of inquiry even as she seeks to provide answers on many other topics.

    Though peppered with many wonderful turns of phrase, Programmed Visions remains a challenging book. All readers of Programmed Visions will come to it with their own background and knowledge of coding, programming, software, and so forth, but the simple truth is that Chun’s point (that many people do not understand software sufficiently) may make many a reader feel somewhat taken aback. For most computer users—even many programmers and many whose research involves the study of technology and media—are quite complicit in the situation that Chun describes. It is the sort of discomforting confrontation that is valuable precisely because of the anxiety it provokes. Most users take for granted that the software will work the way they expect it to—hence the frustration bordering on fury that many people experience when the machine suddenly does something other than what is expected, provoking a maddened outburst of “why aren’t you working!” What Chun helps demonstrate is that it is not so much that the machines betray us, but that we were mistaken in thinking that machines ever really obeyed us.

    It will be easy for many readers to see themselves as the user that Chun describes—as someone positioned to feel empowered by the devices they use, even as that power depends upon faith in forces the user cannot see, understand, or control. Even power users and programmers, on careful self-reflection, may identify with Chun’s relocation of the programmer from a position of authority to a role wherein they too must comply with the strictures of the code; this relocation presents an important argument for thinking about such labor. Furthermore, the way in which Chun links the power of the machine to the overarching ideology of neoliberalism makes her argument useful for discussions broader than those in media studies and the digital humanities. What makes these arguments particularly interesting is the way in which Chun locates them within thinking about software. As she writes towards the end of the second chapter, “this chapter is not a call to return to an age when one could see and comprehend the actions of our computers. Those days are long gone… Neither is this chapter an indictment of software or programming… It is, however, an argument against common-sense notions of software precisely because of their status as common sense” (92). Such a statement refuses to provide the anxious reader (who has come to see themselves as an uninformed user) with a clear answer, for it suggests that the “common-sense” clear answer is part of what has disempowered them.

    The weaving together of historical details regarding computers during World War II and eugenics provides an excellent and challenging atmosphere in which Chun’s arguments regarding programmability can grow. Chun lucidly describes the embodiment and materiality of information and obsolescence that serve as major challenges confronting those who seek to manage and understand the massive informational flux that computer technology has enabled. The idea of information as “undead” is both amusing and evocative as it provides for a rich way of describing the “there but not there” of information, while simultaneously playing upon the slight horror and uneasiness that seems to be lurking below the surface in the confrontation with information.

    As Chun sets herself the difficult task of exploring many areas, there are some topics where the reader may be left wanting more. The section on eugenics presents a troubling and fascinating argument—one which could likely have been a book in and of itself—especially when considered in the context of arguments about cyborg selves and post-humanity, and it is a section that almost seems to have been cut short. Likewise the discussion of race (“a thread that has been largely invisible yet central,” 179), which is brought to the fore in the epilogue, confronts the reader with something that seems like it could in fact be the introduction for another book. It leaves the reader with much to contemplate—though it is the fact that this thread was not truly “largely invisible” that makes the reader, upon reaching the epilogue, wish that the book could have dealt with that matter at greater length. Yet these are fairly minor concerns—that Programmed Visions leaves its readers re-reading sections to process them in light of later points is a credit to the text.

    Programmed Visions: Software and Memory is an alternatively troubling, enlightening, and fascinating book. It allows its reader to look at software and hardware in a new way, with a fresh insight about this act of sight. It is a book that plants a question (or perhaps subtly programs one into the reader’s mind): what are you not seeing, what power relations remain invisible, between the moment during which the “?” is hit on the keyboard and the moment it appears on the screen?


    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communication department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian,” Loeb writes at the blog librarianshipwreck. He has previously reviewed The People’s Platform by Astra Taylor and Social Media: A Critical Introduction by Christian Fuchs for boundary2.org.

    Back to the essay