b2o

  • Flat Theory

    Flat Theory

    By David M. Berry
    ~

    The world is flat.[1] Or perhaps better, the world is increasingly “layers.” Certainly the augmediated imaginaries of the major technology companies are now structured around a post-retina vision of mediation made possible and informed by the digital transformations ushered in by mobile technologies – whether smartphones, wearables, beacons or nearables – an internet of places and things. These imaginaries provide a sense of place, as well as a sense of management, of the complex real-time streams of information and data broken into shards and fragments of narrative, visual culture, social media and messaging. Turned into software, they reorder and re-present information, decisions and judgment, amplifying the sense and senses of (neoliberal) individuality whilst reconfiguring what it means to be a node in the network of postdigital capitalism. These new imaginaries serve as abstractions of abstractions, ideologies of ideologies, a prosthesis to create a sense of coherence and intelligibility in highly particulate computational capitalism (Berry 2014). To explore the experimentation of the programming industries in relation to this, it is useful to examine the design thinking and material abstractions that are becoming hegemonic at the level of the interface.

    Two new competing computational interface paradigms are now deployed in the latest versions of Apple’s and Google’s operating systems, but more notably as regulatory structures to guide the design and strategy related to corporate policy. The first is “flat design,” which has been introduced by Apple through iOS 8 and OS X Yosemite as a refresh of the aging operating systems’ human computer interface guidelines, essentially stripping the operating system of historical baggage related to techniques of design that disguised the limitations of a previous generation of technology, in terms of both screen and processor capacity. It is important to note, however, that Apple avoids talking about “flat design” as its design methodology, preferring to talk through its platforms’ specificity, that is, about iOS’s design or OS X’s design. The second is “material design,” which was introduced by Google into its Android L, now Lollipop, operating system and which also sought to bring some sense of coherence to a multiplicity of Android devices, interfaces, OEMs and design strategies. More generally, “flat design” is “the term given to the style of design in which elements lose any type of stylistic characters that make them appear as though they lift off the page” (Turner 2014). As Apple argues, one should “reconsider visual indicators of physicality and realism” and think of the user interface as “play[ing] a supporting role” – that is, techniques of mediation through the user interface should aim to provide a new kind of computational realism that presents “content” as ontologically prior to, or separate from, its container in the interface (Apple 2014). This is in contrast to “rich design,” which has been described as “adding design ornaments such as bevels, reflections, drop shadows, and gradients” (Turner 2014).

    I want to explore these two main paradigms – and to a lesser extent the flat-design methodology represented in Windows 8 and the, since renamed, Metro interface – through a notion of a comprehensive attempt by both Apple and Google to produce a rich and diverse umwelt, or ecology, linked through what Apple calls “aesthetic integrity” (Apple 2014). This is both a response to their growing landscape of devices, platforms, systems, apps and policies, and an attempt to provide some sense of operational strategy in relation to computational imaginaries. Essentially, both approaches share an axiomatic approach to conceptualizing the building of a system of thought – in other words, a primitivist predisposition which draws from both a neo-Euclidean model of geons (for Apple) and a notion of intrinsic value or neo-materialist formulations of essential characteristics (for Google). That is, they encapsulate a version of what I am calling here flat theory. Both of these companies are trying to deal with the problematic of multiplicities in computation, and the requirement that multiple data streams, notifications and practices have to be combined and managed within the limited geography of the screen. In other words, both approaches attempt to create what we might call aggregate interfaces by combining techniques of layout, montage and collage onto computational surfaces (Berry 2014: 70).

    The “flat turn” has not happened in a vacuum, however, and is the result of a new generation of computational hardware, smart silicon design and retina screen technologies. This was driven in large part by the mobile device revolution, which has transformed not only the taken-for-granted assumptions of historical computer interface design paradigms (e.g. WIMP) but also the subject position of the user, particularly structured through the Xerox/Apple notion of single-click functional design of the interface. Indeed, one of the striking features of the new paradigm of flat design is that it is a design philosophy about multiplicity and multi-event. The flat turn is therefore about modulation, not about enclosure as such; indeed, it is a truly processual form that constantly shifts and changes, and in many ways acts as a signpost for the future interfaces of real-time algorithmic and adaptive surfaces and experiences. The structure of control for flat design interfaces follows that of the control society: it is “short-term and [with] rapid rates of turnover, but also continuous and without limit” (Deleuze 1992). To paraphrase Deleuze: Humans are no longer in enclosures, certainly, but everywhere humans are in layers.


    Apple uses a series of concepts to link its notion of flat design, including aesthetic integrity, consistency, direct manipulation, feedback, metaphors, and user control (Apple 2014). The haptic experience of this new flat user interface has been described as building on the experience of “touching glass” to develop the “first post-Retina (Display) UI (user interface)” (Cava 2013). This is the notion of layered transparency, or better, layers of glass upon which the interface elements are painted through a logical internal structure of Z-axis layers. This laminate structure enables meaning to be conveyed through the organization of the Z-axis, both in terms of content, but also to place content within a process or the user interface system itself.

    Google, similarly, has reorganized its computational imaginary around a flattened, layered paradigm of representation through the notion of material design. Matias Duarte, Google’s Vice President of Design and a Chilean computer interface designer, declared that this approach uses the notion that it “is a sufficiently advanced form of paper as to be indistinguishable from magic” (Bohn 2014). But this is magic with constraints and affordances built into it: “if there were no constraints, it’s not design — it’s art,” Google claims (see Interactive Material Design) (Bohn 2014). Indeed, Google argues that the “material metaphor is the unifying theory of a rationalized space and a system of motion”, further arguing:

    The fundamentals of light, surface, and movement are key to conveying how objects move, interact, and exist in space and in relation to each other. Realistic lighting shows seams, divides space, and indicates moving parts… Motion respects and reinforces the user as the prime mover… [and together] They create hierarchy, meaning, and focus (Google 2014).

    This notion of materiality is a weird materiality in as much as Google “steadfastly refuse to name the new fictional material, a decision that simultaneously gives them more flexibility and adds a level of metaphysical mysticism to the substance. That’s also important because while this material follows some physical rules, it doesn’t create the ‘trap’ of skeuomorphism. The material isn’t a one-to-one imitation of physical paper, but instead it’s ‘magical’” (Bohn 2014). Google emphasises this connection, arguing that “in material design, every pixel drawn by an application resides on a sheet of paper. Paper has a flat background color and can be sized to serve a variety of purposes. A typical layout is composed of multiple sheets of paper” (Google Layout, 2014). The stress on material affordances – paper for Google and glass for Apple – is crucial to understanding their respective stances in relation to flat design philosophy.[2]

    • Glass (Apple): Translucency, transparency, opaqueness, limpidity and pellucidity.
    • Paper (Google): Opaque, cards, slides, surfaces, tangibility, texture, lighted, casting shadows.
    Paradigmatic Substances for Materiality

    In contrast to the layers of glass that inform the logics of transparency, opaqueness and translucency of Apple’s flat design, Google uses the notion of remediated “paper” as a digital material, that is, this “material environment is a 3D space, which means all objects have x, y, and z dimensions. The z-axis is perpendicularly aligned to the plane of the display, with the positive z-axis extending towards the viewer. Every sheet of material occupies a single position along the z-axis and has a standard 1dp thickness” (Google 2014). One might think, then, of Apple as painting on layers of glass, and of Google as placing thin paper objects (material) upon background paper. However, a key difference lies in Google’s use of light and shadow, which enables the light source, located in a similar position to the user of the interface, to cast shadows of the material objects onto the objects and sheets of paper that lie beneath them (see Jitkoff 2014). Nonetheless, a laminate structure is key to the representational grammar that constitutes both of these platforms.

    Armin Hofmann, head of the graphic design department at the Schule für Gestaltung Basel (Basel School of Design), was instrumental in developing the graphic design style known as the Swiss Style. Designs from 1958 and 1959.

    Interestingly, both design strategies emerge from an engagement with, and reconfiguration of, principles of design that draw from the Swiss style (sometimes called the International Typographic Style) (Ashghar 2014, Turner 2014).[3] This approach emerged in the 1940s, and

    mainly focused on the use of grids, sans-serif typography, and clean hierarchy of content and layout. During the 40’s and 50’s, Swiss design often included a combination of a very large photograph with simple and minimal typography (Turner 2014).

    The design grammar of the Swiss style has been combined with minimalism and the principle of “responsive design,” that is, that the interface should respond to the materiality and specificity of the device and the context in which it is displayed. Minimalism is a “term used in the 20th century, in particular from the 1960s, to describe a style characterized by an impersonal austerity, plain geometric configurations and industrially processed materials” (MoMA 2014).

    Robert Morris: Untitled (Scatter Piece), 1968-69, felt, steel, lead, zinc, copper, aluminum, brass, dimensions variable; at Leo Castelli Gallery, New York. Photo Genevieve Hanson. All works © 2010 Robert Morris/Artists Rights Society (ARS), New York.

    Robert Morris, one of the principal artists of Minimalism and author of the influential “Notes on Sculpture,” used “simple, regular and irregular polyhedrons.” Influenced by theories in psychology and phenomenology, he argued that these “established in the mind of the beholder ‘strong gestalt sensation’, whereby form and shape could be grasped intuitively” (MoMA 2014).[4]

    The implications of these two competing world-views are far-reaching, in that much of the world’s initial contact, or touch points, for data services, real-time streams and computational power is increasingly through the platforms controlled by these two companies. However, they are also deeply influential across the programming industries, and we see alternatives and multiple reconfigurations in relation to the challenge raised by the “flattened” design paradigms. That is, they both represent, if only in potentia, a power relation and, through this, an ideological veneer on computation more generally. Further, with the proliferation of computational devices – and the screenic imaginary associated with them in the contemporary computational condition – there appears a new logic which lies behind, justifies and legitimates these design methodologies.

    It seems to me that these new flat design philosophies, in the broad sense, produce an order of precepts and concepts that gives meaning and purpose not only to interactions with computational platforms, but also more widely to everyday life. Flat design and material design are competing philosophies that offer alternative patterns of both creation and interpretation, with implications not only for interface design but more broadly for the ordering of concepts and ideas, and for the practices and experience of computational technologies broadly conceived. Another way to put this could be to think about these moves as a computational founding: the generation of, or argument for, an axial framework for building, reconfiguration and preservation.

    Indeed, flat design provides, and more importantly serves as, a translational or metaphorical heuristic for re-presenting the computational, but it also teaches consumers and users how to use and manipulate new complex computational systems and stacks. In other words, in a striking visual technique flat design communicates the vertical structure of the computational stack, on which the Stack corporations are themselves constituted. It also begins to move beyond the specificity of the device as the privileged site of a computational interface interaction from beginning to end. Interface techniques are abstracted away from the specificity of the device, for example through Apple’s “handoff” continuity framework, which also potentially changes reading and writing practices in interesting ways and opens new use-cases for wearables and nearables.

    These new interface paradigms, introduced by the flat turn, have very interesting possibilities for the application of interface criticism, through unpacking and exploring the major trends and practices of the Stacks, that is, the major technology companies. Further than this, I think the notion of layers is instrumental in mediating the experience of an increasingly algorithmic society (e.g. think dashboards, personal information systems, quantified self, etc.), and as such provides an interpretative frame for a world of computational patterns but also a constituting grammar for building these systems in the first place. The notion of the postdigital may also be a useful way into thinking about the question of the link between art, computation and design given here (see Berry and Dieter, forthcoming), as may the importance of notions of materiality for the conceptualization deployed by designers working within both the flat design and material design paradigms – whether of paper, glass, or some other “material” substance.[5]
    _____

    David M. Berry is Reader in the School of Media, Film and Music at the University of Sussex. He writes widely on computation and the digital and blogs at Stunlaw. He is the author of Critical Theory and the Digital, The Philosophy of Software: Code and Mediation in the Digital Age, Copy, Rip, Burn: The Politics of Copyleft and Open Source, editor of Understanding Digital Humanities and co-editor of Postdigital Aesthetics: Art, Computation And Design. He is also a Director of the Sussex Humanities Lab.

    Back to the essay
    _____

    Notes

    [1] Many thanks to Michael Dieter and Søren Pold for the discussion which inspired this post.

    [2] The choice of paper and glass as the founding metaphors for the flat design philosophies of Google and Apple raise interesting questions for the way in which these companies articulate the remediation of other media forms, such as books, magazines, newspapers, music, television and film, etc. Indeed, the very idea of “publication” and the material carrier for the notion of publication is informed by the materiality, even if only a notional affordance given by this conceptualization. It would be interesting to see how the book is remediated through each of the design philosophies that inform both companies, for example.

    [3] One is struck by the posters produced in the Swiss style which date to the 1950s and 60s but which today remind one of the mobile device screens of the 21st Century.

    [4] There is also some interesting links to be explored between the Superflat style and postmodern art movement, founded by the artist Takashi Murakami, which is influenced by manga and anime, both in terms of the aesthetic but also in relation to the cultural moment in which “flatness” is linked to “shallow emptiness.”

    [5] There is some interesting work to be done in thinking about the non-visual aspects of flat theory, such as the increasing use of APIs, such as the RESTful api, but also sound interfaces that use “flat” sound to indicate spatiality in terms of interface or interaction design. There are also interesting implications for the design thinking implicit in the Apple Watch, and the Virtual Reality and Augmented Reality platforms of Oculus Rift, Microsoft HoloLens, Meta and Magic Leap.

    Bibliography
  • The Reticular Fallacy

    The Reticular Fallacy

    By Alexander R. Galloway
    ~

    We live in an age of heterogeneous anarchism. Contingency is king. Fluidity and flux win over solidity and stasis. Becoming has replaced being. Rhizomes are better than trees. To be political today, one must laud horizontality. Anti-essentialism and anti-foundationalism are the order of the day. Call it “vulgar ’68-ism.” The principles of social upheaval, so associated with the new social movements in and around 1968, have succeeded in becoming the very bedrock of society at the new millennium.

    But there’s a flaw in this narrative, or at least a part of the story that strategically remains untold. The “reticular fallacy” can be broken down into two key assumptions. The first is an assumption about the nature of sovereignty and power. The second is an assumption about history and historical change. Consider them both in turn.

    (1) First, under the reticular fallacy, sovereignty and power are defined in terms of verticality, centralization, essence, foundation, or rigid creeds of whatever kind (viz. dogma, be it sacred or secular). Thus the sovereign is the one who is centralized, who stands at the top of a vertical order of command, who rests on an essentialist ideology in order to retain command, who asserts, dogmatically, unchangeable facts about his own essence and the essence of nature. This is the model of kings and queens, but also egos and individuals. It is what Barthes means by author in his influential essay “The Death of the Author,” or Foucault in his “What is an Author?” This is the model of the Prince, so often invoked in political theory, or the Father invoked in psycho-analytic theory. In Derrida, the model appears as logos, that is, the special way or order of word, speech, and reason. Likewise, arkhe: a term that means both beginning and command. The arkhe is the thing that begins, and in so doing issues an order or command to guide whatever issues from such a beginning. Or as Rancière so succinctly put it in his Hatred of Democracy, the arkhe is both “commandment and commencement.” These are some of the many aspects of sovereignty and power as defined in terms of verticality, centralization, essence, and foundation.

    (2) The second assumption of the reticular fallacy is that, given the elimination of such dogmatic verticality, there will follow an elimination of sovereignty as such. In other words, if the aforementioned sovereign power should crumble or fall, for whatever reason, the very nature of command and organization will also vanish. Under this second assumption, the structure of sovereignty and the structure of organization become coterminous, superimposed in such a way that the shape of organization assumes the identical shape of sovereignty. Sovereign power is vertical, hence organization is vertical; sovereign power is centralized, hence organization is centralized; sovereign power is essentialist, hence organization is essentialist; and so on. Here we see the claims of, let’s call it, “naïve” anarchism (the non-arkhe, or non-foundation), which assumes that repressive force lies in the hands of the bosses, the rulers, or the hierarchy per se, and thus after the elimination of such hierarchy, life will revert to a more direct form of social interaction. (I say this not to smear anarchism in general, and will often wish to defend a form of anarcho-syndicalism.) At the same time, consider the case of bourgeois liberalism, which asserts the rule of law and constitutional right as a way to mitigate the excesses of both royal fiat and popular caprice.

    reticular connective tissue
    source: imgkid.com

    We name this the “reticular” fallacy because, during the late Twentieth Century and accelerating at the turn of the millennium with new media technologies, the chief agent driving the kind of historical change described in the above two assumptions was the network or rhizome, the structure of horizontal distribution described so well in Deleuze and Guattari. The change is evident in many different corners of society and culture. Consider mass media: the uni-directional broadcast media of the 1920s or ’30s gradually gave way to multi-directional distributed media of the 1990s. Or consider the mode of production, and the shift from a Fordist model rooted in massification, centralization, and standardization, to a post-Fordist model reliant more on horizontality, distribution, and heterogeneous customization. Consider even the changes in theories of the subject, shifting as they have from a more essentialist model of the integral ego, however fraught by the volatility of the unconscious, to an anti-essentialist model of the distributed subject, be it postmodernism’s “schizophrenic” subject or the kind of networked brain described by today’s most advanced medical researchers.

    Why is this a fallacy? What is wrong about the above scenario? The problem isn’t so much with the historical narrative. The problem lies in an unwillingness to derive an alternative form of sovereignty appropriate for the new rhizomatic societies. Opponents of the reticular fallacy claim, in other words, that horizontality, distributed networks, anti-essentialism, etc., have their own forms of organization and control, and indeed should be analyzed accordingly. In the past I’ve used the concept of “protocol” to describe such a scenario as it exists in digital media infrastructure. Others have used different concepts to describe it in different contexts. On the whole, though, opponents of the reticular fallacy have not effectively made their case, myself included. The notion that rhizomatic structures are corrosive of power and sovereignty is still the dominant narrative today, evident across both popular and academic discourses. From talk of the “Twitter revolution” during the Arab Spring, to the ideologies of “disruption” and “flexibility” common in corporate management speak, to the putative egalitarianism of blog-based journalism, to the growing popularity of the Deleuzian and Latourian schools in philosophy and theory: all of these reveal the contemporary assumption that networks are somehow different from sovereignty, organization, and control.

    To summarize, the reticular fallacy refers to the following argument: since power and organization are defined in terms of verticality, centralization, essence, and foundation, the elimination of such things will prompt a general mollification if not elimination of power and organization as such. Such an argument is false because it doesn’t take into account the fact that power and organization may inhabit any number of structural forms. Centralized verticality is only one form of organization. The distributed network is simply a different form of organization, one with its own special brand of management and control.

    Consider the kind of methods and concepts still popular in critical theory today: contingency, heterogeneity, anti-essentialism, anti-foundationalism, anarchism, chaos, plasticity, flux, fluidity, horizontality, flexibility. Such concepts are often praised and deployed in theories of the subject, analyses of society and culture, even descriptions of ontology and metaphysics. The reticular fallacy does not invalidate such concepts. But it does put them in question. We cannot assume that such concepts are merely descriptive or neutrally empirical. Given the way in which horizontality, flexibility, and contingency are sewn into the mode of production, such “descriptive” claims are at best mirrors of the economic infrastructure and at worst ideologically suspect. At the same time, we cannot simply assume that such concepts are, by nature, politically or ethically desirable in themselves. Rather, we ought to reverse the line of inquiry. The many qualities of rhizomatic systems should be understood not as the pure and innocent laws of a newer and more just society, but as the basic tendencies and conventional rules of protocological control.


    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2004), Gaming: Essays in Algorithmic Culture (University of Minnesota, 2006); The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here earlier in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.

    Back to the essay

  • Teacher Wars and Teaching Machines

    Teacher Wars and Teaching Machines

    a review of Dana Goldstein, The Teacher Wars: A History of America’s Most Embattled Profession (Doubleday, 2014)
    by Audrey Watters
    ~

    Teaching is, according to the subtitle of education journalist Dana Goldstein’s new book, “America’s Most Embattled Profession.” “No other profession,” she argues, “operates under this level of political scrutiny, not even those, like policing or social work, that are also tasked with public welfare and are paid for with public funds.”

    That political scrutiny is not new. Goldstein’s book The Teacher Wars chronicles the history of teaching at (what has become) the K–12 level, from the early nineteenth century and “common schools” — that is, before compulsory education and public school as we know it today — through the latest Obama Administration education policies. It’s an incredibly well-researched book that moves from the feminization of the teaching profession to the recent push for more data-driven teacher evaluation, observing how all along the way, teachers have been deemed ineffectual in some way or another — failing to fulfill whatever (political) goals the public education system has demanded be met, be those goals economic, civic, or academic.

    As Goldstein describes it, public education is a labor issue; and it has been, it’s important to note, since well before the advent of teacher unions.

    The Teacher Wars and Teaching Machines

    To frame education this way — around teachers and by extension, around labor — has important implications for ed-tech. What happens if we examine the history of teaching alongside the history of teaching machines? As I’ve argued before, the history of public education in the US, particularly in the 20th century, is deeply intertwined with various education technologies – film, TV, radio, computers, the Internet – devices that are often promoted as improving access or as making an outmoded system more “modern.” But ed-tech is frequently touted too as “labor-saving” and as a corrective to teachers’ inadequacies and inefficiencies.

    It’s hardly surprising, in this light, that teachers have long looked with suspicion at new education technologies. With their profession constantly under attack, many teachers are no doubt worried that new tools are poised to replace them. Much is said to quiet these fears, with education reformers and technologists insisting again and again that replacing teachers with tech is not the intention.

    And yet the sentiment of science fiction writer Arthur C. Clarke probably does resonate with a lot of people, as a line from his 1980 Omni Magazine article on computer-assisted instruction is echoed by all sorts of pundits and politicians: “Any teacher who can be replaced by a machine should be.”

    Of course, you do find people like former Washington DC mayor Adrian Fenty – best known arguably via his school chancellor Michelle Rhee – who’ll come right out and say to a crowd of entrepreneurs and investors, “If we fire more teachers, we can use that money for more technology.”

    So it’s hard to ignore the role that technology increasingly plays in contemporary education (labor) policies – as Goldstein describes them, the weakening of teachers’ tenure protections alongside an expansion of standardized testing to measure “student learning,” all in the service of finding and firing “bad teachers.” The growing data collection and analysis enabled by schools’ adoption of ed-tech feeds into the politics and practices of employee surveillance.

    Just as Goldstein discovered in the course of writing her book that the current “teacher wars” have a lengthy history, so too does ed-tech’s role in the fight.

    As Sidney Pressey, the man often credited with developing the first teaching machine, wrote in 1933 (from a period Goldstein links to “patriotic moral panics” and concerns about teachers’ political leanings),

    There must be an “industrial revolution” in education, in which educational science and the ingenuity of educational technology combine to modernize the grossly inefficient and clumsy procedures of conventional education. Work in the schools of the future will be marvelously though simply organized, so as to adjust almost automatically to individual differences and the characteristics of the learning process. There will be many labor-saving schemes and devices, and even machines — not at all for the mechanizing of education but for the freeing of teacher and pupil from the educational drudgery and incompetence.

    Or as B. F. Skinner, the man most associated with the development of teaching machines, wrote in 1953 (one year before the landmark Brown v Board of Education),

    Will machines replace teachers? On the contrary, they are capital equipment to be used by teachers to save time and labor. In assigning certain mechanizable functions to machines, the teacher emerges in his proper role as an indispensable human being. He may teach more students than heretofore — this is probably inevitable if the world-wide demand for education is to be satisfied — but he will do so in fewer hours and with fewer burdensome chores.

    These quotations highlight the longstanding hopes and fears about teaching labor and teaching machines; they hint too at some of the ways in which the work of Pressey and Skinner and others coincides with what Goldstein’s book describes: the ongoing concerns about teachers’ politics and competencies.

    The Drudgery of School

    One of the things that’s striking about Skinner and Pressey’s remarks on teaching machines, I think, is that they recognize the “drudgery” of much of teachers’ work. But rather than fundamentally change school – rather than ask why so much of the job of teaching entails “burdensome chores” – education technology seems more likely to offload that drudgery to machines. (One of the best contemporary examples of this perhaps: automated essay grading.)

    This has powerful implications for students, who – let’s be honest – suffer through this drudgery as well.

    Goldstein’s book doesn’t really address students’ experiences. Her history of public education is focused on teacher labor more than on student learning. As a result, student labor is missing from her analysis. This isn’t a criticism of the book; and it’s not just Goldstein that does this. Student labor in the history of public education remains largely under-theorized and certainly underrepresented. Cue AFT president Al Shanker’s famous statement: “Listen, I don’t represent children. I represent the teachers.”

    But this question of student labor seems to be incredibly important to consider, particularly with the growing adoption of education technologies. Students’ labor – students’ test results, students’ content, students’ data – feeds the measurements used to reward or punish teachers. Students’ labor feeds the algorithms – algorithms that further this larger narrative about teacher inadequacies, sure, and that serve to financially benefit technology, testing, and textbook companies, the makers of today’s “teaching machines.”

    Teaching Machines and the Future of Collective Action

    The promise of teaching machines has long been to allow students to move “at their own pace” through the curriculum. “Personalized learning,” it’s often called today (although the phrase often refers only to “personalization” in terms of the pace, not in terms of the topics of inquiry). This means, supposedly, that instead of whole class instruction, the “work” of teaching changes: in the words of one education reformer, “with the software taking up chores like grading math quizzes and flagging bad grammar, teachers are freed to do what they do best: guide, engage, and inspire.”

    Again, it’s not clear how this changes the work of students.

So what are the implications – not just pedagogically but politically – of students, headphones on, staring at their individual computer screens, working alone through various exercises? Because let’s remember: teaching machines and all education technologies are ideological. What are the implications – not just pedagogically but politically – of these technologies’ emphasis on individualism, self-management, personal responsibility, and autonomy?

    What happens to discussion and debate, for example, in a classroom of teaching machines and “personalized learning”? What happens, in a world of schools catered to individual student achievement, to the community development that schools (at their best, at least) are also tasked to support?

What happens to organizing? What happens to collective action? And by collectivity here, let’s be clear, I don’t mean simply “what happens to teachers’ unions?” If we think about The Teacher Wars and teaching machines side-by-side, we should recognize that our analysis of (and our actions surrounding) the labor issues of school needs to go much deeper and farther than that.

    _____

Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this review first appeared.

    Back to the essay

  • The Man Who Loved His Laptop

    The Man Who Loved His Laptop

a review of Spike Jonze (dir.), Her (2013)
    by Mike Bulajewski
    ~
I’m told by my sister, who is married to a French man, that the French don’t say “I love you”—or at least they don’t say it often. Perhaps they think the words are superfluous, and that the behavior of the person you are in a relationship with tells you everything. Americans, on the other hand, say it to everyone—lovers, spouses, friends, parents, grandparents, children, pets—and as often as possible, as if quantity matters most. The declaration is also an event. For two people beginning a relationship, it marks a turning point and a new stage in the relationship.

    If you aren’t American, you may not have realized that relationships have stages. In America, they do. It’s complicated. First there are the three main thresholds of commitment: Dating, Exclusive Dating, then of course Marriage. There are three lesser pre-Dating stages: Just Talking, Hooking Up and Friends with Benefits; and one minor stage between Dating and Exclusive called Pretty Much Exclusive. Within Dating, there are several minor substages: number of dates (often counted up to the third date) and increments of physical intimacy denoted according to the well-known baseball metaphor of first, second, third and home base.

There are also a number of rituals that indicate progress: updating of Facebook relationship statuses; leaving a toothbrush at each other’s houses; the aforementioned exchange of I-love-you’s; taking a vacation together; meeting the parents; exchange of house keys; and so on. When people, especially unmarried people, talk about relationships, often the first questions are about these stages and rituals. In France the system is apparently much less codified. One convention not present in the United States is that romantic interest is signaled when a man invites a woman to go for a walk with him.

    The point is two-fold: first, although Americans admire and often think of French culture as holding up a standard for what romance ought to be, Americans act nothing like the French in relationships and in fact know very little about how they work in France. Second and more importantly, in American culture love is widely understood as spontaneous and unpredictable, and yet there is also an opposite and often unacknowledged expectation that relationships follow well-defined rules and rituals.

    This contradiction might explain the great public clamor over romance apps like Romantimatic and BroApp that automatically send your significant other romantic messages, either predefined or your own creation, at regular intervals—what philosopher of technology Evan Selinger calls (and not without justification) apps that outsource our humanity.

    Reviewers of these apps were unanimous in their disapproval, disagreeing only on where to locate them on a spectrum between pretty bad and sociopathic. Among all the labor-saving apps and devices, why should this one in particular be singled out for opprobrium?

    Perhaps one reason for the outcry is that they expose an uncomfortable truth about how easily romance can be automated. Something we believe is so intimate is revealed as routine and predictable. What does it say about our relationship needs that the right time to send a loving message to your significant other can be reduced to an algorithm?
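The claim that the timing of affection "can be reduced to an algorithm" is almost literally true. As a purely illustrative sketch — the phrases, interval, and function names below are invented here, not taken from Romantimatic, BroApp, or any actual app — the entire "romantic" decision fits in a single conditional:

```python
import random
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical canned sentiments, standing in for the prewritten
# phrases such apps ship with.
CANNED_PHRASES = [
    "Thinking of you!",
    "I love you.",
    "Can't wait to see you tonight.",
]

def next_message(last_sent: datetime, now: datetime,
                 interval: timedelta = timedelta(hours=8)) -> Optional[str]:
    """Return a canned phrase once the interval has elapsed, else None."""
    if now - last_sent >= interval:
        return random.choice(CANNED_PHRASES)
    return None
```

That the whole of "romance" here is one timer and one list is, in miniature, exactly what unsettled the apps' reviewers.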

    The routinization of American relationships first struck me in the context of this little-known fact about how seldom French people say “I love you.” If you had to launch one of these romance apps in France, it wouldn’t be enough to just translate the prewritten phrases into French. You’d have to research French romantic relationships and discover what are the most common phrases—if there are any—and how frequently text messages are used for this purpose. It’s possible that French people are too unpredictable, or never use text messages for romantic purposes, so the app is just not feasible in France.

Romance is culturally determined. That American romance can be so easily automated reveals how standardized and even scheduled relationships already are. Selinger’s argument that automated romance undermines our humanity has some merit, but why stop with apps? Why not address the problem at a more fundamental level and critique the standardized courtship system that regulates romance? Doesn’t this also outsource our humanity?

The best-selling relationship advice book The 5 Love Languages claims that everyone understands one of five love “languages” and that the key to a happy relationship is for each partner to learn to express love in the correct language. Should we be surprised if the more technically minded among us conclude that the problem of love can be solved with technology? Why not try to determine the precise syntax and semantics of these love languages, and attempt to express them rigorously and unambiguously, in the same way that computer languages and communications protocols are? Can love be reduced to grammar?

    Spike Jonze’s Her (2013) tells the story of Theodore Twombly, a soon-to-be divorced writer who falls in love with Samantha, an AI operating system who far exceeds the abilities of today’s natural language assistants like Apple’s Siri or Microsoft’s Cortana. Samantha is not only hyper-intelligent, she’s also capable of laughter, telling jokes, picking up on subtle unspoken interpersonal cues, feeling and communicating her own emotions, and so on. Theodore falls in love with her, but there is no sense that their relationship is deficient because she’s not human. She is as emotionally expressive as any human partner, at least on film.

Theodore works for a company called BeautifulHandwrittenLetters.com as a professional Cyrano de Bergerac (or perhaps a human Romantimatic), ghostwriting heartfelt “handwritten” letters on behalf of his clients. It’s an ironic twist: Samantha is his simulated girlfriend, a role which he himself adopts at work by simulating the feelings of his clients. The film opens with Theodore at his desk at work, narrating a letter from a wife to her husband on the occasion of their 50th wedding anniversary. He is a master of the conventions of the love letter. Later in the film, his work is discovered by a literary agent, and he gets an offer to have a book of his best work published.

[Video: trailer for Her (2013): https://www.youtube.com/watch?v=CxahbnUCZxY]

But for all his (alleged) expertise as a romantic writer, Theodore is lonely, emotionally stunted, ambivalent towards the women in his life, and—at least before meeting Samantha—apparently incapable of maintaining relationships since he separated from his ex-wife Catherine. Highly sensitive, he is disturbed by encounters with women that go off the script: a phone sex encounter goes awry when the woman demands that he enact her bizarre fantasy of being choked with a dead cat; and on a date one night, the woman exposes a little too much vulnerability and drunkenly expresses her fear that he won’t call her. He abruptly and awkwardly ends the date.

    Theodore wanders aimlessly through the high tech city as if it is empty. With headphones always on, he’s withdrawn, cocooned in a private sonic bubble. He interacts with his device through voice, asking it to play melancholy songs and skipping angry messages from his attorney demanding that he sign the divorce papers already. At times, he daydreams of happier times when he and his ex-wife were together and tells Samantha how much he liked being married. At first it seems that Catherine left him. We wonder if he withdrew from the pain of his heartbreak. But soon a different picture emerges. When they finally meet to sign the divorce papers over lunch, Catherine accuses him of not being able to handle her emotions and reveals that he tried to get her on Prozac. She says to him “I always felt like you wished I could just be a happy, light, everything’s great, bouncy L.A. wife. But that’s not me.”

So Theodore’s avoidance of real challenges and emotions in relationships turns out to be an ongoing problem—the cause, not the consequence, of his divorce. Starting a relationship with his operating system Samantha is his latest retreat from reality—not from physical reality, but from the virtual reality of authentic intersubjective contact.

Unlike his other partners, Samantha is perfectly customized to his needs. She speaks his “love language.” Today we personalize our operating systems and fill out online dating profiles specifying exactly what kind of person we’re looking for. When Theodore installs Samantha on his computer for the first time, the two operations are combined in a single question. The system asks him how he would describe his relationship with his mother. He begins to reply with psychological banalities about how she is insufficiently attuned to his needs, and it quickly stops him, already knowing what he’s about. And so do we.

That Theodore is selfish doesn’t mean that he is unfeeling, unkind, insensitive, conceited or uninterested in his new partner’s thoughts, feelings and goals. His selfishness is the kind that’s approved and even encouraged today, the ethically consistent selfishness that respects the right of others to be equally selfish. What he wants most of all is to be comfortable, to feel good, and that requires a partner who speaks his love language and nothing else, someone who says nothing that would veer off-script and reveal too many disturbing details. More precisely, Theodore wants someone who speaks what Lacan called empty speech: speech that obstructs the revelation of the subject’s traumatic desire.

    Objectification is a traditional problem between men and women. Men reduce women to mere bodies or body parts that exist only for sexual gratification, treating them as sex objects rather than people. The dichotomy is between the physical as the domain of materiality, animality and sex on one hand, and the spiritual realm of subjectivity, personality, agency and the soul on the other. If objectification eliminates the soul, then Theodore engages in something like the opposite, a subjectification which eradicates the body. Samantha is just a personality.

Technology writer Nicholas Carr’s new book The Glass Cage: Automation and Us (Norton, 2014) investigates the ways that automation and artificial intelligence dull our cognitive capacities. Her can be read as a speculative treatment of the same idea as it relates to emotion. What if the difficulty of relationships could be automated away? The film’s brilliant provocation is that it shows us a lonely, hollow world mediated through technology but nonetheless awash in sentimentality. It thwarts our expectations that algorithmically-generated emotion would be as stilted and artificial as today’s speech synthesizers. Samantha’s voice is warm, soulful, relatable and expressive. She’s real, and the feelings she triggers in Theodore are real.

But real feelings with real sensations can also be shallow. As Maria Bustillos notes, Theodore is an awful writer, at least by today’s standards. Here’s the kind of prose that wins him accolades from everyone around him:

    I remember when I first started to fall in love with you like it was last night. Lying naked beside you in that tiny apartment, it suddenly hit me that I was part of this whole larger thing, just like our parents, and our parents’ parents. Before that I was just living my life like I knew everything, and suddenly this bright light hit me and woke me up. That light was you.

In spite of this, we’re led to believe that Theodore is some kind of literary genius. Various people in his life compliment him on his skill, and the editor at the publishing company that wants to publish his work emails to tell him how moved he and his wife were when they read the letters. What kind of society would treat such pedestrian writing as unusual, profound or impressive? And what is the average person’s writing like if Theodore’s services are worth paying for?

    Recall the cult favorite Idiocracy (2006) directed by Mike Judge, a science fiction satire set in a futuristic dystopia where anti-intellectualism is rampant and society has descended into stupidity. We can’t help but conclude that Her offers a glimpse into a society that has undergone a similar devolution into both emotional and literary idiocy.

    _____

    Mike Bulajewski (@mrteacup) is a user experience designer with a Master’s degree from University of Washington’s Human Centered Design and Engineering program. He writes about technology, psychoanalysis, philosophy, design, ideology & Slavoj Žižek at MrTeacup.org, where an earlier version of this review first appeared.

    Back to the essay

  • Program and Be Programmed

    Program and Be Programmed

a review of Wendy Chun, Programmed Visions: Software and Memory (MIT Press, 2013)
    by Zachary Loeb
    ~

    Type a letter on a keyboard and the letter appears on the screen, double-click on a program’s icon and it opens, use the mouse in an art program to draw a line and it appears. Yet knowing how to make a program work is not the same as knowing how or why it works. Even a level of skill approaching mastery of a complicated program does not necessarily mean that the user understands how the software works at a programmatic level. This is captured in the canonical distinctions between users and “power users,” on the one hand, and between users and programmers on the other. Whether being a power user or being a programmer gives one meaningful power over machines themselves should be a more open question than injunctions like Douglas Rushkoff’s “program or be programmed” or the general opinion that every child must learn to code appear to allow.

    Sophisticated computer programs give users a fantastical set of abilities and possibilities. But to what extent does this sense of empowerment depend on faith in the unseen and even unknown codes at work in a given program? We press a key on a keyboard and a letter appears on the screen—but do we really know why? These are some of the questions that Wendy Hui Kyong Chun poses in Programmed Visions: Software and Memory, which provides a useful history of early computing alongside a careful analysis of the ways in which computers are used—and use their users—today. Central to Chun’s analysis is her insistence “that a rigorous engagement with software makes new media studies more, rather than less, vapory” (21), and her book succeeds admirably in this regard.

    The central point of Chun’s argument is that computers (and media in general) rely upon a notion of programmability that has become part of the underlying societal logic of neoliberal capitalism. In a society where computers are tied ever more closely to power, Chun argues that canny manipulation of software restores a sense of control or sovereignty to individual users, even as their very reliance upon this software constitutes a type of disempowerment. Computers are the driving force and grounding metaphor behind an ideology that seeks to determine the future—a future that “can be bought and sold” and which “depends on programmable visions that extrapolate the future—or more precisely, a future—based on the past” (9).

Yet one of the pleasures of contemporary computer usage is that one need not fully understand much of what is going on to be able to enjoy the benefits of the computer. Though we may use computer technology to answer critical questions, this does not necessarily mean we are asking critical questions about computer technology. As Chun explains, echoing Michel Foucault, “software, free or not, is embodied and participates in structures of knowledge-power” (21); users become tangled in these structures once they start using a given device or program. Much of this “knowledge-power” is bound up in the layers of code that make software function; the code is what gives the machine its directions, ensuring that the tapping of the letter “r” on the keyboard leads to that letter appearing on the screen. Nevertheless, this code typically goes unseen, especially as it becomes source code, and winds up being buried ever deeper, even though this source code is what “embodies the power of the executive, the power of enforcement” (27). Importantly, the ability to write code, the programmer’s skill, does not in and of itself provide systematic power: computers follow “a set of rules that programmers must follow” (28). A sense of power over certain aspects of a computer is still contingent upon submitting to the control of other elements of the computer.
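The keypress example can be made concrete with a toy sketch. Every layer below is hypothetical and drastically simplified (real systems involve firmware, interrupt handlers, drivers, compositors, and font rasterizers), but it illustrates the point: several unseen steps separate the tap of “r” from its glyph, and the user experiences only the first and last.

```python
# A deliberately simplified, invented stack of layers between key and glyph.
def hardware_scan(key: str) -> int:
    """The keyboard hardware reports a numeric code for the key."""
    return ord(key)

def driver_translate(code: int) -> str:
    """The operating system's driver maps the code to a character."""
    return chr(code)

def toolkit_dispatch(char: str, screen: list) -> None:
    """The GUI toolkit inserts the character into the visible buffer."""
    screen.append(char)

def press(key: str, screen: list) -> None:
    """What the user experiences as one event: press 'r', see 'r'."""
    toolkit_dispatch(driver_translate(hardware_scan(key)), screen)
```

Calling `press("r", screen)` makes “r” appear in `screen`; the intermediate translations stay hidden, which is the invisibility at issue here.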

Contemporary computers, and our many computer-esque devices (such as smartphones and tablets), are the primary sites in which most of us encounter the codes and programming about which Chun writes, but she goes to some lengths to introduce the reader to the history of programming. For it is against the historical backdrop of military research during the Second World War that one can clearly see the ways in which notions of control, the unquestioning following of orders, and hierarchies have long been at work within computation and programming. Beyond providing an enlightening aside into the vital role that women played in programming history, analyzing the early history of computing demonstrates how structured programming emerged as a means of cutting down on repetitive work: it “limits the logical procedures coders can use, and insists that the program consist of small modular units, which can be called from the main program” (36). Gradually this emphasis on structured programming allows for more and more processes to be left to the machine, and thus processes and codes become hidden from view, even as future programmers are taught to conform to the demands that will allow new programs to successfully make use of these early programs. Therefore the processes that were once a result of expertise come to be assumed aspects of the software—they become automated—and it is this very automation (“automatic programming”) that “allows the production of computer-enabled human-readable code” (41).

    As the codes and programs become hidden by ever more layers of abstraction, the computer simultaneously and paradoxically appears to make more of itself visible (through graphic user interfaces, for example), while the code itself recedes ever further into the background. This transition is central to the computer’s rapid expansion into ever more societal spheres, and it is an expansion that Chun links to the influence of neoliberal ideology. The computer with its easy-to-use interfaces creates users who feel as though they are free and empowered to manipulate the machine even as they rely on the codes and programs that they do not see. Freedom to act becomes couched in code that predetermines the range and type of actions that the users are actually free to take. What transpires, as Chun writes, is that “interfaces and operating systems produce ‘users’—one and all” (67).

    Without fully comprehending the codes that lead from a given action (a user presses a button) to a given result, the user is positioned to believe ever more in the power of the software/hardware hybrid, especially as increased storage capabilities allow for computers to access vast informational troves. In so doing, the technologically-empowered user has been conditioned to expect a programmable world akin to the programmed devices they use to navigate that world—it has “fostered our belief in the world as neoliberal: as an economic game that follows certain rules” (92). And this takes place whether or not we understand who wrote those rules, or how they can be altered.

    This logic of programmability may be linked to inorganic machines, but Chun also demonstrates the ways in which this logic has been applied to the organic world as well. In truth, the idea that the organic can be programmed predates the computer; as Chun explains “breeding encapsulates an early logic of programmability… Eugenics, in other words, was not simply a factor driving the development of high-speed mass calculation at the level of content… but also at the level of operationality” (124). In considering the idea that the organic can be programmed, what emerges is a sense of the way that programming has long been associated with a certain will to exert control over things be they organic or inorganic. Far from being a digression, Chun’s discussion of eugenics provides for a fascinating historic comparison given the way in which its decline in acceptance seems to dovetail with the steady ascendance of the programmable machine.

    The intersection of software and memory (or “software as memory”) is an essential matter to consider given the informational explosion that has occurred with the spread of computers. Yet, as Chun writes eloquently: “information is ‘undead’; neither alive nor dead, neither quite present nor absent” (134), since computers simultaneously promise to make ever more information available while making the future of much of this information precarious (insofar as access may rely upon software and hardware that no longer functions). Chun elucidates the ways in which the shift from analog to digital has permitted a wider number of users to enjoy the benefits of computers while this shift has likewise made much that goes on inside a computer (software and hardware) less transparent. While the machine’s memory may seem ephemeral and (to humans) illegible, accessing information in “storage” involves codes that read by re-writing elsewhere. This “battle of diligence between the passing and the repetitive” characterizing machine memory, Chun argues, “also characterizes content today” (170). Users rely upon a belief that the information they seek will be available and that they will be able to call upon it with a few simple actions, even though they do not see (and usually cannot see) the processes that make this information present and which do or do not allow it to be presented.

When people make use of computers today they find themselves looking—quite literally—at what the software presents to them; yet in allowing this act of seeing, the programming has also determined much of what the user does not see. Programmed Visions is an argument for recognizing that sometimes the power structures that most shape our lives go unseen—even if we are staring right at them.

    * * *

    With Programmed Visions, Chun has crafted a nuanced, insightful, and dense, if highly readable, contribution to discussions about technology, media, and the digital humanities. It is a book that demonstrates Chun’s impressive command of a variety of topics and the way in which she can engagingly shift from history to philosophy to explanations of a more technical sort. Throughout the book Chun deftly draws upon a range of classic and contemporary thinkers, whilst raising and framing new questions and lines of inquiry even as she seeks to provide answers on many other topics.

Though peppered with many wonderful turns of phrase, Programmed Visions remains a challenging book. While all readers of Programmed Visions will come to it with their own background and knowledge of coding, programming, software, and so forth, the simple truth is that Chun’s point (that many people do not understand software sufficiently) may make many a reader feel somewhat taken aback. Most computer users—even many programmers and many whose research involves the study of technology and media—are quite complicit in the situation that Chun describes. It is the sort of discomforting confrontation that is valuable precisely because of the anxiety it provokes. Most users take for granted that the software will work the way they expect it to—hence the frustration bordering on fury that many people experience when the machine suddenly does something other than what is expected, provoking a maddened outburst of “why aren’t you working!” What Chun helps demonstrate is that it is not so much that the machines betray us, but that we were mistaken in thinking that machines ever really obeyed us.

It will be easy for many readers to see themselves as the user that Chun describes—as someone positioned to feel empowered by the devices they use, even as that power depends upon faith in forces the user cannot see, understand, or control. Even power users and programmers, on careful self-reflection, may identify with Chun’s relocation of the programmer from a position of authority to a role wherein they too must comply with the strictures of the code; this relocation presents an important argument for thinking about such labor. Furthermore, the way in which Chun links the power of the machine to the overarching ideology of neoliberalism makes her argument useful for discussions broader than those in media studies and the digital humanities. What makes these arguments particularly interesting is the way in which Chun locates them within thinking about software. As she writes towards the end of the second chapter, “this chapter is not a call to return to an age when one could see and comprehend the actions of our computers. Those days are long gone… Neither is this chapter an indictment of software or programming… It is, however, an argument against common-sense notions of software precisely because of their status as common sense” (92). Such a statement refuses to provide the anxious reader (who has come to see themselves as an uninformed user) with a clear answer, for it suggests that the “common-sense” clear answer is part of what has disempowered them.

The weaving of historical details regarding computers during World War II and eugenics provides an excellent and challenging backdrop against which Chun’s arguments regarding programmability can grow. Chun lucidly describes the embodiment and materiality of information and the obsolescence that serve as major challenges confronting those who seek to manage and understand the massive informational flux that computer technology has enabled. The idea of information as “undead” is both amusing and evocative, as it provides a rich way of describing the “there but not there” of information, while simultaneously playing upon the slight horror and uneasiness that seems to be lurking below the surface in the confrontation with information.

As Chun sets herself the difficult task of exploring many areas, there are some topics where the reader may be left wanting more. The section on eugenics presents a troubling and fascinating argument—one which could likely have been a book in and of itself—especially when considered in the context of arguments about cyborg selves and post-humanity, and it is a section that almost seems to have been cut short. Likewise the discussion of race (“a thread that has been largely invisible yet central,” 179), which is brought to the fore in the epilogue, confronts the reader with something that seems like it could in fact be the introduction for another book. It leaves the reader with much to contemplate—though it is the fact that this thread was not truly “largely invisible” that makes the reader, upon reaching the epilogue, wish that the book could have dealt with the matter at greater length. Yet these are fairly minor concerns—that Programmed Visions leaves its readers re-reading sections to process them in light of later points is a credit to the text.

Programmed Visions: Software and Memory is an alternately troubling, enlightening, and fascinating book. It allows its reader to look at software and hardware in a new way, with fresh insight about this act of sight. It is a book that plants a question (or perhaps subtly programs one into the reader’s mind): what are you not seeing, what power relations remain invisible, between the moment the “?” key is hit on the keyboard and the moment it appears on the screen?


    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian” Loeb writes at the blog librarianshipwreck. He has previously reviewed The People’s Platform by Astra Taylor and Social Media: A Critical Introduction by Christian Fuchs for boundary2.org.

    Back to the essay