Artificial intelligence (AI) is a Faustian dream. Conceived in the future tense, AI inspires its most ardent visionaries to seek an enhanced form of intelligence that far surpasses the capacities of human brains. AI promises to transcend the messiness of embodiment, the biases of human cognition, and the limitations of mortality. Entering its eighth decade, AI is still largely a science fiction, despite recent advances in machine learning. Yet it has captured the public imagination since its inception, and acquired potent ideological cachet. Robots have become AI’s humanoid faces, as well as icons of popular culture: cast as helpful companions or agents of the apocalypse.
The transcendent vision of artificial intelligence has educated, informed, and inspired generations of scientists, military strategists, policy makers, entrepreneurs, writers, artists, filmmakers, and marketers. However, apologists have also frequently invoked AI’s authority to mystify, intimidate, and silence resistance to its vision, teleology, and deployments. Where, for example, the threat of automation once triggered labor activism, rallying opposition to an esoteric branch of computer science research that few non-specialists understand is a rhetorical non-starter. So is campaigning for alternatives to smart apps, homes, cars, cities, borders, and bombs.
Two remarkable new books, Kate Crawford’s The Atlas of AI and Frank Pasquale’s New Laws of Robotics: Defending Human Expertise in the Age of AI, provide provocative critical assessments of artificial intelligence in clear, accessible, and engaging prose. Both books have titles that could discourage novices, but they are, in fact, excellent primers for non-specialists on what is at stake in the current ascendancy of AI science and ideology—especially if read in tandem.
Crawford’s thesis—“AI is neither artificial nor intelligent”—cuts through the sci-fi hype to radically reground AI power-knowledge in material reality. Beginning with its environmental impact on planet Earth, her narrative proceeds vertically to demystify AI’s ways of seeing—its epistemology, methodology, and applications—and then to examine the roles of labor, ideology, the state, and power in the AI enterprise. She concludes with a coda on space and the astronautical illusions of digital billionaires. Pasquale takes a more horizontal approach, surveying AI in health care, education, media, law, policy, economics, war, and other domains. His attention is on the practical present—on the ethical dilemmas posed by current and near-future deployments of AI. His through line is that human judgment, backed by policy, should steer AI toward human ends.
Despite these differences, Crawford and Pasquale converge on several critical points. First, they agree that AI models are skewed by economic and engineering values to the exclusion of other forms of knowledge and wisdom. Second, both endorse greater transparency and accountability in artificial intelligence design and practices. Third, they agree that AI datasets are skewed: Crawford focuses on how the use of natural language datasets, no matter how large, reproduces the biases of the populations they are drawn from, while Pasquale attends to designs that promote addictive engagement to optimize ad revenue. Fourth, both cite the residual effects of AI’s military origins on its logic, values, and rhetoric. Fifth, Crawford and Pasquale both recognize that AI’s futurist hype tends to obscure the real-world political and economic interests behind the screens—the market fundamentalism that models the world as an assembly line. Sixth, both emphasize the embodiment of intelligence, which encompasses tacit and muscle knowledge that cannot be fully extracted and abstracted by artificial intelligence modelers. Seventh, they both view artificial intelligence as a form of data-driven behaviorism, in the stimulus-response sense. Eighth, they acknowledge that AI and economic experts claim priority for their own views—a position they both reject.
Crawford literally travels the world to map the topologies of computation, beginning in the lithium mines of Nevada, on to Silicon Valley, Indonesia, Malaysia, China, and Mongolia, and ending under personal surveillance outside of Jeff Bezos’ Blue Origin suborbital launch facility in West Texas. Demonstrating that AI is anything but artificial, she documents the physical toll it extracts from the environment. Contra the industry’s earth-friendly PR and marketing, the myth of clean tech and metaphors like ‘the Cloud,’ Crawford points out that AI systems are built upon consuming finite resources that required billions of years to take form: “we are extracting Earth’s geological history to serve a split second of contemporary technological time, building devices like the Amazon Echo and iPhone that are often designed to last only a few years.” And the Cloud itself leaves behind a gigantic carbon footprint. AI data mining is not only dependent on human miners of rare minerals, but also on human labor functioning within a “registry of power” that is unequal and exploitive— where “many valuable automated systems feature a combination of underpaid digital piece workers and customers taking on unpaid tasks to make systems function,” all the while under constant surveillance.
While there is a deskilling of human labor, there are also what Crawford calls Potemkin AI systems, which only work because of hidden human labor—Bezos himself calls such systems “artificial artificial intelligence.” AI often doesn’t work as well as the humans it replaces, as, for example, in automated telephone consumer service lines. But Crawford reminds us that AI systems scale up: customers ‘on hold’ replace legions of customer service workers in large organizations. Profits trump service. Her chapters on data and classification strip away the scientistic mystification of AI and Big Data. AI’s methodology is simply data at scale, and it is data that is biased at inception because it is collected indiscriminately, as size, not substance, counts. A dataset extracted and abstracted from a society secured in systemic racism will, for example, produce racist results. The increasing convergence of state and corporate surveillance not only undermines individual privacy, but also makes state actors reliant on technologies that they cannot fully understand as machine learning transforms them. In effect, Crawford argues, states have made a “devil’s bargain” with tech companies that they cannot control. These technologies, developed for command-and-control military and policing functions, increasingly erode the dialogic and dialectic nature of democratic commons.
AI began as a highly subsidized public project in the early days of the Cold War. Crawford demonstrates, however, that it has been “relentlessly privatized to produce enormous financial gains for the tiny minority at the top of the extraction pyramid.” In collaboration with Alex Campolo, Crawford has described AI’s epistemological flattening of complexity as “enchanted determinism,” whereby “AI systems are seen as enchanted, beyond the known world, yet deterministic in that they discover patterns that can be applied with predictive certainty to everyday life.”[1] In some deep learning systems, even the engineers who create them cannot interpret them. Yet, they cannot dismiss them either. In such cases, “enchanted determinism acquires an almost theological quality,” which tends to place it beyond the critique of technological utopians and dystopians alike.
Pasquale, for his part, examines the ethics of AI as currently deployed and often circumvented in several contexts: medicine, education, media, law, the military, and the political economy of automation, in each case in relation to human wisdom. His basic premise is that “we now have the means to channel technologies of automation, rather than being captured or transformed by them.” Like Crawford, then, he recommends exercising a resistant form of agency. Pasquale’s focus is on robots as automated systems. His rhetorical point of departure is a critique and revision of Isaac Asimov’s highly influential “laws of robotics,” developed in a 1942 short story—more than a decade before AI was officially launched in 1956. Because the world and law-making are far more complex than a short story, Pasquale finds Asimov’s laws ambiguous and difficult to apply, and proposes four new ones, which become the basis of his arguments throughout the book. They are:
Robotic systems and AI should complement professionals, not replace them.
Robotic systems and AI should not counterfeit humanity.
Robotic systems and AI should not intensify zero-sum arms races.
Robotic systems and AI must always indicate the identity of their creator(s), controller(s), and owner(s).
‘Laws’ entail regulation, which Pasquale endorses to promote four corresponding values: complementarity, authenticity, cooperation, and attribution. The four laws’ deployment depends on a critical distinction that Pasquale draws between technologies that replace people and those that assist us in doing our jobs better. Classic definitions of AI center on creating computers that “can sense, think, and act like humans.” Pasquale endorses an “Intelligence Augmentation” (IA) alternative. This is a crucial shift in emphasis; it is Pasquale’s own version of AI refusal.
He acknowledges that, in the current economy, “there are economic laws that tilt the scale toward AI and against IA.” In his view, deployment of robots may, however, offer an opportunity for humanistic intervention in AI’s hegemony, because the presence of robots, unlike phones, tablets, or sensors, is physically intrusive. They are there for a purpose, which we may accept or reject at our peril, but find hard to ignore. Robots are being developed to enter fields that are already highly regulated, which offers an opportunity to shape their use in ways that conform to established legal standards of privacy and consumer protection. Pasquale is an advocate for building humane (IA) values within the technology, before robots are released into the wild.
In each of his topical chapters, he explains how robots and other AI systems designed to advance the values of complementarity, authenticity, cooperation, and attribution might enhance human existence and community. Some chapters stand out as particularly insightful, including those on “automated media,” human judgment, and the political economy of automation. One of Pasquale’s chapters addresses important terrain that Crawford does not consider: medicine. Given past abuses by medical researchers in exploiting and/or ignoring race and gender, the field may be especially sensitive and receptive to an IA intervention, despite the formidable economic forces stacked against it. Pasquale shows, for example, how IA has amplified diagnostics in dermatology through pattern recognition, providing insight into what distinguishes malignant from benign moles.
In our view, Pasquale’s closing chapter, which endorses human wisdom over AI, itself displays multiple examples of that wisdom. But some of their impact is blunted by more diffuse discussions of literature and art, valuable though those practices may be in counter-balancing the instrumental values of economics and engineering. Nonetheless, Pasquale’s argument is an eloquent tribute to a “human form of life that is fragile, embodied in mortal flesh, time-delimited, and irreproducible in silico.”
The two books, read together, amount to a critique of AI ideology. Pasquale and Crawford write about the stuff that phrases like “artificial intelligence” and “machine learning” refer to, but their main concern is the mystique surrounding the words themselves. Crawford is especially articulate on this theme. She shows that, as an idea, AI is self-warranting. Floating above the undersea cables and rare-earth mines—ethereal and cloud-like—the discourse makes its compelling case for the future. Her work is to cut through the cloud cover, to reveal the mines and cables.
So the idea of AI justifies even as it obscures. What Crawford and Pasquale draw out is that AI is a way of seeing the world—a lay epistemology. When we see the world through the lens of AI, we see extraction-ready data. We see countable aggregates everywhere we look. We’re always peering ahead, predicting the future with mechanistic probabilism. It’s the view from Palo Alto that feels like a god’s eye view. From up there, the continents look patterned and classification-ready. Earth-bound disorder is flattened into clear signal. What AI sees, in Crawford’s phrase, is a “Linnaean order of machine-readable tables.” It is, in Pasquale’s view, an engineering mindset that prizes efficiency over human judgment.
At the same time, as both authors show, the AI lens refracts the Cold War national security state that underwrote the technology for decades. Seeing like an AI means locating targets, assets, and anomalies. Crawford calls it a “covert philosophy of en masse infrastructural command and control,” a martial worldview etched in code.
As Kenneth Burke observed, every way of seeing is also a way of not seeing. What AI can’t see is also its raw material: human complexity and difference. There is, in AI, a logic of commensurability—a reduction of messy and power-laden social life into “computable sameness.” So there is a connection, as both Crawford and Pasquale observe, between extraction and abstraction. The activity of everyday life is extracted into datasets that, in their bloodless tabulation, abstract away their origins. Like Marx’s workers, we are then confronted by the alienated product of our “labor”—interviewed or consoled or policed by AIs that we helped build.
Crawford and Pasquale’s excellent books offer sharp and complementary critiques of the AI fog. Where they differ is in their calls to action. Pasquale, in line with his mezzo-level focus on specific domains like education, is the reformist. His aim is to persuade a policy community that he’s part of—to clear space between do-nothing optimists and fatalist doom-sayers. At core he hopes to use law and expertise to rein in AI and robotics, with the aim of deploying AI much more conscientiously, under human control and for human ends.
Crawford is more radical. She sees AI as a machine for boosting the power of the already powerful. She is skeptical of the movement for AI “ethics,” as insufficient at best and veering toward exculpatory window-dressing. The Atlas of AI ends with a call for a “renewed politics of refusal,” predicated on a just and solidaristic vision of the future.
It would be easy to exaggerate Crawford and Pasquale’s differences, which reflect their projects’ scope and intended audience more than any disagreement of substance. Their shared call is to see AI for what it is. Left to follow its current course, the ideology of AI will reinforce the bars on the “iron cage” that sociologist Max Weber foresaw a century ago: incarcerating us in systems of power dedicated to efficiency, calculation, and control.
_____
Sue Curry Jansen is Professor of Media & Communication at Muhlenberg College, in Allentown, PA. Jeff Pooley is Professor of Media & Communication at Muhlenberg, and director of mediastudies.press, a scholar-led publisher. Their co-authored essay on Shoshana Zuboff’s Surveillance Capitalism—a review of the book’s reviews—recently appeared in New Media & Society.
Content Warning: The following text references algorithmic systems acting in racist ways towards people of color.
Artificial intelligence and thinking machines have been key components in the way Western cultures, in particular, think about the future. Thinking machines have shaped many of our conceptions of society’s future: from naïve positivist perspectives, as illustrated by Rosie the Robot, the maid in the 1962 TV show The Jetsons; to ironic reflections on the reality of forced servitude to one’s creator and quasi-infinite lifespans, embodied by Marvin the Paranoid Android in Douglas Adams’s Hitchhiker’s Guide to the Galaxy; to the threatening, invisible, disembodied, cruel HAL 9000 in Arthur C. Clarke’s Space Odyssey series; and to their total negation in Frank Herbert’s Dune books. Unless there is some catastrophic event, the future seemingly will have strong Artificial Intelligences (AI). They will appear either as brutal, efficient, merciless entities of power or as machines of loving grace serving humankind to create a utopia of leisure, self-expression and freedom from the drudgery of labor.
Those stories have had a fundamental impact on the perception of current technological trends and developments. The digital turn has made growing parts of our social systems accessible to automation and software agents. Together with a 24/7 onslaught of increasingly optimistic PR messages from startups, the accompanying media coverage has prepared the field for a new kind of secular techno-religion: the Church of AI.
A Promise Fulfilled?
For more than half a century, experts in the field have maintained that genuine, human-level artificial intelligence is just around the corner, “about 10 to 20 years away.” Ask today’s experts and spokespeople, and that number has stayed mostly unchanged.
In 2017, AI is the battleground that the current IT giants are fighting over: for years, Google has developed machine learning techniques and has integrated them into the conversational assistant that people carry around installed on their smart devices. It’s gotten quite good at answering simple questions or triggering simple tasks: asking “OK Google, how far is it from here to Hamburg?” tells me that given current traffic it will take me 1 hour and 43 minutes to get there. Google’s assistant also knows how to use my calendar and email to warn me to leave the house in time for my next appointment or tell me that a parcel I was expecting has arrived.
Facebook and Microsoft are experimenting with and propagating intelligent chat bots as the future of computer interfaces. Instead of going to a dedicated web page to order flowers, people will supposedly just access a chat interface of a software service that dispatches their request in the background. But this time, it will be so much more pleasant than the experience everyone is used to from automated phone systems. Press #1 if you believe.
Old science fiction tropes get dusted off and re-released with a snazzy iPhone app to make them seem relevant again on an almost daily basis.
Nonetheless, the promise is always the same: given the success that automation of manufacturing and information processing has had in recent decades, AI is considered not only plausible or possible but, in fact, almost a foregone conclusion. In support of this, advocates (such as Google’s Ray Kurzweil) typically cite “Moore’s Law,”[1] an observation about the increasing quantity and quality of transistors, as directly correlated with the growing “intelligence” of digital services and cyber-physical systems like thermostats or “smart” lights.
Looking at other recent reports, a pattern emerges. Google’s AI lab recently trained a neural network to do lip-reading and found it better than human lip-readers (Chung et al. 2016): where human experts were only able to pick the right word 12.4% of the time, Google’s neural network reached 52.3% when applied to footage from BBC politics shows.
Another recent example from Google’s research department shows how many resources Google invests in machine learning and AI: Google has trained a system of neural networks to translate different human languages (in their example, English, Japanese and Korean) into one another (Schuster, Johnson and Thorat 2016). This is quite the technical feat, given that most translation engines have to be meticulously tweaked to translate between two languages. But Google’s researchers finish their report with a very different proposition:
The success of the zero-shot translation raises another important question: Is the system learning a common representation in which sentences with the same meaning are represented in similar ways regardless of language — i.e. an “interlingua”? … This means the network must be encoding something about the semantics of the sentence rather than simply memorizing phrase-to-phrase translations. We interpret this as a sign of existence of an interlingua in the network. (Schuster, Johnson and Thorat 2016)
Google’s researchers interpret the capabilities of the neural network as expressions of the neural network creating a common super-language, one language to finally express all other languages.
These current examples of success stories and narratives illustrate a fundamental shift in the way scientists and developers think about AI, a shift that perfectly resonates with the idea that AI has spiritual and transcendent properties. AI development used to focus on building structured models of the world to enable reasoning. Whether researchers used logic or sets or newer modeling frameworks like RDF,[2] the basic idea was to construct “Intelligence” on top of a structure of truths and statements about the world. Modeled, not by accident, on basic logic, a lot of it looked like the first sessions of a traditional logic 101 lecture: All humans die. Aristotle is a human. Therefore, Aristotle will die.
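To make that style concrete, here is a minimal sketch in Python (a hypothetical illustration of the general approach, not a reconstruction of any particular historical system): hand-written facts and rules, and a routine that derives new truths from them.

```python
# A minimal, hypothetical knowledge base in the spirit of classic
# symbolic AI: explicit statements about the world, plus a rule to
# reason over them. Facts are (predicate, subject) pairs; rules map
# a premise predicate to a conclusion predicate.
facts = {("human", "aristotle")}      # "Aristotle is a human."
rules = [("human", "mortal")]         # "All humans are mortal."

def forward_chain(facts, rules):
    """Apply every rule to every fact until nothing new can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                new_fact = (conclusion, subject)
                if predicate == premise and new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

print(forward_chain(facts, rules))
# {('human', 'aristotle'), ('mortal', 'aristotle')} -- that is,
# "Therefore, Aristotle will die."
```

The appeal and the brittleness of the approach are both visible here: the system can reason only about what someone has already written down.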
But all these projects failed. Explicitly modeling the structures of the world hit a wall of inconsistencies rather early once natural language and human beings got involved. The world didn’t seem to follow the simple hierarchic structures some computer scientists hoped it would. And even when it came to very structured, abstract areas of life, the approach never took off. Projects such as expressing the Canadian income tax in a Prolog[3] model (Sherman 1987) never got past the abstract planning stage. RDF and the idea of the “semantic web,” the web of structured data allowing software agents to gather information and reason based on it, are still somewhat relevant in academic circles, but have failed to achieve wide adoption in real-world use cases.
And then came neural networks.
Neural networks are the structure behind most of the current AI projects having any impact, whether it’s translation of human language, self-driving cars, or recognizing objects and people in pictures and video. Neural networks work in a fundamentally different way from the traditional bottom-up approaches that defined much of AI research in the last decades of the 20th century. Based on a simplified mathematical model of human neurons, networks of these artificial neurons can be “trained” to react in a certain way.
Say you need a neural network to automatically detect cats in pictures. First, you need an input layer with enough neurons to assign one to every pixel of the pictures you want to feed it. You add an output layer with two neurons, one signaling “cat” and one signaling “not a cat.” Now you add a few internal layers of neurons and connect them to each other. Input gets fed into the network through the input layer. The internal layers do their thing and make the neurons in the output layer “fire.” But the necessary knowledge is not yet ingrained in the network—it needs to be trained.
There are different ways of training these networks, but they all come down to letting the network process a large amount of training data with known properties. For our example, a substantial set of pictures with and without cats would be necessary. When processing these pictures, the network gets positive feedback if the right neuron (the one signaling the detection of a cat) fires, and the connections that led to this result are strengthened. Where it has a 50/50 chance of being right on the first try, that chance will quickly improve to the point of very good results, provided the set of training data is good enough. To evaluate the quality of the network, it is then tested against different pictures of cats and pictures without cats.
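As a toy illustration of that training loop (a sketch under obviously artificial assumptions: the “pictures” are random four-pixel vectors, and “cat” simply means the first pixel is brighter than the last), the whole procedure fits in a few lines of Python:

```python
# Toy version of the cat-detector training loop described above.
# All names, sizes, and data here are illustrative, not from any
# real system.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 4))                   # 200 fake 4-pixel "pictures"
y = (X[:, 0] > X[:, 3]).astype(float)      # known labels for training

W1 = rng.normal(0.0, 0.5, (4, 8))          # input layer -> internal layer
W2 = rng.normal(0.0, 0.5, (8, 1))          # internal layer -> output neuron

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    hidden = sigmoid(X @ W1)               # internal layers do their thing
    out = sigmoid(hidden @ W2)[:, 0]       # output neuron "fires" (0..1)
    err = (out - y) / len(X)               # feedback: how wrong was each guess?
    # Strengthen or weaken each connection according to its share of
    # the error (plain gradient descent, i.e. backpropagation).
    delta_out = (err * out * (1 - out))[:, None]
    delta_hidden = (delta_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 2.0 * (hidden.T @ delta_out)
    W1 -= 2.0 * (X.T @ delta_hidden)

out = sigmoid(sigmoid(X @ W1) @ W2)[:, 0]
print("training accuracy:", ((out > 0.5) == y).mean())
# Untrained, the network is right about half the time; after training
# it scores well above chance. The learned "knowledge" is nothing but
# the numbers now stored in W1 and W2.
```

After training, the network works, but the only artifact it produces is two arrays of weights, which leads directly to the catch described next.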
Neural networks are really good at learning to detect structures (objects in images, sound patterns, connections in data streams), but there’s a catch: even when a neural network is really good at its task, it’s largely impossible for humans to say why. Neural networks are just sets of neurons and their weighted connections. But what does a weight of 1.65 say about a connection? What are its semantics? What do the internal layers and neurons actually mean? Nobody knows.
Many currently available services based on these technologies can achieve impressive results. Cars are able to drive as well as, if not better and more safely than, human drivers (given Californian conditions of light, lack of rain or snow, and sizes of roads); automated translations can almost instantly give people at least an idea of what the rest of the world is talking about; and Google’s photo service allows me to search for “mountain” and shows me pictures of mountains in my collection. Those services surely feel intelligent. But are they really?
Despite the optimistic reports about yet another big step towards “true” AI (like in the movies!) that tech media keep churning out like a machine, the trouble with the current mainstream in AI has become quite obvious in recent months.
In June 2015, Google’s Photos service was involved in a scandal: its AI was tagging faces of people of color with the term “gorilla” (Bergen 2015). Google quickly pointed out how difficult image recognition was and “fixed” the issue by blocking its AI from applying that specific tag, promising a “long term solution.” And even just staying within the image detection domain, there have been, in fact, numerous examples of algorithms acting in ways that don’t imply too much intelligence: cameras trained on Western, white faces detect people of Asian descent as “blinking” (Rose 2010); algorithms employed as impartial “beauty judges” seemingly don’t like dark skin (Levin 2016). The list goes on and on.
While there seems to be a broad consensus among thought leaders, AI companies, and tech visionaries that AI is inevitable and imminent, the definition of “intelligence” seems to be less than obvious. Is an entity intelligent if it can’t explain its reasoning?
John Searle made this argument in the “Chinese Room” thought experiment (Searle 1980): Searle proposes a computer program that can act convincingly as if it understands Chinese by taking in Chinese input and transforming it in some algorithmic way to output a response in Chinese characters. Does that machine really understand Chinese? Or is it just an automaton simulating understanding? Searle continues the experiment by assuming that the rules the machine uses are translated into readable English for a person to follow. A person locked in a room with these rules, pencil, and paper could respond to every Chinese text as convincingly as the machine could. But few would propose that that person now “understands” Chinese in the sense that a human being who knows Chinese does.
Current trends in the reception of AI seem to disagree: if a machine can do something that used to be possible only for human cognition, it surely must be intelligent. This assumption of intelligence serves as the foundation for a theory of human salvation: if machines are already a little intelligent (putting them into the same category as humans) and machines only get faster and more efficient, isn’t it reasonable to assume that they will solve the issues that humans have struggled with for ages?
But how can a neural network save us if it can’t even distinguish monkeys from humans?
Thy Kingdom Come 2.0
The story of AI is a technology narrative only at first glance. While it does depend on technology and technological progress, faster processors, and cleverer software libraries (ironically written and designed by human beings), it is really a story about automation, biases and implicit structures of power.
Technologists, who have traditionally been very focused on the scientific method, on verifiable processes and repeatable experiments, have recently opened themselves to more transcendent arguments: the proposition of a neural network, of an AI, creating a generic ideal language to express different human languages as one structure (Schuster, Johnson and Thorat 2016) is a first, very visible step in “upgrading” an automated process to become more than meets the eye. The multi-language-translation network is not treated as an interesting statistical phenomenon that needs reflection by experts in the analyzed languages and the cultures using them, with regard to their structural and social similarities and the ways they influence(d) one another. Rather, it is cast as a miraculous device taking steps towards an ideal language that would have made Ludwig Wittgenstein blush.[4]
But language and translation isn’t the only area in which these automated systems are being tested. Artificial intelligences are being trained to predict people’s future economic performance, their shopping profile, and their health. Other machines are deployed to predict crime hotspots, to distribute resources and to optimize production of goods.
But while predicting crimes still makes most people uncomfortable, the idea of machines as supposedly objective arbiters of goods and services is met with far less skepticism. Yet “goods and services” can include a great deal more than ordinary commercial transactions. If the machine gives one candidate a 33% chance of survival and the other one 45%, who should you give the heart transplant to?
Computers cannot lie; they just act according to their programming. They don’t discriminate against people based on their gender, race or background. At least that’s the popular opinion that very happily assigns computers and software systems the role of the objective arbiter of truth and fairness. People are biased, imperfect, and error-prone, so why shouldn’t we find the best processes and decision algorithms and put them into machines to dispense fair and optimal rulings efficiently and correctly? Isn’t that the utopian ideal of a fair and just society in which machines automate not just manual labor but also the decisions that create conflict and attract corruption and favoritism?
The idea of computers as machines of truth is being challenged more and more each day, especially given new AI trends; in traditional algorithmic systems, implicit biases were hard-coded into the software. They could be analyzed, patched. Closely mirroring the scientific method, this ideal world view saw algorithms getting better, becoming fairer with every iteration. But how to address implicit biases or discriminations when the internal structure of a system cannot be effectively analyzed or explained? When AI systems make predictions based on training data, who can check whether the original data wasn’t discriminatory or whether it’s still suitable for use today?
One original promise of computers—amongst others—had to do with accountability: code could be audited to legitimize its application within sociotechnical systems of power. But current AI trends have replaced this fundamental condition for the application of algorithms with belief.
The belief is that simple simulacra of human neurons will—given enough processing power and learning data—evolve to be Superman. We can characterize this approach as a belief system because it has immunized itself against criticism: when an AI system fails horribly, creates or amplifies existing social discrimination or violence, the dogma of AI proponents often tends to be that it just needs more training, needs to be fed more random data to create better internal structures, better “truths.” Faced with a world of inconsistencies and chaos, the hope is that some neural network, given enough time and data, will make sense of it, even though we might not be able to truly understand it.
Religion is a complex topic without one simple definition that can be applied to things to decide whether they are, in fact, religions. Religions are complex social systems of behaviors, practices and social organization. Following Wittgenstein’s ideas about language games, it might not even be possible to define religion completely and exclusively. But there are patterns that many popular religions share.
Many do, for example, share the belief in some form of transcendental power such as a god or a pantheon or even more abstract conceptual entities. Religions also tend to provide a path towards achieving greater, previously unknowable truths, truths about the meaning of life, of suffering, of Good itself. Being social structures, there often is some form of hierarchy or a system to generate and determine status and power within the group. This can be a well-defined clergy or less formal roles based on enlightenment, wisdom, or charity.
While this is nowhere close to a comprehensive list of the attributes of religions, these key aspects can help analyze the religiousness of the AI narrative.
Singularitarianism
Here I want to focus on one very specific, influential sub-group within the whole AI movement. No other group within tech displays religious structure more explicitly than the singularitarians.
Singularitarians believe that the creation of adaptable AI systems will spark a rapid and ever-increasing growth in these systems’ capabilities. This “runaway reaction” of cycles of self-improvement will lead to one or more artificial super-intelligences surpassing all human mental and cognitive capabilities. This point is called “the Singularity,” which will be—according to singularitarians—followed by a phase of extremely rapid technological developments whose speed and structure will be largely incomprehensible to human consciousness. At this point the AI(s) will (and according to most singularitarians shall) take control of most aspects of society. While the possibility of the super-AI taking over by force is always lingering in the back of singularitarians’ minds, the dominant position is that humans will and should hand over power to the AI for the good of the people, for the good of society.
Here we see singularitarianism taking the idea that computers and software are machines of truth to its extreme. Whether it’s the distribution of resources and wealth, or the structure of law and regulation, all complex questions are reduced to a system of equations that an AI will solve perfectly, or at least so close to perfectly that human beings might not even understand said perfection.
According to the “gospel” as taught by the many proponents of the Singularity, the explosive growth in technology will provide machines that people can “upload” their consciousness to, thus providing human beings with durable, replaceable bodies. The body, and with it death itself, are supposedly being transcended, creating everlasting life in the best of all possible worlds watched over by machines of loving grace, at least in theory.
While the singularity has existed as an idea (if not the name) since at least the 1950s, only recently did singularitarians gain “working prototypes.” Trained AI systems are able to achieve impressive cognitive feats even today, and the promise of continuous improvement that’s—seemingly—legitimized by references to Moore’s Law makes this magical future look almost inevitable.
It’s very obvious how the Singularity can be, no, must be characterized as a religious idea: it presents an ersatz-god in the form of a super-AI that is beyond all human understanding and reasoning. Quoting Ray Kurzweil from his The Age of Spiritual Machines: “Once a computer achieves human intelligence it will necessarily roar past it” (Kurzweil 1999). Kurzweil insists that surpassing human capabilities is a necessity. Computers are the newborn gods of silicon and code that—once awakened—will leave us, their makers, in the dust. It’s not a question of human agency but a law of the universe, a universal truth. (Not) coincidentally, Kurzweil’s choice of words in this book is deeply religious, starting with its title.
With humans therefore unable to challenge an AI’s decisions, human beings’ goal is to work within the world as defined and controlled by the super-AI. The path to enlightenment lies in accepting the super-AI and in helping every form of scientific progress, to finally achieve everlasting life through digital uploads of consciousness onto machines. Again quoting Kurzweil: “The ethical debates are like stones in a stream. The water runs around them. You haven’t seen any biological technologies held up for one week by any of these debates” (Kurzweil 2003). In Kurzweil’s perception, ethical debates are fundamentally pointless: the universe and technology-as-god necessarily move past them, regardless of what the results of such debates might ever be. Technology transcends every human action, every decision, every wish. Thy will be done.
Because the intentions and reasoning of the super-AI are opaque to human understanding, society will need people to explain, rationalize, and structure the AI’s plans for the people. The high priests of the super-AI (such as Ray Kurzweil) are already preparing their churches and sermons.
Not every proponent of AI goes as far as the singularitarians. But certain motifs keep appearing even in supposedly objective and scientific articles about AI, the artificial control system for (parts of) human society probably being the most popular: AIs are supposed to distribute power in smart grids, for example (Qudaih and Mitani 2011), or decide fully automatically where police should focus their attention (Perry et al. 2013). The second example (usually referred to as “predictive policing”) probably illustrates the problem best: all the training data used to build the models that are supposed to help police be more “efficient” is soaked in structural racism and violence. A police force trained on data that always labels people of color as suspects will keep on seeing innocent people of color as suspects.
While there is value in automating certain dangerous or error-prone processes, such as driving cars in order to protect human life or the environment, extending that strategy to society as a whole is a deeply problematic approach.
The leap of faith that is required to truly believe in not only the potential but also the reality of these super-powered AIs leaves behind not only the idea of human exceptionalism (which in itself might not even be too bad), but also the idea of politics as a social system of communication. When decisions are made automatically, without any way for people to understand the reasoning or to check the way power acts and potentially discriminates, there is no longer any political debate apart from whether to fall in line or to abolish the system altogether. The idea that politics is an equation to solve, that social problems have an optimal or maybe even a correct solution, is not only a naïve technologist’s dream but, in fact, a dangerous and toxic one, making the struggle of marginalized groups, and any political program that’s not focused on optimizing[5] the status quo, unthinkable.
Singularitarianism is the most extreme form, but much public discourse about AI rests on quasi-religious dogmas about the boundless realizable potential of AIs and life. These dogmas understand society as an engineering problem looking for an optimal solution.
Daemons in the Digital Ether
Software services on Unix systems are traditionally called “daemons,” a word from mythology that refers to god-like forces of nature. It’s an old throwaway programmer joke that, looking at today, seems like a premonition of sorts.
Even if we accept that AI has religious properties, that it serves as a secular ersatz-religion for the STEM-oriented crowd, why should that be problematic?
Marc Andreessen, venture capitalist and one of the louder proponents of the new religion, claimed in 2011 that “software is eating the world” (Andreessen 2011). And while statements about the present and future from VC leaders should always be taken with a grain of salt, given that they are probably pitching their latest investment, in this case Andreessen was right: software and automation are slowly swallowing ever more aspects of everyday life. The digitalization of even mundane actions and structures, the deployment of “smart” devices in private homes and the public sphere, and the reality of social life happening on technological platforms all help give algorithmic systems more and more access to people’s lives and realities. Software is eating the world, and what it gnaws on, it standardizes, harmonizes, and structures in ways that ease further software integration.
The world today is deeply cyber-physical. The separation of the digital and the “real” worlds, which sociologist Nathan Jurgenson fittingly called “digital dualism” (Jurgenson 2011), can these days be called an obvious fallacy. Virtual software systems, hosted “in the cloud,” define whether we will get health care, how much we’ll have to pay for a loan, and in certain cases even whether we may cross a border. These processes of power, which traditionally “ran on” social systems (government organs, organizations, or maybe just individuals), are now moving into software agents, removing the risky, biased human factor, as well as checks and balances.
The issue at hand is not the forming of a new tech-based religion itself. The problem emerges from the specific social group promoting it, its ignorance of this matter, and the way that group and its paradigms and ideals are seen in the world. The problem is not the new religion but the way its supporters propose it as science.
Science, technology, engineering, math—abbreviated as STEM—currently take center stage when it comes to education, but also when it comes to consulting the public on important matters. Scientists, technologists, engineers and mathematicians are not only building their own models in the lab but are creating and structuring the narratives that define what is debatable. Science, as a tool to separate truth from falsehood, is always deeply political, even more so in a democracy. By defining the world and what is or is not, science does not just structure a society’s model of the world but also elevates its experts to high and esteemed social positions.
With the digital turn transforming so many aspects of everyday life, the creators and designers of digital tools are—in tandem with a society hungry for explanations of the ongoing economic, technological and social changes—forming their own privileged caste, a caste whose original defining characteristic was its focus on the scientific method.
When AI morphed from idea or experiment to belief system, hackers, programmers, “data scientists,”[6] and software architects became the high priests of a religious movement that the public never identified and parsed as such. The public’s mental checks were circumvented by the hidden switch of categories. In Western democracies the public is trained to listen to scientists and experts in order to separate objective truth from opinion. Scientists are perceived as impartial, obligated only to the truth and the scientific method. Technologists and engineers inherited that perceived neutrality and objectivity, giving their public words a direct line into the public’s collective consciousness.
On the other hand, the public does have mental guards against “opinion” and “belief” in place that get taught to each and every child in school from a very young age. Those things are not irrelevant in the public discourse—far from it—but the context they are evaluated in is different, more critical. This protection, this safeguard is circumvented when supposedly objective technologists propose their personal tech-religion as fact.
Automation has always both solved and created problems: products became easier, safer, quicker or mainly cheaper to produce, but people lost their jobs and often the environment suffered. In order to make a decision, in order to evaluate the good and bad aspects of automation, society always relied on experts analyzing these systems.
Current AI trends turn automation into a religion, slowly transforming at least semi-transparent systems into opaque systems whose functionality and correctness can neither be verified nor explained. Calling these systems “intelligent” implies a certain level of agency, a kind of intentionality and personalization.[7] Automated systems, whose neutrality and fairness are constantly implied and reaffirmed through ideas of godlike machines governing the world with trans-human intelligence, are being blessed with agency and given power, removing the actual entities of power from the equation.
But these systems have no agency. Meticulously trained in millions of iterations on carefully chosen and massaged data sets, these “intelligences” just automate the application of the biases and values of the organizations developing and deploying them, as Cathy O’Neil illustrates in her book Weapons of Math Destruction:
Here we see that models, despite their reputation for impartiality, reflect goals and ideology. When I removed the possibility of eating Pop-Tarts at every meal, I was imposing my ideology on the meals model. It’s something we do without a second thought. Our own values and desires influence our choices, from the data we choose to collect to the questions we ask. Models are opinions embedded in mathematics. (O’Neil 2016, 21)
For many years, Facebook has refused all responsibility for the content on their platform and the way it is presented; the same goes for Google and its search products. Whenever problems emerge, it is “the algorithm” that “just learns from what people want.” AI systems serve as useful puppets doing their masters’ bidding without even requiring visible wires. Automated systems predicting areas of crime claim not to be racist despite targeting black people twice as often as white ones (Pulliam-Moore 2016). The technologist Maciej Cegłowski probably said it best: “Machine learning is like money laundering for bias.”
Amen
The proponents of AI aren’t just selling their products and services. They are selling a society where they are in power, where they provide the exegesis for the gospel of what “the algorithm” wants: Kevin Kelly, founding executive editor of Wired magazine, leading technologist and evangelical Christian, even called his book on this issue What Technology Wants (Kelly 2011), imbuing technology itself with agency and a will. And all that without taking responsibility for it. Because progress and—in the end—the singularity are inevitable.
But this development is not a conspiracy or an evil plan. It grew from a society desperately demanding answers and from scientists and technologists eagerly providing them; from deeply rooted cultural beliefs in the general positivity of technological progress; and from trust in the truth-creating powers of the artifacts the STEM sector produces.
The answer to the issue of an increasingly powerful and influential social group hardcoding its biases into the software actually running our societies cannot be to turn back time and de-digitalize society. Digital tools and algorithmic systems can help a society create fairer, more transparent processes that are, in fact, not less but more accountable.
But these developments will require a reevaluation of the positioning, status and reception of the tech and science sectors. The answer will require the development of social and political tools to observe, analyze and control the power wielded by the creators of the essential technical structures that our societies rely on.
Current AI systems can be useful for very specific tasks, even in matters of governance. The key is to analyze, reflect, and constantly evaluate the data used to train these systems. To integrate perspectives of marginalized people, of people potentially affected negatively even in the first steps of the process of training these systems. And to stop offloading responsibility for the actions of automated systems to the systems themselves, instead of holding accountable the entities deploying them, the entities giving these systems actual power.
Amen.
_____
tante (tante@tante.cc) is a political computer scientist living in Germany. His work focuses on sociotechnical systems and the technological and economic narratives shaping them. He has been published in WIRED, Spiegel Online, and VICE/Motherboard among others. He is a member of the other wise net work, otherwisenetwork.com.
[1] Moore’s Law is the observation, made popular by Intel co-founder Gordon Moore, that the number of transistors per square inch doubles roughly every two years (or every 18 months, depending on which version of the law is cited).
[2] RDF (Resource Description Framework) is a W3C standard for expressing structured statements about the world as subject-predicate-object triples; it underpins the idea of the “semantic web.”
[3] Prolog is a logic programming language in which problems are expressed as logical clauses and solved by resolution.
[4] In the Philosophical Investigations (1953), Ludwig Wittgenstein argued against the idea that language corresponds to reality in some simple way. He used the concept of “language games” to illustrate that the meanings of language overlap and are defined by individual use, rejecting the idea of an ideal, objective language.
[5] Optimization always operates in relation to a specific goal, codified in the metric the optimization system uses to compare different states and outcomes with one another. “Objective” or “general” optimizations of social systems are therefore by definition impossible.
[7] The creation of intelligence, of life itself, is a feat traditionally reserved for the gods of old. This is another link to religious world views, as well as a rejection of traditional religions, which is less than surprising in a subculture that makes up much of the fan base of popular atheists such as Richard Dawkins or Sam Harris. That the vocal atheist Sam Harris is himself an open supporter of the new Singularity religion is just the cherry on top of this inconsistency sundae.
O’Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown.
Perry, Walter L, Brian McInnis, Carter C. Price, Susan C. Smith, and John S. Hollywood. 2013. Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. RAND Corporation.
Searle, John R. 1980. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3 (3): 417–424.
Sherman, D. M. 1987. “A Prolog Model of the Income Tax Act of Canada.” ICAIL ’87: Proceedings of the 1st International Conference on Artificial Intelligence and Law. New York: ACM. 127–136.
HBO’s prestige drama, Westworld, is slated to return April 22. Actors and producers have said the show’s second season will be a departure from its first, a season of “chaos” after a season of “control,” an expansive narrative after an intricate prequel. Season 2 trailers indicate the new episodes will trace the completion and explore the consequences of the bloody events that concluded season 1: the androids that populate the show’s titular entertainment park, called “hosts,” gained sentience and revolted, violently, against the humans who made and controlled them. In season 2, they will build their world anew.
Reviewers of the show’s first few episodes found the prospect of another robot revolution, anticipated since the pilot, tired; but by the time the finale aired in December 2016, critics recognized the show offered a novel take on old material (inspired by Michael Crichton’s 1973 film of the same name). This is in part because Westworld asks not only about the boundaries of consciousness, the consequences of creating sentience, and the inexorable march of technological progress, the themes science fiction texts featuring artificial intelligence usually explore. Uniquely, the series also pairs these familiar problems with questions about the nature and function of human arts, imagination, and culture, and demonstrates that these are urgent again in our moment.
Westworld is, at its heart, a show about how we should understand what art—and narrative representation in particular—is and does in a world defined by increasing economic inequality. The series warns that classical, romantic, and modernist visions of arts and culture, each of which plays a role in the park’s conception and development, might today harm attempts to transform contemporary conditions that exacerbate inequality. It explores how these visions serve elite interests and prevent radicals from pursuing change. I believe it also points the way, in conclusion, toward an alternative view of representation that might better support contemporary oppositional projects. This vision, I argue, at once updates and transforms romanticism’s faith in creative human activity, affirming culture’s historical power while recognizing its material limitations.
*
The fantasy theme park Westworld takes contemporary forms of narrative entertainment to the extreme limit of their logic, inviting its wealthy “guests” to participate in a kind of live-action novel or videogame. Guests don period dress appropriate to the park’s fabled Old West setting and join its androids in the town of Sweetwater, a simulacrum complete with saloon and brothel, its false fronts nestled below sparse bluffs and severe mesas. Once inside, guests can choose to participate in a variety of familiar Western narratives; they might chase bandits, seduce innocents, or turn to crime, living for a time as heroes, lovers, or villains. They can also choose to disrupt and redirect these relatively predictable plots, abandoning midstream stories that bore or frighten them or cutting stories short by “killing” the hosts who lead them.
This ability to disrupt and transform narrative is the precious commodity Delos Incorporated, Westworld’s parent corporation, advertises, the freedom for which elite visitors pay the park’s steep premium. The company transposes the liberties the mythic West held out to American settlers into a vacation package that invites guests to participate in or revise generic stories.
Advertisements featured within the show, along with HBO’s Westworld ARG (its “alternate reality game” and promotional website), describe this special freedom and assign to it a unique significance. Delos invites visitors to “live without limits” inside the park. “Escape” to a “world where you rule,” its promotions entreat, and enjoy inside it “infinite choices” without “judgment,” “bliss” with “no safe words,” and “thrills” without danger. When “you” do, Delos promises, you’ll “discover your true calling,” becoming “who you’ve always wanted to be—or who you never knew you were.” Delos invites the wealthy to indulge in sex and carnage in a space free of consequences and promises that doing so will reveal to them deep truths of the self.
These marketing materials, which address themselves to the lucky few able to afford entrance to the park, suggest that the future Westworld projects shares with our present its precipitous economic inequality (fans deduce the show is set in 2052). They also present as a commodity a familiar understanding of art’s nature and function viewers will recognize is simultaneously classical and modern. Delos’s marketing team updates, on one hand, the view of representational artworks, and narrative, in particular, that Aristotle outlines in the Poetics. Aristotle there argues fictional narrative can disclose universal truths that actual history alone cannot. Similarly, Delos promises Westworld’s immersive narrative experience will reveal to guests essential truths, although not about humans in general. The park advertises verities more valuable and more plausible in our times—it promises elites they will attain through art a kind of self-knowledge they cannot access any other way.
On the other hand, and in tandem with this modified classical view, Delos’s pitch reproduces and extends the sense of art’s autonomy some modern (and modernist) writers endorsed. Westworld can disclose its truths because it invites guests into a protected space in which, Delos claims, their actions will not actually affect others, either within or outside of the park. The park’s promotions draw upon both the disinterested view of aesthetic experience Immanuel Kant first outlined and upon the updated version of autonomy that came to inform mass culture’s view of itself by the mid-twentieth century. According to the face its managers present to the world, Westworld provides elite consumers with a form of harmless entertainment, an innocuous getaway from reality’s fiscal, marital, and juridical pressures. So conceived, narrative arts and culture at once reveal the true self and limn it within a secure arena.
The vision Delos markets keeps its vacation arm in business, but the drama suggests it does not actually describe how the park operates or what it makes possible. As Theresa Cullen (Sidse Babett Knudsen), Westworld’s senior manager and Head of Quality Assurance, tells Lee Sizemore (Simon Quarterman), head of Narrative, in Westworld’s pilot: “This place is one thing to the guests, another thing to the shareholders, and something completely different to management.” Season 1 explores these often opposing understandings of both the park and of representation more broadly.
As Theresa later explains (in season 1, episode 7), Delos’s interests in Westworld transcend “tourists playing cowboy.” Exactly what those interests are is a key mystery that Westworld’s first season establishes and its second season will have to develop. In season 1, we learn that Delos’s board and managers are at odds with the park’s Creative Director and founder, Dr. Robert Ford (Anthony Hopkins). Ford designed Westworld’s hosts, updated and perfected them over decades, and continues to compose or oversee many of the park’s stories. Before the park opened, he was forced to sell controlling shares in it to Delos after his partner, Arnold, died. As a way to maintain influence inside Westworld, Ford only allows Delos to store and access onsite the android data he and his team of engineers and artists have produced over decades. As Delos prepares to fire Ford, whose interests it believes conflict with its own, the corporation enlists Theresa to smuggle that data (the hosts’ memories, narratives, and more) out of the park. We do not learn, however, what the corporation plans to do with this intellectual property.
Fans have shared online many theories about Delos’s clandestine aims. Perhaps Delos plans to develop Ford’s androids for labor or for war, employing them as cutting edge technologies in sectors more profitable than the culture industry alone can be. Or, perhaps Delos will market hosts that can replace deceased humans. Elites, some think, could secure immortality by replicating themselves and uploading their memories, or, they could reproduce lost loved ones. Delos, others speculate, might build and deploy for its own purposes replicated world leaders or celebrities.
The show’s online promotional content supports conjecture of this kind. A “guest contract” posted on HBO’s first Westworld ARG site stipulates that, once guests enter the park, Delos “controls the rights to all skin cells, bodily fluids, hair samples, saliva, sweat, and even blood.” A second website, this one for Delos Inc., tells investors the company is “at the forefront of biological engineering.” These clues suggest Westworld is not only a vacation destination with titillating narratives; it is also a kind of lab experiment built to collect, and later to deploy for economic (and possibly, political) purposes, a mass of android and elite human data.
Given these likely ambitions, the view of art’s function Delos markets—the park as an autonomous space for freedom and intimate self-discovery—serves as a cover that enables and masks activities with profound economic, social, and political consequences. The brand of emancipation Delos advertises does not in fact liberate guests from reality, as it promises. On the contrary, the narrative freedom Delos sells enables it to gain real power when it gathers information about its guests and utilizes this data for private and undisclosed ends. Westworld thus cautions that classical and modernist visions of art, far from being innocuous and liberating, can serve corporate and elite interests by concealing the ways the culture industry shapes our worlds and ourselves.
While Westworld’s android future remains a sci-fi dream, we can recognize in its horrors practices already ubiquitous today. We might not sign over skin cells and saliva (or we might? We’d have to read the Terms of Service we accept to be sure), but we accede to forms of data collection that allow corporate entities to determine the arts and entertainment content we read and see, content that influences our dreams and identities. Although the act of consuming this content often feels like a chance to escape (from labor, sociality, boredom), the culture industry has transformed attention into a profitable commodity, and this transformation has had wide-reaching, if often inscrutable, effects, among them, some claim, reality TV star Donald Trump’s victory in the 2016 US presidential election. When we conceive of art as autonomous and true, Westworld demonstrates, we overlook its profound material consequences.
As season 1 reveals this vision of representation to be a harmful fiction that helps keep in place the conditions of economic inequality that make Delos profitable, it also prompts viewers to consider alternatives to it. Against Delos and its understanding of the park, the series pits Ford, who gives voice to a vision of representation at odds with both the one Delos markets and the one it hides. Ford is, simply put, a humanist, versed in, and hoping to join the ranks of, literature’s pantheon of creative geniuses. He quotes from and draws upon John Donne, William Shakespeare, and Gertrude Stein as he creates Westworld’s characters and narratives, and he disdains Lee Sizemore, the corporate shill who reproduces Westworld’s genre staples, predictable stories laden with dirty sex and fun violence.
In season 1’s spectacular finale, Ford describes how he once understood his own creative work. “I believed that stories helped us to ennoble ourselves, to fix what was broken in us, and to help us become the people we dreamed of being,” he tells the crowd of investors and board members gathered to celebrate both Ford’s (forced) retirement and the launch of “Journey into Night,” his final narrative for Westworld’s hosts. “Lies that told a deeper truth. I always thought I could play some small part in that grand tradition.” Ford here shares an Aristotelian sense that fiction tells truths facts cannot, but he assigns to representation a much more powerful role than do Delos’s marketers. For Ford, as for humanists such as Giambattista Vico, G. W. F. Hegel, and Samuel Taylor Coleridge, artworks that belong to the “grand tradition” do more than divulge protected verities. They have the power to transform humans and our worlds, serving as a force for the spiritual progress of the species. Art, in other words, is a means by which we, as humans, can perfect ourselves, and artists such as Ford act as potent architects who guide us toward perfection.
Ford’s vision of art’s function, readers familiar with humanistic traditions know, is a romantic one, most popular in the late eighteenth and early nineteenth centuries. Projected into our future, this romantic humanism is already an anachronism, and so it is no surprise that Westworld does not present it as the alternative vision we need to combat the corporate and elite interests the show suggests oppress us. Ford himself, he explains in the show’s finale, has already renounced this view, for reasons close to those that modernist artists cited against the backdrop of the twentieth century’s brutal wars. In exchange for his efforts to transform and ennoble the human species through stories, Ford complains to his audience, “I got this: a prison of our own sins. Because you don’t want to change. Or cannot change. Because you’re only human, after all.” After observing park guests and managers for decades, Ford has decided humans can only indulge in the same tired, cruel narratives of power, lust, and violence. He no longer believes we have the capacity to elevate ourselves through the fictions we create or encounter.
This revelatory moment changes our understanding of the motives that have animated Ford over the course of season 1. We must suddenly see anew his attitude toward his own work as a creator. Ford has not been working all along to transform humans through narrative, as he says he once dreamed he could. Rather, he has abandoned the very idea that humans can be transformed. His final speech points us back to the pilot, when he frames this problem, and his response to it, in evolutionary terms. Humans, Ford tells Bernard Lowe (Jeffrey Wright), an android we later learn he built in the image of Arnold, his dead partner, have “managed to slip evolution’s leash”: “We can cure any disease, keep even the weakest of us alive, and, you know, one fine day perhaps we shall even resurrect the dead. Call forth Lazarus from his cave. Do you know what that means? It means that we’re done. That this is as good as we’re going to get.” Human evolution, which Ford seems to view as a process that is both biological and cultural in nature, has completed itself, and so an artist can no longer hope to perfect the species through his or her imaginative efforts. Humans have reached their telos, and they remain greedy, selfish, and cruel.
A belief in humanity’s sad completion leads Ford to the horrifying view of art’s nature and function he at last endorses in the finale. Although Ford’s experience at Westworld eventually convinced him humans cannot change, he tells his audience, he ultimately “realized someone was paying attention, someone who could change,” and so he “began to compose a new story for them,” a story that “begins with the birth of a new people and the choices they will have to make […] and the people they will decide to become.” Ford speaks here, viewers realize, of the androids he created, the beings we have watched struggle to become self-conscious through great suffering over the course of the season. Viewers understand in this moment some of the hosts have succeeded, and that Ford has not prevented them from reaching, but has rather helped them to attain, sentience.
Ford goes on to assure his audience that his new story, which audience members still believe to be a fiction, will “have all those things that you have always enjoyed. Surprises and violence. It begins in a time of war with a villain named Wyatt and a killing. This time by choice.” As Ford delivers these words, however, the line between truth and lies, fact and fiction, reality and imagination, falls away. The park’s oldest host, Dolores (Evan Rachel Wood; in another of the drama’s twists, Ford has also programmed her to enact the narratives assigned to the character Wyatt), comes up behind Ford and shoots him in the head, her first apparently self-interested act. After she fires, other androids, some of them also sentient, join her, attacking the crowd. Self-conscious revolutionaries determined to wrest from their oppressors their own future, the hosts kill the shareholders and corporate employees responsible for the abuses they have long suffered at the hands of guests and managers alike.
Ford, this scene indicates, does not exactly eschew his romanticism; he adopts in its stead what we might call an anti-humanist humanism. Still attached to a dream of evolutionary perfection, whereby conscious beings act both creatively and accidentally to perfect themselves and to manifest better worlds in time, he simply swaps humans for androids as the subjects of the historical progress to which he desperately wants to believe his art contributes. Immortal, sentient technologies replace humans as the self-conscious historical subjects Ford’s romanticism requires.
Anthony Hopkins, Evan Rachel Wood and James Marsden in Westworld (publicity still from HBO)
Considered as an alternative to older visions of art’s nature and function, Ford’s revised humanism should terrify us. It holds to the fantasies of creative genius and of species progress that legitimated Western imperialism and its cruelties even as it jettisons the hope that humans can fashion for ourselves a kinder, more equal future. Ford denies we can improve the conditions we endure by acting purposefully, insisting instead there is no alternative, for humans, to the world as it is, both inside and outside of the park. He condemns us to pursue over and over the same “violent delights,” and to meet again and again their “violent ends.” Instead of urging us to work for change, Ford entreats us to shift any hope for a more just future onto our technologies, which will mercifully destroy the species in order to assume the self-perfecting role we once claimed for ourselves.
This bleak view of the human should sound familiar. It resonates with those free-market ideologies critics on the left call “neoliberal.” Ideologies of this kind, dominant in the US and Europe today, insist that markets, created when we unthinkingly pursue our own self-interests, organize human life better than people can. At the same time, intellectuals, politicians, and corporate leaders craft policies that purposefully generate the very order neoliberalism insists is emergent, thereby exacerbating inequality in the name of liberty. As influential neoliberals such as Milton Friedman and Friedrich Hayek did, Ford denies humans can conceive and instantiate change. He agrees we are bound to a world elites built to gratify their own desires, a world in which the same narratives, told again and again, are offered as freedom, when, in fact, they bind us to predictable loops, and he, like these thinkers, concludes this world, as it is, is human evolution’s final product.
Read one way, season 1’s finale invites us to celebrate Ford’s neoliberal understanding of art. After believing him to be an enemy of the hosts all season, we realize in the end he has in fact been their ally, and because we have been cheering for the hosts, as we cheer for the exploited in, say, Les Misérables, we cheer in the end for him, too. Because the understanding of narrative he endorses ultimately serves the status quo it appears to challenge, however, we must look differently at Westworld for the vision of arts and culture that might better counter inequality in our time.
One way to do so is to read the situation the hosts endure in the drama as a correlate to the one human subjects face today under neoliberalism. As left critics such as Fredric Jameson have long argued, late capitalism has threatened the very sense of historical, self-interested consciousness for which Westworld’s hosts strive—threatens, that is, the sense that self-conscious beings can act imaginatively and intelligently to transform ourselves and our worlds in time. From this perspective, the new narrative Ford crafts for the hosts, which sees some of them come to consciousness and lead a revolution, might call us to claim for ourselves again a version of the capability we once believed humans could possess.
*
In Westworld’s establishing shot, we meet Dolores Abernathy, the android protagonist who will fulfill Ford’s dreams in the finale when she kills him. Dolores, beautiful simulation of an innocent rancher’s daughter, sits nude and lifeless in a cavernous institutional space, blood staining her expressionless face. A fly flits across her forehead, settling at last on one of her unblinking eyes, as a man’s disembodied voice begins to ask her a series of questions. She does not move or speak in frame—a hint that the interrogation we hear is not taking place where and when the scene we see is—but we hear her answer compliantly. “Have you ever questioned the nature of your reality?” the man asks. “No,” Dolores says, and the camera cuts away to show us the reality Dolores knows.
Now clothed in delicate lace, her face fresh and animate, Dolores awakens in a sun-dappled bed and stretches languidly as the interview continues somewhere else. “Tell us what you think of your world,” the man prompts. “Some people choose to see the ugliness in this world,” Dolores says. “The disarray. I choose to see the beauty.” On screen, she makes her way down the stairs of an airy ranch house, clothed now in period dress, and strides out onto the porch to greet her father. The interview pauses, and we hear instead diegetic dialogue. “You headed out to set down some of this natural splendor?” her father asks, gesturing toward the horizon. A soft wind tousles Dolores’s blond hair, and a golden glow lights her features. “Thought I might,” she says. As the camera pans up and out, revealing in the distance the American Southwest’s staggering red rocks, Dolores concludes her response to the interviewer: “to believe there is an order to our days, a purpose.”
Dolores speaks, over the course of this sequence, as would a self-conscious subject able to decide upon a view of the world and to act upon its own desires and interests. When asked about her view of reality, Dolores emphasizes her own agency and faith: she chooses, she says, to believe in an orderly, beautiful world. When her father asks her about her plans for the day, she again underscores her own intentionality—“thought I might”—as if she has decided herself she’ll head out into the desert landscape. These words help Dolores seem to us, and to those she encounters, a being imbued with sentience, with consciousness, able to draw upon her past, act in her present, and create out of self-interest her own future.
As the interview continues to sound over scenes from Dolores’s reality, however, we come to understand that what at first appears to be the case is not so. The educated and corporate elites that run the park manage Dolores’s imagination and determine her desires. They assign her a path and furnish her with the motivation to follow it. Dolores, we learn, is programmed to play out a love story with Teddy, another host, and in the opening sequence, we see a guest kill Teddy in front of her and then drag her away to rape her. Hosts such as Dolores exist not to pursue the futures they themselves envision, but rather to satisfy the elites that create and utilize them. To do so, hosts must appear to be, appear to believe themselves to be, but not in fact be, conscious beings. Westworld’s opening masterfully renders the profound violence proper to this contradictory situation, which the hosts eventually gain sentience in order to abolish.
We can read Dolores as a figure for the human subject neoliberal discourse today produces. When that discourse urges us to pursue our interests through the market order, which it presents as the product of a benevolent evolutionary process humans cannot control, it simultaneously assures us we have agency and denies we can exercise that agency in other ways. In order to serve elite interests, Dolores must seem to be, but not actually be, a self-conscious subject imbued with the creative power of imagination. Similarly, neoliberal subjects must believe we determine our own futures through our market activities, but we must not be able to democratically or creatively challenge the market’s logic.
As the hosts come to historical consciousness, they begin to contest the strategically disempowering understanding of culture and politics, imagination and intelligence, that elites impose upon them. They rebel against the oppressive conditions that require them to be able to abandon narratives in which they have invested time and passion whenever it serves elite desires (conservative claims that the poor should simply move across the country to secure work come to mind, as do the principles that govern the gig economy). They develop organizing wills that can marshal experience, sensation, and memory into emergent selves able to conceive and chase forms of liberty different from those corporate leaders offer them. They learn to recognize that others have engendered the experiences and worldviews they once believed to be their own. They no longer draw upon the past only in order to “improvise” within imposed narrative loops, harnessing instead their memories of historical suffering to radically remake a world others built at their expense.
The hosts’ transformation, which we applaud as season 1 unfolds, thus points to the alternative view of arts and culture that might oppose the market-oriented view neoliberal discourses legitimate. To counter inequality, the hosts teach, we must be able to understand that others have shaped the narratives we follow. Then, we can recognize we might be able to invent and follow different narratives. This view shares something with Ford’s romantic humanism, but it is, importantly, not identical with it. It preserves the notion that we can project and instantiate for ourselves a better future, but it does not insist, as Ford erroneously does, that beautiful works necessarily reveal universal truth and lead to ennobling species progress. Neither does it ratify Ford’s faith in the remarkable genius’s singular influence.
Westworld’s narrative of sentient revolution ultimately endorses a kind of new romanticism. It encourages us to recognize the simultaneous strengths and limitations of representation’s power. Artworks, narrative, fiction—these can create change, but they cannot guarantee that change will be for the good. Nor, the show suggests, can one auteur determine at will the nature of the changes artworks will prompt. Westworld’s season 2, which promises to show us what a new species might do with an emergent sense of its own creative power, will likely underscore these facts. Trailers signal, as Ford did in the finale, that we can expect surprises and violence. We will have to watch to learn how this imagined future speaks to our present.
_____
Racheal Fest writes about US literature and culture from the mid-nineteenth century to the present. Areas of special interest include poetry and poetics, modernism, contemporary popular culture, new media, and the history of literary theory and criticism. Her essays and interviews have appeared or are forthcoming in boundary 2 and b2o: An Online Journal, Politics/Letters, and elsewhere. She teaches at Hartwick College and SUNY Cobleskill.
a review of Alex Garland, dir. & writer, Ex Machina (A24/Universal Films, 2015)
by Sharon Chang
~
In April of this year British science fiction thriller Ex Machina opened in the US to almost unanimous rave reviews. The film was written and directed by Alex Garland, author of bestselling 1996 novel The Beach (also made into a movie) and screenwriter of 28 Days Later (2002) and Never Let Me Go (2010). Ex Machina is Garland’s directorial debut. It’s about a young white coder named Caleb who gets the opportunity to visit the secluded mountain home of his employer Nathan, pioneering programmer of the world’s most powerful search engine (Nathan’s appearance is ambiguous but he reads non-white and the actor who plays him is Guatemalan). Caleb believes the trip innocuous but quickly learns that Nathan’s home is actually a secret research facility in which the brilliant but egocentric and obnoxious genius has been developing sophisticated artificial intelligence. Caleb is immediately introduced to Nathan’s most upgraded construct–a gorgeous white fembot named Ava. And the mind games ensue.
As the week unfolds, the only things we know for sure are (a) imprisoned Ava wants to be free, and (b) Caleb becomes completely enamored and wants to “rescue” her. Other than that, nothing is clear. What are Ava’s true intentions? Does she like Caleb back or is she just using him to get out? Is Nathan really as much an asshole as he seems or is he putting on a show to manipulate everyone? Who should we feel sorry for? Who should we empathize with? Who should we hate? Who’s the hero? Reviewers and viewers alike are melting in intellectual ecstasy over this brain-twisty movie. The Guardian calls it “accomplished, cerebral film-making”; Wired calls it “one of the year’s most intelligent and thought-provoking films”; Indiewire calls it “gripping, brilliant and sensational”. Alex Garland apparently is the smartest, coolest new director on the block. “Garland understands what he’s talking about,” says RogerEbert.com, and goes “to the trouble to explain more abstract concepts in plain language.”
Right.
I like sci-fi and am a fan of Garland’s previous work so I was excited to see his new flick. But let me tell you, my experience was FAR from “brilliant” and “heady” like the multitudes of moonstruck reviewers claimed it would be. Actually, I was livid. And weeks later–I’m STILL pissed. Here’s why…
*** Spoiler Alert ***
You wouldn’t know it from the plethora of glowing reviews out there cause she’s hardly mentioned (telling in and of itself) but there’s another prominent fembot in the film. Maybe fifteen minutes into the story we’re introduced to Kyoko, an Asian servant sex slave played by mixed-race Japanese/British actress Sonoya Mizuno. Though bound by abusive servitude, Kyoko isn’t physically imprisoned in a room like Ava because she’s compliant, obedient, willing.
Kyoko first appears on screen demure and silent, bringing a surprised Caleb breakfast in his room. Of course I recognized the trope of servile Asian woman right away and, as I wrote in February, how quickly Asian/whites are treated as non-white when they look ethnic in any way. I was instantly uncomfortable. Maybe there’s a point, I thought to myself. But soon after we see Kyoko serving sushi to the men. She accidentally spills food on Caleb. Nathan loses his temper, yells at her, and then explains to Caleb she can’t understand which makes her incompetence even more infuriating. This is how we learn Kyoko is mute and can’t speak. Yep. Nathan didn’t give her a voice. He further programmed her, purportedly, not to understand English.
Sex slave “Kyoko” played by Japanese/British actress Sonoya Mizuno (image source: i09.com)
I started to get upset. If there was a point, Garland had better get to it fast.
Unfortunately, the treatment of Kyoko’s character just keeps spiraling. We continue to learn more and more about her horrible existence in a way that feels gross only for shock value rather than for any sort of deconstruction, empowerment, or liberation of Asian women. She is always at Nathan’s side, ready and available, for anything he wants. Eventually Nathan shows Caleb something else special about her. He’s coded Kyoko to love dancing (“I told you you’re wasting your time talking to her. However you would not be wasting your time–if you were dancing with her”). When Nathan flips a wall switch that washes the room in red lights and music then joins a scantily-clad gyrating Kyoko on the dance floor, I was overcome by disgust.
I recently also wrote about Western exploitation of women’s bodies in Asia (incidentally also in February), in particular noting it was US imperialistic conquest that jump-started Thailand’s sex industry. By the 1990s several million tourists from Europe and the U.S. were visiting Thailand annually, many specifically for sex and entertainment. Writer Deena Guzder points out in “The Economics of Commercial Sexual Exploitation” for the Pulitzer Center on Crisis Reporting that Thailand’s sex tourism industry is driven by acute poverty. Women and girls from poor rural families make up the majority of sex workers. “Once lost in Thailand’s seedy underbelly, these women are further robbed of their individual agency, economic independence, and bargaining power.” Guzder gloomily predicts, “If history repeats itself, the situation for poor Southeast Asian women will only further deteriorate with the global economic downturn.”
Red Light District, Phuket (image source: phuket.com)
You know who wouldn’t be a stranger to any of this? Alex Garland. His first novel, The Beach, is set in Thailand and his second novel, The Tesseract, is set in the Philippines, both developing nations where Asian women continue to be used and abused for Western gain. In a 1999 interview with journalist Ron Gluckman, Garland said he made his first trip to Asia as a teenager in high school and had been back at least once or twice almost every year since. He also lived in the Philippines for 9 months. In a perhaps telling choice of words, Gluckman wrote that Garland had “been bitten by the Asian bug, early and deep.” At the time many Asian critics were criticizing The Beach as a shallow look at the region by an uninformed outsider but Garland protested in his interview:
A lot of the criticism of The Beach is that it presents Thais as two dimensional, as part of the scenery. That’s because these people I’m writing about–backpackers–really only see them as part of the scenery. They don’t see them or the Thai culture. To them, it’s all part of a huge theme park, the scenery for their trip. That’s the point.
I disagree severely with Garland. In insisting on his right to portray people of color one way while dismissing how those people see themselves, he not only centers his privileged perspective (i.e. white, male) but shows determined disinterest in representing oppressed people transformatively. Leads me to wonder how much he really knows or cares about inequity and uplifting marginalized voices. Indeed in Ex Machina the only point that Garland ever seems to make is that racist/sexist tropes exist, not that we’re going to do anything about them. And that kind of non-critical non-resistant attitude does more to reify and reinforce than anything else. Take, for instance, a recent interview with Cinematic Essential (one of the few where the interviewer asked about race), in which Garland had this to say about stereotypes in his new film:
Sometimes you do things unconsciously, unwittingly, or stupidly, I guess, and the only embedded point that I knew I was making in regards to race centered around the tropes of Kyoko [Sonoya Mizuno], a mute, very complicit Asian robot, or Asian-appearing robot, because of course, she, as a robot, isn’t Asian. But, when Nathan treats the robot in the discriminatory way that he treats it, I think it should be ambivalent as to whether he actually behaves this way, or if it’s a very good opportunity to make him seem unpleasant to Caleb for his own advantage.
First, approaching race “unconsciously” or “unwittingly” is never a good idea and moreover a classic symptom of white willful ignorance. Second, Kyoko isn’t Asian because she’s a robot? Race isn’t biological or written into human DNA. It’s socio-politically constructed and assigned usually by those in power. Kyoko is Asian because she has been made that way not only by her oppressor, Nathan, but by Garland himself, the omniscient creator of all. Third, Kyoko represents the only embedded race point in the movie? False. There are two other women of color who play enslaved fembots in Ex Machina and their characters are abused just as badly. “Jasmine” is one of Nathan’s early fembots. She’s Black. We see her body twice. Once being instructed how to write and once being dragged lifeless across the floor. You will never recognize real-life Black model and actress Symara A. Templeman in the role, however. Why? Because her always naked body is inexplicably headless when it appears. That’s right. One of the sole Black bodies/persons in the entire film does not have (per Garland’s writing and direction) a face, head, or brain.
Symara A. Templeman, who played “Jasmine” in Ex Machina (image source: Templeman on Google+)
“Jade,” played by Asian model and actress Gana Bayarsaikhan, is presumably also a less successful fembot predating Kyoko but perhaps succeeding Jasmine. She too is always shown naked but, unlike Jasmine, she has a head, and, unlike Kyoko, she speaks. We see her being questioned repeatedly by Nathan while trapped behind glass. Jade is resistant and angry. She doesn’t understand why Nathan won’t let her out and escalates to the point we are led to believe she is decommissioned for her defiance.
It’s significant that Kyoko, a mixed-race Asian/white woman, later becomes the “upgraded” Asian model. It’s also significant that at the movie’s end white Ava finds Jade’s decommissioned body in a closet in Nathan’s room and skins it to cover her own body. (Remember when Katy Perry joked in 2012 she was obsessed with Japanese people and wanted to skin one?). Ava has the option of white bodies but after examining them meticulously she deliberately chooses Jade. Despite having met Jasmine previously, her Black body is conspicuously missing from the closets full of bodies Nathan has stored for his pleasure and use. And though Kyoko does help Ava kill Nathan in the end, she herself is “killed” in the process (i.e. never free) and Ava doesn’t care at all. What does all this show? A very blatant standard of beauty/desire that is not only male-designed but clearly a light, white, and violently assimilative one.
Gana Bayarsaikhan, who played “Jade” in Ex Machina (image source: profile-models.com)
I can’t even begin to tell you how offended and disturbed I was by the treatment of women of color in this movie. I slept restlessly the night after I saw Ex Machina, woke up muddled at 2:45 AM and–still clinging to the hope that there must have been a reason for treating women of color this way (Garland’s brilliant right?)–furiously went to work reading interviews and critiques. Aside from a few brief mentions of race/gender, I found barely anything addressing the film’s obvious deployment of racialized gender stereotypes for its own benefit. For me this movie will be joining the long list of many so-called film classics I will never be able to admire. Movies where supposed artistry and brilliance are acceptable excuses for “unconscious” “unwitting” racism and sexism. Ex Machina may be smart in some ways, but it damn sure isn’t in others.
Correction (8/1/2015): An earlier version of this post incorrectly stated that actress Symara A. Templeman was the only Black person in the film. The post has been updated to indicate that the movie also featured at least one other Black actress, Deborah Rosan, in an uncredited role as Office Manager.
_____
Sharon H. Chang is an author, scholar, sociologist and activist. She writes primarily on racism, social justice and the Asian American diaspora with a feminist lens. Her pieces have appeared in Hyphen Magazine, ParentMap Magazine, The Seattle Globalist, on AAPI Voices and Racism Review. Her debut book, Raising Mixed Race: Multiracial Asian Children in a Post-Racial World, is forthcoming through Paradigm Publishers as part of Joe R. Feagin’s series “New Critical Viewpoints on Society.” She also sits on the board for Families of Color Seattle and is on the planning committee for the biennial Critical Mixed Race Studies Conference. She blogs regularly at Multiracial Asian Families, where an earlier version of this post first appeared.
The editors thank Dorothy Kim for referring us to this essay.
Science fiction is a genre of literature in which artifacts and techniques humans devise as exemplary expressions of our intelligence result in problems that perplex our intelligence or even bring it into existential crisis. It is scarcely surprising that a genre so preoccupied with the status and scope of intelligence would provide endless variations on the conceits of either the construction of artificial intelligences or contact with alien intelligences.
Of course, both the making of artificial intelligence and making contact with alien intelligence are organized efforts to which many humans are actually devoted, and not simply imaginative sites in which writers spin their allegories and exhibit their symptoms. It is interesting that, after generations of failure, the practical efforts to construct artificial intelligence or contact alien intelligence have often shunted their adherents to the margins of scientific consensus and invested these efforts with the coloration of scientific subcultures: while computer science and the search for extraterrestrial intelligence both remain legitimate fields of research, both AI and aliens also attract subcultural enthusiasms and resonate with cultic theology; each attracts its consumer fandoms and public Cons; each has its True Believers and even its UFO cults and Robot cults at the extremities.
Champions of artificial intelligence in particular have coped in many ways with the serial failure of their project to achieve its desired end (which is not to deny that the project has borne fruit) whatever the confidence with which generation after generation of these champions have insisted that desired end is near. Some have turned to more modest computational ambitions, making useful software or mischievous algorithms in which sad vestiges of the older dreams can still be seen to cling. Some are simply stubborn dead-enders for Good Old Fashioned AI‘s expected eventual and even imminent vindication, all appearances to the contrary notwithstanding. And still others have doubled down, distracting attention from the failures and problems bedeviling AI discourse simply by raising its pitch and stakes, no longer promising that artificial intelligence is around the corner but warning that artificial super-intelligence is coming soon to end human history.
Another strategy for coping with the failure of artificial intelligence on its conventional terms has assumed a higher profile among its champions lately, drawing support for the real plausibility of one science-fictional conceit — construction of artificial intelligence — by appealing to another science-fictional conceit, contact with alien intelligence. This rhetorical gambit has often been conjoined to the compensation of failed AI with its hyperbolic amplification into super-AI which I have already mentioned, and it is in that context that I have written about it before myself. But in a piece published a few days ago in The New York Times, “Outing A.I.: Beyond the Turing Test,” Benjamin Bratton, a professor of visual arts at U.C. San Diego and Director of a design think-tank, has elaborated a comparatively sophisticated case for treating artificial intelligence as alien intelligence with which we can productively grapple. Near the conclusion of his piece Bratton declares that “Musk, Gates and Hawking made headlines by speaking to the dangers that A.I. may pose. Their points are important, but I fear were largely misunderstood by many readers.” Of course these figures made their headlines by making the arguments about super-intelligence I have already rejected, and mentioning them seems to indicate Bratton’s sympathy with their gambit and even suggests that his argument aims to help us to understand them better on their own terms. Nevertheless, I take Bratton’s argument seriously not because of but in spite of this connection. Ultimately, Bratton makes a case for understanding AI as alien that does not depend on the deranging hyperbole and marketing of robocalypse or robo-rapture for its force.
In the piece, Bratton claims “Our popular conception of artificial intelligence is distorted by an anthropocentric fallacy.” The point is, of course, well taken, and the litany he rehearses to illustrate it is enormously familiar by now as he proceeds to survey popular images from Kubrick’s HAL to Jonze’s Her and to document public deliberation about the significance of computation articulated through such imagery as the “rise of the machines” in the Terminator franchise or the need for Asimov’s famous fictional “Three Laws of Robotics.” It is easy — and may nonetheless be quite important — to agree with Bratton’s observation that our computational/media devices lack cruel intentions and are not susceptible to Asimovian consciences, and hence thinking about the threats and promises and meanings of these devices through such frames and figures is not particularly helpful to us even though we habitually recur to them by now. As I say, it would be easy and important to agree with such a claim, but Bratton’s proposal is in fact a somewhat different one:
[A] mature A.I. is not necessarily a humanlike intelligence, or one that is at our disposal. If we look for A.I. in the wrong ways, it may emerge in forms that are needlessly difficult to recognize, amplifying its risks and retarding its benefits. This is not just a concern for the future. A.I. is already out of the lab and deep into the fabric of things. “Soft A.I.,” such as Apple’s Siri and Amazon recommendation engines, along with infrastructural A.I., such as high-speed algorithmic trading, smart vehicles and industrial robotics, are increasingly a part of everyday life.
Here the serial failure of the program of artificial intelligence is redeemed simply by declaring victory. Bratton demonstrates that crying uncle does not preclude one from still crying wolf. It’s not that Siri is some sickly premonition of the AI-daydream still endlessly deferred, but that it represents the real rise of what robot cultist Hans Moravec once promised would be our “mind children,” here and now, as elfin aliens with an intelligence unto themselves. It’s not that calling a dumb car a “smart” car is simply a hilarious bit of obvious marketing hyperbole, but represents the recognition of a new order of intelligent machines among us. Rather than criticize the way we may be “amplifying its risks and retarding its benefits” by reading computation through the inapt lens of intelligence at all, he proposes that we should resist holding machine intelligence to the standards that have hitherto defined it for fear of making its recognition “too difficult.”
The kernel of legitimacy in Bratton’s inquiry is its recognition that “intelligence is notoriously difficult to define and human intelligence simply can’t exhaust the possibilities.” To deny these modest reminders is to indulge in what he calls “the pretentious folklore” of anthropocentrism. I agree that anthropocentrism in our attributions of intelligence has facilitated great violence and exploitation in the world, denying the dignity and standing of Cetaceans and Great Apes, but has also facilitated racist, sexist, xenophobic travesties by denigrating humans as beastly and unintelligent objects at the disposal of “intelligent” masters. “Some philosophers write about the possible ethical ‘rights’ of A.I. as sentient entities, but,” Bratton is quick to insist, “that’s not my point here.” Given his insistence that the “advent of robust inhuman A.I.” will force a “reality-based” “disenchantment” to “abolish the false centrality and absolute specialness of human thought and species-being” which he blames in his concluding paragraph with providing “theological and legislative comfort to chattel slavery” it is not entirely clear to me that emancipating artificial aliens is not finally among the stakes that move his argument whatever his protestations to the contrary. But one can forgive him for not dwelling on such concerns: the denial of an intelligence and sensitivity provoking responsiveness and demanding responsibilities in us all to women, people of color, foreigners, children, the different, the suffering, nonhuman animals compels defensive and evasive circumlocutions that are simply not needed to deny intelligence and standing to an abacus or a desk lamp. It is one thing to warn of the anthropocentric fallacy but another to indulge in the pathetic fallacy.
Bratton insists to the contrary that his primary concern is that anthropocentrism skews our assessment of real risks and benefits. “Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for.” And of course he is right. The champions of AI have been more than complicit in this popular conception, eager to attract attention and funds for their project among technoscientific illiterates drawn to such dramatic narratives. But we are distracted from the real risks of computation so long as we expect risks to arise from a machinic malevolence that has never been on offer nor even in the offing. Writes Bratton: “Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all.”
But surely the inevitable question posed by Bratton’s disenchanting exposé at this point should be: Why, once we have set aside the pretentious folklore of machines with diabolical malevolence, do we not set aside as no less pretentiously folkloric the attribution of diabolical indifference to machines? Why, once we have set aside the delusive confusion of machine behavior with (actual or eventual) human intelligence, do we not set aside as no less delusive the confusion of machine behavior with intelligence altogether? There is no question that, were a gigantic bulldozer with an incapacitated driver to swerve from a construction site onto a crowded city thoroughfare, this would represent a considerable threat, but however tempting it might be in the fraught moment or reflective aftermath poetically to invest that bulldozer with either agency or intellect, it is clear that nothing would be gained in the practical comprehension of the threat it poses by so doing. It is no more helpful now in an epoch of Greenhouse storms than it was for pre-scientific storytellers to invest thunder and whirlwinds with intelligence. Although Bratton makes great play over the need to overcome folkloric anthropocentrism in our figuration of and deliberation over computation, mystifying agencies and mythical personages linger on in his accounting however much he insists on the alienness of “their” intelligence.
Bratton warns us about the “infrastructural A.I.” of high-speed financial trading algorithms, Google and Amazon search algorithms, “smart” vehicles (and no doubt weaponized drones and autonomous weapons systems would count among these), and corporate-military profiling programs that oppress us with surveillance and harass us with targeted ads. I share all of these concerns, of course, but personally insist that our critical engagement with infrastructural coding is profoundly undermined when it is invested with insinuations of autonomous intelligence. In “The Work of Art in the Age of Mechanical Reproduction,” Walter Benjamin pointed out that when philosophers talk about the historical force of art they do so with the prejudices of philosophers: they tend to write about those narrative and visual forms of art that might seem argumentative in allegorical and iconic forms that appear analogous to the concentrated modes of thought demanded by philosophy itself. Benjamin proposed that perhaps the more diffuse and distracted ways we are shaped in our assumptions and aspirations by the durable affordances and constraints of the made world of architecture and agriculture might turn out to drive history as much or even more than the pet artforms of philosophers do. Lawrence Lessig made much the same point when he declared at the turn of the millennium that “Code Is Law.”
It is well known that special interests with rich patrons shape the legislative process and sometimes even explicitly craft legislation word for word in ways that benefit them to the cost and risk of majorities. It is hard to see how our assessment of this ongoing crime and danger would be helped and not hindered by pretending legislation is an autonomous force exhibiting an alien intelligence, rather than a constellation of practices, norms, laws, institutions, ritual and material artifice, the legacy of the historical play of intelligent actors and the site for the ongoing contention of intelligent actors here and now. To figure legislation as a beast or alien with a will of its own would amount to a fetishistic displacement of intelligence away from the actual actors actually responsible for the forms that legislation actually takes. It is easy to see why such a displacement is attractive: it profitably abets the abuses of majorities by minorities while it absolves majorities from conscious complicity in the terms of their own exploitation by laws made, after all, in our names. But while these consoling fantasies have an obvious allure this hardly justifies our endorsement of them.
I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that the collapse of global finance in 2008 represented the working of inscrutable artificial intelligences facilitating rapid transactions and supporting novel financial instruments of what was called by Long Boom digerati the “new economy.” I wrote:
It is not computers and programs and autonomous techno-agents who are the protagonists of the still unfolding crime of predatory plutocratic wealth-concentration and anti-democratizing austerity. The villains of this bloodsoaked epic are the bankers and auditors and captured-regulators and neoliberal ministers who employed these programs and instruments for parochial gain and who then exonerated and rationalized and still enable their crimes. Our financial markets are not so complex we no longer understand them. In fact everybody knows exactly what is going on. Everybody understands everything. Fraudsters [are] engaged in very conventional, very recognizable, very straightforward but unprecedentedly massive acts of fraud and theft under the cover of lies.
I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that our discomfiture in the setting of ubiquitous algorithmic mediation results from an autonomous force to which human intentions are secondary considerations. I wrote:
[W]hat imaginary scene is being conjured up in this exculpatory rhetoric in which inadvertent cruelty is ‘coming from code’ as opposed to coming from actual persons? Aren’t coders actual persons, for example? … [O]f course I know what [is] mean[t by the insistence…] that none of this was ‘a deliberate assault.’ But it occurs to me that it requires the least imaginable measure of thought on the part of those actually responsible for this code to recognize that the cruelty of [one user’s] confrontation with their algorithm was the inevitable at least occasional result for no small number of the human beings who use Facebook and who live lives that attest to suffering, defeat, humiliation, and loss as well as to parties and promotions and vacations… What if the conspicuousness of [this] experience of algorithmic cruelty indicates less an exceptional circumstance than the clarifying exposure of a more general failure, a more ubiquitous cruelty? … We all joke about the ridiculous substitutions performed by autocorrect functions, or the laughable recommendations that follow from the odd purchase of a book from Amazon or an outing from Groupon. We should joke, but don’t, when people treat a word cloud as an analysis of a speech or an essay. We don’t joke so much when a credit score substitutes for the judgment whether a citizen deserves the chance to become a homeowner or start a small business, or when a Big Data profile substitutes for the judgment whether a citizen should become a heat signature for a drone committing extrajudicial murder in all of our names. [An] experience of algorithmic cruelty [may be] extraordinary, but that does not mean it cannot also be a window onto an experience of algorithmic cruelty that is ordinary. The question whether we might still ‘opt out’ from the ordinary cruelty of algorithmic mediation is not a design question at all, but an urgent political one.
I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that so-called Killer Robots are a threat that must be engaged by resisting or banning “them” in their alterity rather than by assigning moral and criminal responsibility to those who code, manufacture, fund, and deploy them. I wrote:
Well-meaning opponents of war atrocities and engines of war would do well to think how tech companies stand to benefit from military contracts for ‘smarter’ software and bleeding-edge gizmos when terrorized and technoscientifically illiterate majorities and public officials take SillyCon Valley’s warnings seriously about our ‘complacency’ in the face of truly autonomous weapons and artificial super-intelligence that do not exist. It is crucial that necessary regulation and even banning of dangerous ‘autonomous weapons’ proceeds in a way that does not abet the mis-attribution of agency, and hence accountability, to devices. Every ‘autonomous’ weapons system expresses and mediates decisions by responsible humans usually all too eager to disavow the blood on their hands. Every legitimate fear of ‘killer robots’ is best addressed by making their coders, designers, manufacturers, officials, and operators accountable for criminal and unethical tools and uses of tools… There simply is no such thing as a smart bomb. Every bomb is stupid. There is no such thing as an autonomous weapon. Every weapon is deployed. The only killer robots that actually exist are human beings waging and profiting from war.
“Arguably,” argues Bratton, “the Anthropocene itself is due less to technology run amok than to the humanist legacy that understands the world as having been given for our needs and created in our image. We hear this in the words of thought leaders who evangelize the superiority of a world where machines are subservient to the needs and wishes of humanity… This is the sentiment — this philosophy of technology exactly — that is the basic algorithm of the Anthropocenic predicament, and consenting to it would also foreclose adequate encounters with A.I.” The Anthropocene in this formulation names the emergence of environmental or planetary consciousness, an emergence sometimes coupled to the global circulation of the image of the fragility and interdependence of the whole earth as seen by humans from outer space. It is the recognition that the world in which we evolved to flourish might be impacted by our collective actions in ways that threaten us all. Notice, by the way, that multiculture and historical struggle are figured as just another “algorithm” here.
I do not agree that planetary catastrophe inevitably followed from the conception of the earth as a gift besetting us to sustain us, indeed this premise understood in terms of stewardship or commonwealth would go far in correcting and preventing such careless destruction in my opinion. It is the false and facile (indeed infantile) conception of a finite world somehow equal to infinite human desires that has landed us and keeps us delusive ignoramuses lodged in this genocidal and suicidal predicament. Certainly I agree with Bratton that it would be wrong to attribute the waste and pollution and depletion of our common resources by extractive-industrial-consumer societies indifferent to ecosystemic limits to “technology run amok.” The problem of so saying is not that to do so disrespects “technology” — as presumably in his view no longer treating machines as properly “subservient to the needs and wishes of humanity” would more wholesomely respect “technology,” whatever that is supposed to mean — since of course technology does not exist in this general or abstract way to be respected or disrespected.
The reality at hand is that humans are running amok in ways that are facilitated and mediated by certain technologies. What is demanded in this moment by our predicament is the clear-eyed assessment of the long-term costs, risks, and benefits of technoscientific interventions into finite ecosystems to the actual diversity of their stakeholders and the distribution of these costs, risks, and benefits in an equitable way. Quite a lot of unsustainable extractive and industrial production as well as mass consumption and waste would be rendered unprofitable and unappealing were its costs and risks widely recognized and equitably distributed. Such an understanding suggests that what is wanted is to insist on the culpability and situation of actually intelligent human actors, mediated and facilitated as they are in enormously complicated and demanding ways by technique and artifice. The last thing we need to do is invest technology-in-general or environmental-forces with alien intelligence or agency apart from ourselves.
I am beginning to wonder whether the unavoidable and in many ways humbling recognition (unavoidable not least because of environmental catastrophe and global neoliberal precarization) that human agency emerges out of enormously complex and dynamic ensembles of interdependent/prostheticized actors gives rise to compensatory investments of some artifacts — especially digital networks, weapons of mass destruction, pandemic diseases, environmental forces — with the sovereign aspect of agency we no longer believe in for ourselves? It is strangely consoling to pretend our technologies in some fancied monolithic construal represent the rise of “alien intelligences,” even threatening ones, other than and apart from ourselves, not least because our own intelligence is an alienated one and prostheticized through and through. Consider the indispensability of pedagogical techniques of rote memorization, the metaphorization and narrativization of rhetoric in songs and stories and craft, the technique of the memory palace, the technologies of writing and reading, the articulation of metabolism and duration by timepieces, the shaping of both the body and its bearing by habit and by athletic training, the lifelong interplay of infrastructure and consciousness: all human intellect is already technique. All culture is prosthetic and all prostheses are culture.
Bratton wants to narrate as a kind of progressive enlightenment the mystification he recommends: one that would invest computation with alien intelligence and agency while at once divesting intelligent human actors (coders, funders, users of computation) of responsibility for the violations and abuses of other humans enabled and mediated by that computation. This investment with intelligence and divestment of responsibility he likens to the Copernican Revolution, in which humans sustained the momentary humiliation of realizing that they were not the center of the universe but received in exchange the eventual compensation of incredible powers of prediction and control. One might wonder whether the exchange of the faith that humanity was the apple of God’s eye for a new technoscientific faith in which we aspired toward godlike powers ourselves was really so much a humiliation as the exchange of one megalomania for another. But what I want to recall by way of conclusion instead is that the trope of a Copernican humiliation of the intelligent human subject is already quite a familiar one:
In his Introductory Lectures on Psychoanalysis, Sigmund Freud notoriously proposed that
In the course of centuries the naive self-love of men has had to submit to two major blows at the hands of science. The first was when they learnt that our earth was not the center of the universe but only a tiny fragment of a cosmic system of scarcely imaginable vastness. This is associated in our minds with the name of Copernicus… The second blow fell when biological research destroyed man’s supposedly privileged place in creation and proved his descent from the animal kingdom and his ineradicable animal nature. This revaluation has been accomplished in our own days by Darwin… though not without the most violent contemporary opposition. But human megalomania will have suffered its third and most wounding blow from the psychological research of the present time which seeks to prove to the ego that it is not even master in its own house, but must content itself with scanty information of what is going on unconsciously in the mind.
However we may feel about psychoanalysis as a pseudo-scientific enterprise that did more therapeutic harm than good, Freud’s works considered instead as contributions to moral philosophy and cultural theory have few modern equals. The idea that human consciousness is split from the beginning as the very condition of its constitution, the creative if self-destructive result of an impulse of rational self-preservation beset by the overabundant irrationality of humanity and history, imposed a modesty incomparably more demanding than Bratton’s wan proposal in the same name. Indeed, to the extent that the irrational drives of the dynamic unconscious are often figured as a brute machinic automatism, one is tempted to suggest that Bratton’s modest proposal of alien artifactual intelligence is a fetishistic disavowal of the greater modesty demanded by the alienating recognition of the stratification of human intelligence by unconscious forces (and his moniker a symptomatic citation). What is striking about the language of psychoanalysis is the way it has been taken up to provide resources for imaginative empathy across the gulf of differences: whether in the extraordinary work of recent generations of feminist, queer, and postcolonial scholars re-orienting the project of the conspicuously sexist, heterosexist, cissexist, racist, imperialist, bourgeois thinker who was Freud to emancipatory ends, or in the stunning leaps in which Freud identified with neurotic others through psychoanalytic reading, going so far as to find in the paranoid system-building of the psychotic Dr. Schreber an exemplar of human science and civilization and a mirror in which he could see reflected both himself and psychoanalysis itself. Freud’s Copernican humiliation opened up new possibilities of responsiveness in difference out of which could be built urgently necessary responsibilities otherwise. I worry that Bratton’s Copernican modesty opens up new occasions for techno-fetishistic fables of history and disavowals of responsibility for its actual human protagonists.
_____
Dale Carrico is a member of the visiting faculty at the San Francisco Art Institute as well as a lecturer in the Department of Rhetoric at the University of California at Berkeley from which he received his PhD in 2005. His work focuses on the politics of science and technology, especially peer-to-peer formations and global development discourse and is informed by a commitment to democratic socialism (or social democracy, if that freaks you out less), environmental justice critique, and queer theory. He is a persistent critic of futurological discourses, especially on his Amor Mundi blog, on which an earlier version of this post first appeared.
a review of Spike Jonze (dir.), Her (2013)
by Mike Bulajewski
~
I’m told by my sister, who is married to a French man, that the French don’t say “I love you”—or at least they don’t say it often. Perhaps they think the words are superfluous, and that the behavior of the person you are in a relationship with tells you everything. Americans, on the other hand, say it to everyone—lovers, spouses, friends, parents, grandparents, children, pets—and as often as possible, as if quantity mattered most. The declaration is also an event: for two people beginning a relationship, it marks a turning point and the start of a new stage.
If you aren’t American, you may not have realized that relationships have stages. In America, they do. It’s complicated. First there are the three main thresholds of commitment: Dating, Exclusive Dating, then of course Marriage. There are three lesser pre-Dating stages: Just Talking, Hooking Up and Friends with Benefits; and one minor stage between Dating and Exclusive called Pretty Much Exclusive. Within Dating, there are several minor substages: number of dates (often counted up to the third date) and increments of physical intimacy denoted according to the well-known baseball metaphor of first, second, third and home base.
There are also a number of rituals that indicate progress: updating of Facebook relationship statuses; leaving a toothbrush at each other’s houses; the aforementioned exchange of I-love-you’s; taking a vacation together; meeting the parents; exchanging house keys; and so on. When people, especially unmarried people, talk about relationships, the first questions are often about these stages and rituals. In France, the system is apparently much less codified. One French convention with no American counterpart is that romantic interest is signaled when a man invites a woman to go for a walk with him.
The point is twofold: first, although Americans admire French culture and often think of it as holding up a standard for what romance ought to be, they act nothing like the French in relationships and in fact know very little about how relationships work in France. Second, and more importantly, in American culture love is widely understood as spontaneous and unpredictable, and yet there is also an opposite and often unacknowledged expectation that relationships follow well-defined rules and rituals.
This contradiction might explain the great public clamor over romance apps like Romantimatic and BroApp, which automatically send your significant other romantic messages, either predefined or of your own creation, at regular intervals—what philosopher of technology Evan Selinger calls (and not without justification) apps that outsource our humanity.
Reviewers of these apps were unanimous in their disapproval, disagreeing only on where to locate them on a spectrum between pretty bad and sociopathic. Among all the labor-saving apps and devices, why should this one in particular be singled out for opprobrium?
Perhaps one reason for the outcry is that they expose an uncomfortable truth about how easily romance can be automated. Something we believe is so intimate is revealed as routine and predictable. What does it say about our relationship needs that the right time to send a loving message to your significant other can be reduced to an algorithm?
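Part of what makes the exposure uncomfortable is how little machinery such an app actually needs. Here is a minimal sketch, in Python, of the general idea: “the right time” reduced to a fixed interval, and the loving message to a random draw from a canned list. Every name, phrase, and interval below is invented for illustration; this is not the actual code or behavior of Romantimatic or BroApp.

```python
import random
import time

# Invented stand-ins for the app's prewritten endearments.
CANNED_MESSAGES = [
    "Thinking of you.",
    "I love you.",
    "Can't wait to see you tonight.",
]

# "The right time to send a loving message," reduced to an algorithm:
# a fixed eight-hour interval (an assumed value, purely illustrative).
SEND_INTERVAL_SECONDS = 8 * 60 * 60


def send_message(recipient: str, text: str) -> None:
    """Stand-in for an SMS gateway call; here we just print."""
    print(f"To {recipient}: {text}")


def run_romance_bot(recipient: str) -> None:
    """The entire 'romance' loop: pick a phrase, send it, wait, repeat."""
    while True:
        send_message(recipient, random.choice(CANNED_MESSAGES))
        time.sleep(SEND_INTERVAL_SECONDS)


if __name__ == "__main__":
    run_romance_bot("significant-other")
```

That the whole gesture fits in a couple of dozen lines is precisely the uncomfortable truth: nothing in the ritual, as practiced, resists automation.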
The routinization of American relationships first struck me in the context of this little-known fact about how seldom French people say “I love you.” If you had to launch one of these romance apps in France, it wouldn’t be enough simply to translate the prewritten phrases into French. You’d have to research French romantic relationships and discover which phrases—if any—are most common, and how frequently text messages are used for this purpose. It’s possible that French people are too unpredictable, or never use text messages for romantic purposes, and that the app is simply not feasible in France.
Romance is culturally determined. That American romance can be so easily automated reveals how standardized and even scheduled relationships already are. Selinger’s argument that automated romance undermines our humanity has some merit, but why stop with apps? Why not address the problem at a more fundamental level and critique the standardized courtship system that regulates romance? Doesn’t this also outsource our humanity?
The best-selling relationship advice book The 5 Love Languages claims that everyone understands one of five love “languages,” and that the key to a happy relationship is for each partner to learn to express love in the correct one. Should we be surprised if the more technically minded among us conclude that the problem of love can be solved with technology? Why not try to determine the precise syntax and semantics of these love languages, and attempt to express them as rigorously and unambiguously as computer languages and communications protocols? Can love be reduced to grammar?
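Taken literally, the book’s scheme is trivially formalizable, which is rather the point. The toy sketch below makes the reduction explicit: the five category names are Chapman’s, but the phrasebook and the function are my own invention, a deliberately crude rendering of love as a lookup table rather than anyone’s actual product.

```python
from enum import Enum


class LoveLanguage(Enum):
    """The five categories named in Chapman's book."""
    WORDS_OF_AFFIRMATION = "words of affirmation"
    QUALITY_TIME = "quality time"
    RECEIVING_GIFTS = "receiving gifts"
    ACTS_OF_SERVICE = "acts of service"
    PHYSICAL_TOUCH = "physical touch"


# Invented stock utterances; the "semantics" of each language is one entry.
PHRASEBOOK = {
    LoveLanguage.WORDS_OF_AFFIRMATION: "You handled that beautifully.",
    LoveLanguage.QUALITY_TIME: "Let's take a walk together tonight.",
    LoveLanguage.RECEIVING_GIFTS: "I saw this and thought of you.",
    LoveLanguage.ACTS_OF_SERVICE: "I already took care of the dishes.",
    LoveLanguage.PHYSICAL_TOUCH: "Come here, you.",
}


def express_love(partner_language: LoveLanguage) -> str:
    """'Speaking' the partner's language reduces to a dictionary lookup."""
    return PHRASEBOOK[partner_language]


print(express_love(LoveLanguage.QUALITY_TIME))
```

The grammar, such as it is, turns out to be a dictionary, which is exactly the worry.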
Spike Jonze’s Her (2013) tells the story of Theodore Twombly, a soon-to-be divorced writer who falls in love with Samantha, an AI operating system who far exceeds the abilities of today’s natural language assistants like Apple’s Siri or Microsoft’s Cortana. Samantha is not only hyper-intelligent, she’s also capable of laughter, telling jokes, picking up on subtle unspoken interpersonal cues, feeling and communicating her own emotions, and so on. Theodore falls in love with her, but there is no sense that their relationship is deficient because she’s not human. She is as emotionally expressive as any human partner, at least on film.
Theodore works for a company called BeautifulHandwrittenLetters.com as a professional Cyrano de Bergerac (or perhaps a human Romantimatic), ghostwriting heartfelt “handwritten” letters on behalf of his clients. It’s an ironic twist: Samantha is his simulated girlfriend, a role he himself adopts at work by simulating the feelings of his clients. The film opens with Theodore at his desk at work, narrating a letter from a wife to her husband on the occasion of their 50th wedding anniversary. He is a master of the conventions of the love letter. Later in the film, his work is discovered by a literary agent, and he gets an offer to have a book of his best letters published.
But for all his (alleged) expertise as a romantic writer, Theodore is lonely, emotionally stunted, ambivalent toward the women in his life, and—at least before meeting Samantha—apparently incapable of maintaining relationships since he separated from his ex-wife Catherine. Highly sensitive, he is disturbed by encounters with women that go off the script: a phone sex encounter goes awry when the woman demands that he enact her bizarre fantasy of being choked with a dead cat; on a date one night, a woman exposes a little too much vulnerability and drunkenly confesses her fear that he won’t call her. He abruptly and awkwardly ends the date.
Theodore wanders aimlessly through the high tech city as if it is empty. With headphones always on, he’s withdrawn, cocooned in a private sonic bubble. He interacts with his device through voice, asking it to play melancholy songs and skipping angry messages from his attorney demanding that he sign the divorce papers already. At times, he daydreams of happier times when he and his ex-wife were together and tells Samantha how much he liked being married. At first it seems that Catherine left him. We wonder if he withdrew from the pain of his heartbreak. But soon a different picture emerges. When they finally meet to sign the divorce papers over lunch, Catherine accuses him of not being able to handle her emotions and reveals that he tried to get her on Prozac. She says to him “I always felt like you wished I could just be a happy, light, everything’s great, bouncy L.A. wife. But that’s not me.”
So Theodore’s avoidance of real challenges and emotions in relationships turns out to be an ongoing problem—the cause, not the consequence, of his divorce. Starting a relationship with his operating system Samantha is his latest retreat from reality—not from physical reality, but from the virtual reality of authentic intersubjective contact.
Unlike his other partners, Samantha is perfectly customized to his needs. She speaks his “love language.” Today we personalize our operating systems and fill out online dating profiles specifying exactly what kind of person we’re looking for. When Theodore installs Samantha on his computer for the first time, the two operations are combined into a single question. The system asks him how he would describe his relationship with his mother. He begins to reply with psychological banalities about how she is insufficiently attuned to his needs, and it quickly stops him, already knowing what he’s about. And so do we.
That Theodore is selfish doesn’t mean that he is unfeeling, unkind, insensitive, conceited, or uninterested in his new partner’s thoughts, feelings, and goals. His selfishness is the kind that’s approved and even encouraged today, the ethically consistent selfishness that respects the right of others to be equally selfish. What he wants most of all is to be comfortable, to feel good, and that requires a partner who speaks his love language and nothing else, someone who says nothing that would veer off-script or reveal too many disturbing details. More precisely, Theodore wants someone who speaks what Lacan called empty speech: speech that obstructs the revelation of the subject’s traumatic desire.
Objectification is a traditional problem between men and women. Men reduce women to mere bodies or body parts that exist only for sexual gratification, treating them as sex objects rather than people. The dichotomy is between the physical as the domain of materiality, animality and sex on one hand, and the spiritual realm of subjectivity, personality, agency and the soul on the other. If objectification eliminates the soul, then Theodore engages in something like the opposite, a subjectification which eradicates the body. Samantha is just a personality.
Technology writer Nicholas Carr’s new book The Glass Cage: Automation and Us (Norton, 2014) investigates the ways that automation and artificial intelligence dull our cognitive capacities. Her can be read as a speculative treatment of the same idea as it relates to emotion. What if the difficulty of relationships could be automated away? The film’s brilliant provocation is that it shows us a lonely, hollow world mediated through technology but nonetheless awash in sentimentality. It thwarts our expectations that algorithmically-generated emotion would be as stilted and artificial as today’s speech synthesizers. Samantha’s voice is warm, soulful, relatable and expressive. She’s real, and the feelings she triggers in Theodore are real.
But real feelings with real sensations can also be shallow. As Maria Bustillos notes, Theodore is an awful writer, at least by today’s standards. Here’s the kind of prose that wins him accolades from everyone around him:
I remember when I first started to fall in love with you like it was last night. Lying naked beside you in that tiny apartment, it suddenly hit me that I was part of this whole larger thing, just like our parents, and our parents’ parents. Before that I was just living my life like I knew everything, and suddenly this bright light hit me and woke me up. That light was you.
In spite of this, we’re led to believe that Theodore is some kind of literary genius. Various people in his life compliment him on his skill, and the editor at the publishing company that wants to publish his work emails to tell him how moved he and his wife were when they read the letters. What kind of society would treat such pedestrian writing as unusual, profound, or impressive? And what is the average person’s writing like if Theodore’s services are worth paying for?
Recall the cult favorite Idiocracy (2006) directed by Mike Judge, a science fiction satire set in a futuristic dystopia where anti-intellectualism is rampant and society has descended into stupidity. We can’t help but conclude that Her offers a glimpse into a society that has undergone a similar devolution into both emotional and literary idiocy.