Detroit, Westworld, and moving androids beyond human
This post contains plot details from Detroit: Become Human, season two of Westworld, and Blade Runner 2049.
At the beginning of Detroit: Become Human, a video game about American androids fighting for equal rights, a character looks out from the television screen and says, directly to the player, “Remember: This is not just a story. This is our future.”
It’s a bold claim. As Detroit’s story unfolds, the game switches between three playable androids: household servant turned revolutionary leader Markus; Kara, a robot fleeing government persecution with the abused child she rescued from her former owner; and Connor, an agent of the delightfully named megacorp CyberLife who hunts down “deviant” androids disobeying their programming. Through their perspectives, we’re meant to observe a technological future the game wants us to believe is, in fact, soon to come.

Connor may sound familiar. That’s because he’s essentially a recast of Rick Deckard, the titular blade runner of Ridley Scott’s 1982 sci-fi classic. Both characters hunt aberrant robots, capturing and/or killing those who have broken free of their programming and are attempting to live outside their intended roles as servants to humanity.
In both Detroit and Blade Runner, the point of these robot hunters is to introduce the question of what separates humanity from a synthetic being so emotionally and intellectually advanced that it is indistinguishable from any member of our species. By the time we’ve watched the monologue from Blade Runner’s bleach-blond “replicant” Roy Batty (Rutger Hauer) about his memories vanishing “like tears in rain,” any hint of inhumanity feels irrelevant. He, like the soulful androids who populate Detroit, remembers his past the way we do. He loves. He can be sad. He thinks about his own mortality. The movie ends with its audience convinced that a robot with incredibly advanced artificial intelligence deserves to be treated as something better than a defective home appliance. Blade Runner, it bears repeating, was released in 1982. Detroit: Become Human came out in May of this year.
Again and again, Detroit attempts to pull its sci-fi storyline into the real world to make the same point Blade Runner made so many years ago. It evokes the American civil rights movement (its future Michigan features segregated shops and public transit where androids are kept to the back of city buses; one chapter is even called “Freedom March”), American slavery (the horrific abuses visited on the androids by their masters are frequent enough to become numbing), and the Holocaust (extermination camps are set up to hold revolutionary androids near the game’s finale). Others have done a great job running down the myriad ways in which Detroit fails in its evocation of the civil rights movement and class-based civil unrest. The tone-deaf comparisons it draws between its (multi-ethnic, apparently secular) robots and some of human history’s most reprehensible episodes of violent prejudice are grotesque enough on their own. But it’s worth noting that, on a dramatic level, Detroit also falls completely flat.
Its central point, presented with the satisfied air of a toddler smugly revealing that the family dog feels pain when you yank its tail, is that an android with a sophisticated sense of the world and itself deserves the same rights as any human. This is a philosophical problem that ought to have been put to bed around the time Blade Runner made the “dilemma” of android humanity part of mainstream pop culture. For decades now, audiences have watched, read, and played through stories that very persuasively argue there’s no good moral case for treating sufficiently advanced artificial intelligence—especially when housed in an independently thinking and feeling robot body—like dirt. To watch Roy Batty die in Blade Runner and feel nothing isn’t a failure of social and cultural empathy so much as a sign that the viewer is just kind of a monster. To release a video game in 2018 in which players are honestly expected to experience conflicting emotions or a sense of revelation when a completely humanlike robot is tortured or killed in cold blood is to ignore decades of the genre’s advancement.
Even outside popular art, the past few decades have seen seismic shifts in our relationship with technology that should be impossible to ignore. In the ’80s, a home computer was revolutionary. Now, we live in an era where it’s completely mundane to ask talking boxes for trivia answers and maintain digital extensions of our personae on websites accessed through portable phones. We are not as suspicious of technology as we once were. It’s a part of us now—something we live with.
This shift is pretty clear in other areas of pop culture. Westworld—one of the highest-profile sci-fi works of recent years—spent much of its first season retreading the same familiar ground as Detroit, but has found a more interesting path as it’s continued onward. While early episodes floundered with dramatically inert questions of whether sexually assaulting, torturing, and murdering lifelike, thinking, feeling robots is an acceptable premise for an amusement park, the show has since moved on from hammering home the simplistic, insultingly moralizing lesson that “treating humanoid androids badly is the wrong thing to do.” At its best, it lets characters like its standout, Bernard Lowe—a tortured robot well aware that he is a robot—bring a welcome complexity.
Bernard, in actor Jeffrey Wright’s strongest performance to date, alternates naturally between a machine’s cold, vacant-eyed calculation and the trembling pathos of an android traumatized not only by the loss of his family and the violence of the world in which he lives, but also by the knowledge that his memories are artificially coded and that his programming has led him to contribute to the horror of his surroundings. With this focus, viewers are given scenes far more philosophically troubling than the show’s earlier attempts to question whether it’s all right to kill humanlike robots for fun. In season two’s “Les Écorchés,” for example, Bernard sits through a diagnostic interrogation, tormented by park co-creator Robert Ford (Anthony Hopkins), who has apparently entered his system in the form of a viral digital consciousness. Ford flits about his mind like a possessing demon. Bernard remembers killing others while under the intruder’s control. He cries and shakes like any human racked with so much psychological pain would. “It’s like he’s trying to debug himself,” a technician notes. A digital readout of Bernard’s synthetic brain shows his consciousness is “heavily fragmented,” as if under attack from a computer virus.
Rather than dwelling on such simple moral questions, the show acknowledges, in instances like these, that its audience is willing to accept an android character like Bernard as “human” enough to deserve empathy, while remembering, too, that his mechanical nature opens up more compelling dramatic possibilities. Thankfully, Westworld’s second season has leaned further in this direction, moving (albeit at a glacial pace) toward stories about what it means for robots to embrace their freedom while being both deeply human and, thanks to their computerized nature, still fundamentally alien. By the end of the season, the earlier concern with flat moral questions has largely been swept away. The finale, while still prone to narrative cliché elsewhere, shows a greater willingness to explore how concepts like free will, mortality, and the nature of reality function for the computerized minds of its characters.
This is the sort of thing that elevates modern sci-fi, that reaffirms its potential for valuable speculation rather than leaving it a place to indulge familiar tropes and revisit nostalgic aesthetics. We see it in games like Nier: Automata, whose anime-tinged action is set in a far-future world where humanity has gone extinct, leaving behind only androids whose minds persist across centuries of samsara-like cycles of endless war against simpler machines undergoing their own intellectual awakening. We see it in Soma, which explores similar territory and turns it into soul-shaking horror with a story in which human minds have been transplanted into synthetic consciousnesses, stored immortally on computers in facilities dotting the inky depths of the ocean floor while the Earth dies far above them. Like Bernard—and like many of the other characters now freeing themselves from both their shackles as Westworld’s park “hosts” and the narrative constraints of the show’s earlier episodes—these games transcend the outdated concerns of a story like Detroit. They give us something new to chew on: concerns that are not only intellectually fuller but also more reflective of where we now stand as a technology-dependent species.
There’s no better summary of this change than the extremely belated Blade Runner sequel, Blade Runner 2049. Its predecessor was devoted entirely to convincing audiences that its supposedly inhuman replicants are worthy of empathy; it ended by asking whether we’d even be able to tell the difference between a flesh-and-blood person and a synthetic one. Compare that to 2049, where protagonist K—Ryan Gosling playing a character with a suitably product-line-style name—is shown to be an android almost from the start. The film’s plot centers, like those of Detroit and Westworld, on a fast-approaching revolution in which self-sufficient androids will overthrow their human creators, but the heart of its story is the psychology of artificially intelligent beings. K is depicted as deeply troubled: grasping for affection from the mass-market hologram AI he’s in love with, grappling with the possibility that he is the first replicant to have been born from another android, hoping to connect with his possible father, and tormented by his inability to distinguish between what’s been programmed into his synthetic mind and what’s a “real” memory.
Blade Runner 2049 considers it a given that modern audiences can empathize with this android character without prerequisite arguments—that we’re not instinctively terrified of what he represents but willing to think about what such a creation means when set against age-old concepts of love and selfhood. As a sequel to the movie that did so much to settle questions about whether a robotic being was equal to humanity, it moves its concerns forward in tandem with society itself.
There’s a scene in 2049 where K, having learned of the existence of the first replicant child to be born of two replicant parents, is ordered by his boss, Lt. Joshi (Robin Wright), to kill the child and erase this revolutionary evidence in order to maintain the world’s status quo. K says he’s never killed something “born” before. When asked why that makes him uncomfortable, he replies that being born means having a soul—and that that may be a crucial difference. “You’ve been getting on fine without one,” Joshi says. “What’s that, madam?” K asks. “A soul,” she answers.
It’s an exchange that lasts only moments, but it communicates more about the nature of an AI consciousness than Detroit manages over its dozen hours. In these few words, 2049 puts an old debate to rest while raising new questions about what it means for a machine to worry about its place in the world. K doesn’t “have a soul” in the traditional sense, but he is tortured by the knowledge that he, with his need to love and be loved, may possess something very much like it. Modern science fiction is capable of asking us to explore what it means to view technology this way. It’s able to make us consider how our sense of reality may or may not intersect with the ever more complex computers we create. It is, basically, able to do a lot more than revisit tired questions about whether the kind of highly advanced robots that populate Detroit: Become Human are worth taking seriously enough to care about in the first place.