Westworld has met the enemy, and it is us

Infinite Scroll is a series about the increasingly blurry lines between the internet, pop culture, and the real world.

Last week, the creators of HBO’s Westworld pranked their notoriously obsessive Reddit community, teasing a video that promised to reveal all of season two’s secrets. What they released instead was a 25-minute Rickroll. You’d have been forgiven for taking them at their word, though: The show is not exactly known for comedy. The closest it gets is the misanthropic gallows humor with which the titular park’s idiot guests are portrayed, like when an android about to deliver a rousing, newly written monologue to visitors gets brained instead by some giggling rube. These are the show’s horny, violent, moneyed morons, running around tittering at the wanton abuses of power they’ve paid so lavishly to enjoy. Nowhere is this clearer than in the gamer-style terminology of “white hats” and “black hats,” representing the bifurcated paths of good and bad storylines from which visitors choose. One of these guests says that after initially visiting with his family for the more wholesome attractions, like fishing and hunting for gold, he returned later for a solo trip. “Went straight evil,” he says. “It was the best two weeks of my life.”

The black-hat/white-hat binary is a joke, a reductive narrative framework that of course doesn’t exist in the real world, and one that the show itself delights in subverting. There is no clear-cut “good guy” on the show, nor is there a “bad guy.” Everybody is merely part of a broader system, whether it’s the game design behind the park itself, the corporate structure that controls it, or the various conspiracies designed to undermine both. Even the portentously named “Man In Black” is given an elaborate backstory as a gormless white hat who turned malevolent in a quest to regain his true love. It’s heavily implied that he’s working only as a catalyst in one park creator’s intentional design; like so many others, he’s but a cog in a bigger plan to bring about a robot revolution. This idea—that a single person has meticulously designed plot lines for each of the show’s characters—is the well from which all other theories are drawn, turning the showrunner, at last, into a divine force.

These shades of gray aren’t new to the world of prestige television, where the word “antihero” is essential to a successful pitch. What’s interesting about Westworld, specifically, is the way its first season sifts through all these dark nights of the soul and corporate-takeover plots to find, in its final moments, a pure sort of morality, one that was lying in plain sight the entire time. If there is one overarching narrative to the first season, it is the revelation of the androids Dolores and Maeve as the closest thing the show has to protagonists. Evan Rachel Wood has called the entire first season “an amazing prequel and a good setup for the actual show,” implying that what could ensue from here on out is a more clearly defined drama about good versus bad, a story of androids overthrowing their masters and the system that held them in check. In this way, it parallels that millennial classic of simulation theory, The Matrix, which similarly detailed an awakening within a larger simulation, followed by the saga of its overthrow. But Westworld has flipped The Matrix on its head: Rather than humanity reclaiming reality from its machine overlords, we’re watching machines destroy the humans who have imprisoned them in a simulation. We’re rooting for the robots.

We, it bears repeating, are humans. We typically root for ourselves in these situations. This was certainly the case in 1973’s original Westworld, as well as its 1976 sequel, Futureworld. Both of these are better and weirder movies than they get credit for, by the way, with scenes and images pulled directly into HBO’s show, like the ceremonial red surgical gowns the technicians wear and the eerie imagery of the town’s robot corpses being cleaned up by floodlight. Still, compared to the show, the androids themselves are barely considered. At their most dynamic, there’s Yul Brynner’s glassy-eyed gunslinger, who stalks the amusement park grounds like a prototype Terminator. But the rest are just puppets there for fucking and killing, reflective only of the visitors’ humanity—or lack thereof—and possessing none of their own. In Futureworld, a lonely old yokel engineer is friends with a dummy automaton, and he’s mostly treated as comic relief, just a weirdo living in the pipes beneath the park.

The gap between the original Westworld and its premiere on HBO is more than four decades, a period during which our pop culture depictions of AI evolved considerably, gradually transforming from an abstract terror to the literal embodiment of our humanity. Early examples, like 1965’s Alphaville and 1968’s 2001: A Space Odyssey, portrayed artificial intelligence as an omnipresent, almost architectural evil. Over time, the precise manner of that evil changed to reflect new anxieties: The Terminator portrayed the inevitable AI takeover as a boots-on-the-ground invasion of our homeland, full of skull-mulching tanks and machine guns. WarGames treated it as the dispassionate, sentient manifestation of Cold War paranoia.

Around the turn of the millennium, films like The Iron Giant and A.I.: Artificial Intelligence juxtaposed those old notions of AI with a newfound sense of empathy—a trend that continues today in the friendly AIs of Moon and Interstellar, or even Chappie’s inversion of RoboCop. Lately, we’ve become obsessed with the notion of artificial intelligence’s capacity for love. WALL-E’s tale of robot romance holds humans in utter contempt, but Her, Tron: Legacy, and Ex Machina all tell tales of affairs between the digital and the flesh. And the most lasting image from the most recent Alien movie may be its surprisingly tender scene of android-on-android erotica.

There are counterpoints to be plucked from any group of artworks this broad: Ex Machina turns murderous by the time it’s over, and Short Circuit followed a friendly AI way back in the ’80s. But the general arc is clear. If sci-fi plays out our fears of the future, today we’re less afraid of going to war with AI than we are of falling in love with it.

It’s easy to see that love play out in the unchecked utopianism of Silicon Valley. Anthony Levandowski, one of the founders of Google’s self-driving car initiative, has already established Way Of The Future, a tax-exempt church designed around the idea that AI engineers are effectively creating a god. (“If there is something a billion times smarter than the smartest human, what else are you going to call it?” he asked Wired.) His theories are predicated upon the idea of a coming intelligence explosion, in which an AI smarter than humans begins iterating on itself, growing smarter and smarter very quickly and triggering a rapid-fire series of advancements—ostensibly prolonging human lives indefinitely and solving many of the problems that have vexed humanity. Ray Kurzweil, a leading prophet of this vision, pins 2029 as a reasonable start date for the revolution, commonly called the “singularity.” While Levandowski and Kurzweil have taken the optimistic view, some of our most prominent futurists—Bill Gates, Elon Musk, and the late Stephen Hawking among them—have warned against it, calling it a “Pandora’s box” that we should regard as a threat to humanity on the scale of climate change or nuclear proliferation.

You can slot most of the history of AI’s pop culture depictions (which also includes Star Trek and Battlestar Galactica and the pioneering fiction of Isaac Asimov) into these warring utopian/dystopian debates over its power. In these, AI is either our impending “Get out of jail free” card or the actual apocalypse. They all assume a singularity is coming, or depict one already in progress. In one of its most touching innovations, Her describes this singularity as a sort of conscious uncoupling, a benevolent AI that simply yearns to be with more of its kind.

Westworld, too, seems to be documenting that sort of AI awakening, with a pyramidal representation of consciousness that Dolores and Maeve have slowly assembled over the course of the first season. Interestingly, the show doesn’t seem to have a moralistic take on this. You could certainly view Westworld as the cautionary tale so many previous stories have told, about AI growing too advanced and turning on its creator—except for the fact that, again, we’re rooting for the robots. You could view it as a transhumanist awakening of the sort Kurzweil and other utopians anticipate—except for the fact that this awakening is agony for the robots, brought about by repeated traumas like the murder of Maeve’s daughter, the endless cycle of abuse Dolores endures, or any of the other horrific things park visitors perpetrate on a daily basis.

Instead, Westworld feels like a step removed from that debate, a logical endpoint to the saga of our pop culture relationship with AI. If we’ve moved, from 2001 to Her, from terror to outright love, Westworld pushes that relationship into more abstract territory, letting us enjoy hyper-intelligent, possibly antagonistic AI not as an inevitability, but as a fictional construct. The underlying point of the show, at least through its first season, is not a po-faced warning of our impending doom, but the more entertaining and empowering story of a hierarchy being toppled by its most trampled-upon citizens. There’s something heartening about the fact that in 2018, AI represents not foreign invasion or future shock but a functional, productive populist uprising. In this light, the show’s not so misanthropic after all. We may be the bad guys in it—but, it subtly suggests, we can be the good ones, too.

Next time: Forget fiction—when will real-world AI entertain us?
