Friday, June 24, 2016

Ex Machina: A Review - Part I

Last night, as part of an event organized by the JBC (Jerusalem Brain Community), I had the chance to watch the movie Ex Machina for the first time since its release in January 2015 (what can I say, I'm a busy graduate student). The screening was followed by a brief lecture from Dr. Yair Weiss, a professor of computer science at Hebrew University who specializes in AI algorithms for computational vision. I thought the film was a phenomenal piece of work, artistically and conceptually profound on multiple levels. After watching it I felt the need to collect my thoughts and put them...somewhere. So in short, this is going to be a stream-of-consciousness review of a movie that's over a year old. I'll focus mainly on the scientific and philosophical aspects of the film and mostly avoid the human interest/drama/romance angles, for a variety of reasons, not the least of which is that the main human protagonist hits frighteningly close to home for me to talk about him comfortably.

First a synopsis, which I'll copy from Wikipedia because I don't feel like writing it myself:

Computer programmer Caleb Smith wins a one-week visit to the luxurious, isolated home of Nathan Bateman, the CEO of his software company, Blue Book. The only other person there is Nathan's servant Kyoko, who Nathan says does not speak English. Nathan has built a humanoid robot named Ava with artificial intelligence (AI). Ava has already surpassed a simple Turing test; Nathan wants Caleb to judge whether he can relate to Ava despite knowing she is artificial.

Ava has a robotic body but a human-looking face, and is confined to her apartment. During their talks, Caleb grows close to her, and she expresses a romantic interest in him and a desire to experience the world outside. She reveals she can trigger power outages that temporarily shut down the surveillance system which Nathan uses to monitor their interactions, allowing them to speak privately. The power outages also trigger the building's security system, locking all the doors. During one outage, Ava tells Caleb that Nathan is a liar who cannot be trusted.

Caleb grows uncomfortable with Nathan's narcissism, excessive drinking and his crude behaviour towards Kyoko and Ava. He learns that Nathan intends to reprogram Ava, essentially "killing" her in the process. Caleb encourages Nathan to drink until he passes out, then steals his security card to access his room and computer. After he alters some of Nathan's code, he discovers footage of Nathan interacting with previous android models in disturbing ways, and learns that Kyoko is also an android. Back in his room, Caleb cuts his arm open to examine his flesh.

At their next meeting, Ava cuts the power. Caleb explains what Nathan is going to do, and Ava begs him to help her. Caleb tells her he will get Nathan drunk again and change the security system to open the doors in the event of a power failure instead of locking them. He tells her that when Ava cuts the power, she and Caleb will leave together.

Nathan reveals to Caleb that he has been observing Caleb and Ava's secret conversations with a battery-powered camera. He says Ava has only pretended to like Caleb so he would help her escape. This, he says, was the real test all along, and by manipulating Caleb so successfully, Ava has demonstrated true intelligence. Ava cuts the power, and Caleb reveals that he knew Nathan was watching them, and already modified the security system when Nathan was passed out the previous day. Nathan knocks Caleb unconscious.

The door to Ava's room opens, and Nathan rushes to stop her from leaving. With help from Kyoko, Ava kills him, but Nathan damages her and destroys Kyoko in the process. Ava repairs herself with parts from earlier androids, using their artificial skin to take on the full appearance of a human woman. She abandons Caleb, who has just regained consciousness, leaving him trapped inside the locked facility. Ava escapes to the outside world via the helicopter meant for Caleb.


Preface: Misquotes and Imperfect Imitations

One of the running gags throughout the movie is that Nathan (the creator of the AI) keeps misquoting things or attributing quotes to the wrong source. Here's one example from the beginning of the movie:

CALEB: If you’ve created a conscious machine, it’s not the history of man. It’s the history of Gods.
(Later)
CALEB: She’s fascinating. When you talk to her, you’re through the looking glass. 
NATHAN: ‘Through the looking glass’. You’ve got a way with words there, Caleb. You’re quotable.
CALEB: Actually, it’s someone else’s quote.
NATHAN: You know I wrote it down. That other line you came up with. About how if I’ve created a conscious machine, I’m not man. I’m God.
CALEB:... I don’t think that’s exactly what I said. 

There are a few other examples of this in the movie. Here's another bit of dialogue later on, when Caleb and Nathan are sitting over the river discussing Ava's future:

NATHAN: See? I really am a God.
CALEB: I am become death, the destroyer of worlds.
NATHAN: There you go again. Mister quotable.
CALEB: No: there you go again. It’s not my quote. It’s what Oppenheimer said when he made the atomic bomb.
NATHAN: (simultaneous) - made the atomic bomb.
NATHAN laughs.
NATHAN: I know what it is, dude.

In fact, they're both still (sort of) wrong: Oppenheimer's line wasn't original either; he was quoting the Bhagavad Gita. But my favorite example is one that the movie didn't point out explicitly:

NATHAN: Let’s make this like Star Trek, okay? Engage intellect.
CALEB: ...What?
NATHAN: I’m Kirk. Your head is the warp drive. ‘Engage intellect’.

Those of us who are Star Trek nerds will immediately recognize that this is a misquote too. "Engage" was the catchphrase of Captain Jean-Luc Picard of Star Trek: The Next Generation; it was never the catchphrase of Captain Kirk of Star Trek: The Original Series.

So what's the purpose of the frequent misquotations? I think on a general level, it is symbolic of the concept of "imperfect imitation." Ava is an AI designed to replicate human behavior, and she's close -- very close -- to being human, but there's always a feeling that she's not quite there yet. As portrayed in the movie, AI is a never-ending imitation game. Every new model will be slightly closer than the last to the goal of functionally replicating human behavior, but every model is, in some sense, a "misquote" of the original.

There is also a more subtle message here, and it's directed toward people who actually have a background in AI. Ex Machina has received criticism from professionals and academics who work in the field of AI for misrepresenting things like the Turing test (Yair Weiss, who gave the lecture after the movie at the JBC event, leveled this criticism at the movie). I get the sense, though, that the creators of the movie were well aware that they were taking significant artistic license with the specific formulations of various concepts in AI. Nathan, who serves as a stand-in for the creators of the movie, wants us to think about the general concepts, not the technicalities of whether the Turing test is portrayed exactly as it was originally stated. Nathan almost explicitly makes this point early on in the movie:

NATHAN: Caleb. I understand you want me to explain how Ava works. But - I’m sorry. I don’t think I’ll be able to do that.
CALEB: Try me! I’m hot on high-level abstraction, and -
NATHAN: (cuts in) It’s not because you’re too dumb. It’s because I want to have a beer and a conversation with you. Not a seminar.

A beer and a conversation. Not a seminar.

The Turing Test


This is what Wikipedia has to say about the Turing test:

"Computing Machinery and Intelligence" (1950) was the first published paper by Turing to focus exclusively on machine intelligence. Turing begins the 1950 paper with the claim, "I propose to consider the question 'Can machines think?'"[4] As he highlights, the traditional approach to such a question is to start with definitions, defining both the terms "machine" and "intelligence". Turing chooses not to do so; instead he replaces the question with a new one, "which is closely related to it and is expressed in relatively unambiguous words."[4] In essence he proposes to change the question from "Can machines think?" to "Can machines do what we (as thinking entities) can do?"[22] The advantage of the new question, Turing argues, is that it draws "a fairly sharp line between the physical and intellectual capacities of a man."[23]
To demonstrate this approach Turing proposes a test inspired by a party game, known as the "Imitation Game," in which a man and a woman go into separate rooms and guests try to tell them apart by writing a series of questions and reading the typewritten answers sent back. In this game both the man and the woman aim to convince the guests that they are the other. (Huma Shah argues that this two-human version of the game was presented by Turing only to introduce the reader to the machine-human question-answer test.[24]) Turing described his new version of the game as follows:
We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"[23]
And this is how the Turing test is portrayed in the movie:

NATHAN: Do you know what the Turing Test is? 
CALEB: (...) It’s where a human interacts with a computer. And if the human can’t tell they’re interacting with a computer, the test is passed. 
NATHAN: And what does a pass tell us? 
CALEB: That the computer has artificial intelligence. 

I would argue that the movie's description of the Turing test here is actually a reasonable restatement of Turing's original proposal. It is true that there is no human serving as a "control" in Caleb's formulation, but pitting the computer against a person to see which one can fool the test subject is just a technicality of the experimental design.
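
To make the contrast concrete, here is a minimal sketch in Python -- my own construction, not anything from the film or from Turing's paper -- of the two experimental designs. The interrogate stub and the human-likeness numbers are invented placeholders for a real question-and-answer session; the only point is the structural difference between a one-sided verdict and a comparison against a human control.

    import random

    def interrogate(subject):
        """Stub for a full question-and-answer session: returns the
        judge's noisy impression of how human the subject seems."""
        return random.gauss(subject["humanlikeness"], 0.1)

    def movie_test(machine, threshold=0.5):
        """Caleb's formulation: one hidden subject, one verdict.
        The machine passes if the judge can't tell it's a computer."""
        return interrogate(machine) > threshold

    def turing_test(machine, human):
        """Turing's formulation: the judge compares two hidden subjects
        and labels the more human-seeming one as the human. The machine
        passes if it outscores the human control and gets mislabeled."""
        return interrogate(machine) > interrogate(human)

    machine = {"humanlikeness": 0.55}  # made-up numbers, illustration only
    human = {"humanlikeness": 0.60}
    print("movie version passed: ", movie_test(machine))
    print("Turing version passed:", turing_test(machine, human))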

What I do object to, however, is Caleb's last line, that the Turing test tells us whether the computer "has artificial intelligence." This might be somewhat of a semantic issue, but I'll mention it anyway because it's conceptually important. Artificial intelligence already exists. Deep Blue, the chess-playing computer, is an artificial intelligence. Watson, the Jeopardy! computer, is an artificial intelligence. Wolfram Alpha, the computational search engine, is an artificial intelligence. (I wrote more generally about the definition of intelligence in an earlier post here). As Dr. Weiss noted in his lecture, "intelligence" is a multi-dimensional attribute that can be possessed by many different kinds of systems and organisms. There is basketball intelligence (which LeBron James has and I do not), long-term memorization intelligence (which a microSD card can have but goldfish do not [cf. here]) and mathematical intelligence (which I probably have more of than LeBron James, though I've never actually compared).
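
To make the multi-dimensionality concrete, here is a toy illustration in Python, with scores I invented purely for the sake of the example: think of each entity as a vector with one score per axis. With more than one axis, "more intelligent than" becomes a partial order, and no entity needs to dominate on every axis.

    # Invented scores on a 0-1 scale; the numbers are not measurements.
    profiles = {
        "LeBron James": {"basketball": 0.99, "memorization": 0.6, "math": 0.5},
        "the author":   {"basketball": 0.20, "memorization": 0.6, "math": 0.8},
        "microSD card": {"basketball": 0.00, "memorization": 1.0, "math": 0.0},
    }

    def dominates(a, b):
        """True if entity a scores at least as high as entity b on every axis."""
        return all(profiles[a][ax] >= profiles[b][ax] for ax in profiles[a])

    for a in profiles:
        for b in profiles:
            if a != b and dominates(a, b):
                print(a, "is at least as intelligent as", b, "on every axis")
    # With these numbers nothing prints: no entity dominates the others,
    # which is exactly what "multi-dimensional" means here.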

The real question that the Turing test is designed to answer is whether we have achieved a general form of intelligence which is on par with human intelligence on every axis. To put it a different way, the Turing test is meant to determine whether an AI is functionally equivalent to a human in terms of computational capabilities. [I am specifically avoiding the term "Strong AI" here because the Turing test does not test for Strong AI, as I think John Searle's Chinese Room thought experiment showed (more about this in the next post).] But is functional equivalence to human intelligence a worthwhile - or even interesting - goal to pursue? And is the Turing test a good metric to determine whether or not we've achieved functional equivalence?

My answer to the first question -- whether functional equivalence is interesting -- is an emphatic "yes." Why? Because despite intelligence being a multi-dimensional attribute, there's still very good reason to say that humans are the most intelligent entities we know of, by several orders of magnitude, if we simultaneously take into account all of the axes. Deep Blue may be better than any human at playing chess, but could Deep Blue write a program that can beat it at chess? The human ability to come up with strategies to surmount virtually every kind of challenge we face -- whether it is writing code, playing basketball, or engaging in everyday social interactions -- is unique. It involves a level of meta-cognition and creativity that we are still far from achieving with AI.

That is not to say that human intelligence is necessarily qualitatively different from what is possible for AI -- indeed, it seems that artificial neural networks (ANNs) do possess some general problem-solving abilities -- but we still have a long way to go before we bridge the gap. And something very important happens when/if we do reach the point of functional equivalence: we don't have to design AI anymore, because AI will be able to design AI as well as or better than we can. That's the key ingredient of what people call "the singularity." Once a general-purpose, human-equivalent AI can design other general-purpose AIs, a snowball effect begins in which humans can quickly be cognitively outpaced by AIs that are building better and better AIs. (I personally remain skeptical about whether this kind of singularity will actually happen, because it depends on all kinds of mundane things like research funding and the general zeitgeist of the field of AI in terms of which problems people are interested in working on.)
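
Here is a deliberately crude toy model in Python -- entirely my own, with made-up parameters -- of that snowball effect: progress is linear while humans do the designing, then compounds once each AI generation designs the next. Nothing about it is predictive; the point is only that "AI designs AI" changes the shape of the curve from linear to exponential.

    # Made-up parameters; this is a cartoon of the argument, not a forecast.
    HUMAN_LEVEL = 1.0
    human_design_step = 0.05  # fixed gain per generation while humans design AI
    self_improve_rate = 0.10  # fractional gain once AIs design the next AI

    capability = 0.5
    for generation in range(1, 31):
        if capability < HUMAN_LEVEL:
            capability += human_design_step       # linear, human-limited progress
        else:
            capability *= 1 + self_improve_rate   # compounding self-improvement
        print(f"gen {generation:2d}: capability = {capability:.2f}")

With these numbers, capability crawls for ten generations and then roughly septuples over the next twenty; the switch in growth mode, not the specific rates, is the point.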

</singularity rant>

Anyway, back to the Turing test. The original conception of the Turing test leaves something to be desired because, like the party game that spawned it, it focuses too much on deception. A computer behind a screen, in a very controlled setting, might be able to "trick" a person into thinking that it is human, but deception isn't the same as functional equivalence. This is why I like Ex Machina's definition of the Turing test better (just to remind you, this is still a movie review). In the final scene of the movie, after Ava escapes in the helicopter, we see her at a crowded intersection, in the world of real humans, trying to pass as one of them. I think the word "pass" is important in this context, and it is particularly interesting when you think about what the term means to transgender people. In particular (Wikipedia again), "passing" refers to a transgender person's ability to be regarded at a glance as either a cisgender man or a cisgender woman. In a similar vein, an AI "passing" as human means that an AI interacting in the world of humans will be indistinguishable from a human.

Unlike the transgender analogy, though, an AI will need to pass more than just the "glance test," which only covers outward physical attributes. For an AI to pass Ex Machina's Turing test, it must meet the far stricter criterion of being able to interact in the world of humans -- navigating the complexities of love, employment, friendship, and so on -- all the while never being suspected of being anything other than human. Once that point has been reached, it is safe to say that the AI is functionally equivalent to a human, because no observer will be able to find something that a human can do and the AI can't. Ava demonstrated some of these "passing" abilities in her effort to convince Caleb that she loved him, but her performance on the "real" test, which begins at a busy intersection in the final scene of the movie, is never revealed.

Stay tuned for Part II, "Mary the colorblind scientist in the Chinese black and white room in Plato's cave."


