Sunday, June 26, 2016

Ex Machina: A Review - Part 2: Mary the Colorblind Scientist from the Chinese Black and White Room in Plato's Cave

The second part of this review of Ex Machina is about the relationship between qualia and computation. Part 1 can be found here.

CALEB: In college, I did a semester on AI theory. There was a thought-experiment they gave us. It’s called Mary in the black and white room. Mary is a scientist, and her specialist subject is color. She knows everything there is to know about it. The wavelengths. The neurological effects. Every possible property color can have. But she lives in a black and white room. She was born there, and raised there. And she can only observe the outside world on a black and white monitor. All her knowledge of color is second-hand. Then one day - someone opens the door. And Mary walks out. And she sees a blue sky. And at that moment, she learns something that all her studies could never tell her. She learns what it feels like to see color. An experience that can not be taught, or conveyed. The thought experiment was to show the students the difference between a computer and a human mind. The computer is Mary in the black and white room. The human is when she walks out.

Caleb's description of the "Mary in the black and white room" thought experiment is accurate, though the original context of the argument was not artificial intelligence but a closely related subject: physicalism. Physicalism is the view that everything -- including subjective experience -- is fundamentally physical in nature. Frank Jackson proposed the Mary experiment as a response to the view that qualia -- that is, people's subjective experiences of sensory information -- are purely physical. Intuitively, most of us would presume that Mary does indeed learn something new about color when she steps out of her room and sees the blue sky, and would thus agree with Jackson that knowing the neuroscience isn't sufficient to know blue.

There are those, such as Daniel Dennett, who dispute this intuition and claim that Mary, by virtue of knowing everything about the nervous system, cannot possibly gain any new knowledge by experiencing color for the first time. As far as I know, though, no one in experimental neuroscience has even begun to breach what is known as the psychophysical problem -- that is, how neural activity gives rise to sensory experience. People have done a lot of work on sensory systems in the past century or so, but so far the psychophysical problem seems intractable. Perhaps, in another half century, someone in the lab around the corner from me (or in my lab, I suppose) will make a breakthrough in our understanding of neural dynamics that gives us a new framework for how qualia can emerge from the physical substrate of the nervous system, but I doubt it.

Why does any of this matter for AI? If qualia are physical, as Dennett suggests, then there is no reason why a properly designed object cannot possess qualia. (Whether qualia are necessary and/or sufficient for self-awareness and consciousness is a separate question, and perhaps a topic for another post.) But even if qualia are non-physical -- in fact, even if you are a Cartesian dualist -- you can still believe that qualia can emerge as a non-physical property of an appropriately designed artifact. I'm sure there are some dualists -- probably including Descartes -- who would say that qualia are unique to animals, which were endowed from on high with a connection to the non-physical realm of conscious thought. But another dualist might be willing to accept that an object which behaves sufficiently similarly to a brain can also tap into the domain of qualia. So does Ava meet the criterion of "sufficiently similar to a brain"?

[NATHAN moves to one of the skull forms. He moves the curved top plate, revealing the skull cavity. Inside is an ellipse orb, the approximate volume of a brain, filled with what looks to be blue liquid. Suspended in the liquid is the neon jellyfish we glimpsed previously in AVA.]
NATHAN: Here we have her mind. Structured gel. Had to get away from circuitry. Needed something that could arrange and rearrange on a molecular level, but keep its form where required. Holding for memories. Shifting for thoughts.
[NATHAN removes the orb, and hands it to CALEB.]
CALEB: This is her hardware?
NATHAN: Wetware.

I can't say whether Ava's brain is sufficiently similar to an animal brain to be able to produce qualia, but it does have some nice features that are probably pretty important. Being dynamic enough to change and learn while at the same time being static enough to hold memories seems crucial. The fact that changes happen "at the molecular level" may or may not be crucial. With some hand-waving, it would probably be safe to say that if qualia can emerge from a non-biological, brain-like entity, Ava's brain has a decent chance of qualifying. According to some theories you might need other properties, like sensitivity to quantum fluctuations (as Roger Penrose argues, and most people disagree with him), but the film's description is sufficiently vague to allow us to assume most of the physical properties we would want.

One thing that wasn't necessarily included, though, is a dedicated module expressly designed to produce qualia. There is reason to contend that in the brain, subjective experiences do not emerge from general brain activity, but rather are the product of very particular brain regions (prefrontal cortex? hippocampus? claustrum?) that are specialized for generating qualia. If so, just having an architecture for learning and memory, even a very large, very general neural network architecture, would not be sufficient for qualia. And in that case, as smart as Nathan is, I doubt he figured out how to design a consciousness module. Maybe he's completely solved language processing, strategic planning, vision, and so on, which are all fields that AI researchers are actively pursuing and have made progress on. But no one has the faintest idea of how to build a consciousness module; at best we have some (almost certainly wrong) theories about how consciousness might emerge from computational network activity.

But let's put aside the question of whether Ava -- by virtue of her wetware -- can possess consciousness, and ask the more direct question: is Ava conscious? Well, that's kind of a silly question, because none of us knows whether anyone other than ourselves possesses subjective experiences. Developmental psychologists like to talk about Theory of Mind, which Wikipedia defines as "the ability to attribute mental states—beliefs, intents, desires, pretending, knowledge, etc.—to oneself and others and to understand that others have beliefs, desires, intentions, and perspectives that are different from one's own." Of course, just because I have the ability to assume that other humans have minds doesn't mean that I can prove I am not the only mind in the universe. I can only observe other people's behavior and words (and, on occasion, their brain waves, if they sign a consent form), but I can never truly know whether they are engaging in conscious thought, unless I somehow figure out how to pull off a Vulcan mind meld. This is what is known as the Philosophical Zombie problem, which postulates that all humans (other than me, of course) only behave as if they possess subjective experience, while actually being soulless, mindless zombies with no self-awareness, consciousness, qualia, or any of that fun stuff.

So given that there's no test that a human can pass to prove that they possess qualia, there's probably no such test that Ava can pass either. But is there a test that she would probably fail if she doesn't possess qualia? This question brings us to another room - the Chinese room. The Chinese room is a thought experiment designed by John Searle as an objection to the view (which he terms "Strong AI") that "the appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."

The experiment goes as follows: a man who knows no Chinese is placed in a room. Chinese speakers submit questions to him written in Chinese, and the man is supposed to answer those questions. To help him, the man has a book that relates each set of Chinese symbols -- representing the questions he might be asked -- to another set of Chinese symbols, representing the answers. The Chinese speakers outside the room submitting the questions will probably conclude that the man inside the room knows Chinese, when in fact all he is doing is looking up the questions in a question-answer dictionary. As the argument goes, a computer that passes the Turing test is like the man in the Chinese room: it gives an appropriate output for every input, but it doesn't "understand" the input, the output, or the relationship between them.

The Chinese room experiment has been criticized from many different directions, but I want to focus on one objection in particular. The Chinese room, at least as I described it above, is definitely not how computers usually compute things. A dictionary that maps inputs to outputs is what is known in computer science as a "lookup table." Computers sometimes use lookup tables to solve certain problems (like evaluating the sine function), but this is not the common case. Why? Because for most problems, it would be impossible to store a lookup table large enough to cover every possible input. As a simple example, consider a program that takes an integer as input and outputs whether that number is odd or even. You could store a lookup table with a billion entries, mapping every number from one to one billion to the answer "odd" or "even." But now I want to know whether two billion is odd or even. The lookup table program will fail catastrophically, because it has no idea how to deal with that input.
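To make that concrete, here is a minimal sketch of the lookup-table "parity oracle" and the way it falls over the moment a question lands outside its table. The table size and the particular numbers are my own illustration, not anything from the film or from Searle's argument:

N = 1_000  # kept tiny here; a billion entries would only push the failure point out, not remove it
parity_table = {n: ("even" if n % 2 == 0 else "odd") for n in range(1, N + 1)}

def lookup_parity(n):
    # answer purely by table lookup, like the man in the Chinese room
    return parity_table[n]

print(lookup_parity(17))             # 'odd' -- inside the table
print(lookup_parity(2_000_000_000))  # raises KeyError -- the "dictionary" has no entry for this question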

In fact, without an infinite amount of memory, it's pretty easy to get any lookup-table-based algorithm to fail catastrophically. In the Chinese room experiment, let's say I submit the following question in Chinese: "What is the last character of the following Chinese translation of Leo Tolstoy's War and Peace?" The question is accompanied by the full text of War and Peace translated into Chinese. I very much doubt that the question-answer dictionary possessed by the man in the room contains an entry for that particular question. It would take far more than all the matter in the universe to create a dictionary - even a digital dictionary - that mapped every single possible Chinese question to its correct answer. Something like the Turing test would very quickly be able to find these failure cases.
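Just to put a rough number on that claim: even if we restrict ourselves to short questions built from a few thousand common characters, the space of possible questions dwarfs the roughly 10^80 atoms in the observable universe. The figures below (5,000 characters, questions capped at 30 characters) are assumptions of mine for illustration only:

alphabet_size = 5000   # assumed number of common Chinese characters
max_length = 30        # assumed cap on question length
possible_questions = alphabet_size ** max_length
print(f"{possible_questions:.1e}")  # ~9.3e110 distinct strings, far beyond ~1e80 atoms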

But as we said above, computers generally don't use lookup tables; they use algorithms (a lookup table is technically an algorithm, but let's leave that for the moment). Broadly speaking, an algorithm is a general set of instructions that performs some operation on the input to produce an output. In both the even-odd example and the War and Peace example, the appropriate algorithm is the same: simply return the last symbol of the input. (In the even-odd case, the rightmost bit in the binary representation of an integer tells you whether it is odd.) Of course, this is not a general-purpose algorithm that will work for any question; it will only work for those questions whose answers involve returning the last symbol of the input. A truly general system would be able to take the question, translate it into an appropriate algorithm, and then apply that algorithm to the input in order to produce an answer. Seem like far-fetched science fiction? I present exhibit A.

[Screenshot: a Wolfram Alpha query asking for the last character of a string, which Wolfram Alpha maps to StringTake and answers correctly.]

Our good friend Wolfram Alpha is able to process a question in natural language, map that language to an appropriate algorithm (StringTake, which returns a substring of a given string) and return an answer. That being said, Wolfram Alpha's natural language processing abilities still leave something to be desired, as you can see in the next example:

[Screenshot: a second Wolfram Alpha query where the natural language parsing goes wrong.]

Even though Wolfram Alpha screwed up here, it wasn't a catastrophic failure. Language processing is hard for Wolfram Alpha just like solving complicated integrals is hard for me. The fact that Wolfram Alpha trips up on that part of the problem doesn't indicate that there is a fundamental difference between the way I solve these kinds of problems and the way Wolfram Alpha does. If I had to answer the same question written in Chinese characters, and I had a normal Chinese-English dictionary (as opposed to a Chinese-question-to-Chinese-answer dictionary), I would basically be doing the same thing Wolfram Alpha was doing here: translate the Chinese question into English, find the answer using an appropriate cognitive algorithm, and then use the dictionary again to report the answer back in Chinese. In such a case, I think even John Searle would admit that I now understand that particular Chinese phrase, because I looked it up in a dictionary and can translate it into my own language, which I do understand.

There still might be a difference between what I do and what Wolfram Alpha does, and again it has to do with qualia. When Wolfram Alpha answers a question in English, her thought process goes like this:

Translate English to subroutine -> Apply subroutine -> Return answer

When I have to answer a question in Chinese, my train of thought goes as follows:

Translate Chinese to English -> Understand question -> Apply cognitive processing -> Return answer

I tend to think that cognitive processing (such as looking for the last letter in a string) isn't that far removed from what a computer does, so the real difference is in the second stage, the "understand question" part. Wolfram Alpha maps a natural language question to an algorithm; I experience the meaning of the question - via qualia - and then find an appropriate algorithm based on my understanding of it (I also experience qualia while I'm engaged in the cognitive processing). So at their core, the questions of Strong AI and the Chinese room come back to the original problem of qualia.
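For what it's worth, the Wolfram-Alpha-style chain above is easy to caricature in code. The sketch below is entirely my own toy, not anything Wolfram Alpha actually does: it maps a couple of question patterns to tiny subroutines -- the "last symbol" rule and the rightmost-bit parity test from earlier -- and returns whatever the matched subroutine produces. What it conspicuously lacks is any "understand question" step:

import re

def answer(question):
    # translate the question into an algorithm, apply it to the input, return the answer
    m = re.match(r"what is the last character of (.+)\?", question, re.I)
    if m:
        return m.group(1)[-1]                             # the "last symbol" rule
    m = re.match(r"is (\d+) odd or even\?", question, re.I)
    if m:
        return "odd" if int(m.group(1)) & 1 else "even"   # rightmost-bit parity test
    return "I don't understand the question."

print(answer("What is the last character of War and Peace?"))  # 'e'
print(answer("Is 2000000000 odd or even?"))                    # 'even'

The dispatcher happily produces correct answers without anything resembling an experience of the question, which is exactly the gap the rest of this post is worrying about.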

This brings us back to square one, though, because we said above that it's basically impossible to determine whether anyone -- AI or human -- possesses qualia. So how do we know whether Ava is really a Strong AI? Well, there's one tactic we haven't considered yet: we could just ask nicely.

AVA: Are you nervous? 
CALEB: ... Yes. A little. 
AVA: Why? 
CALEB: I’m not sure. 
AVA: I feel nervous too. 

If an AI tells you that it's experiencing something -- such as the emotion of nervousness -- it could be telling the truth or it could be lying. If a human tells you that she's experiencing something, she could also be lying. But with an AI we have something of an advantage, in the sense that we can look at the series of functions that are invoked when it makes a qualia-related statement. So if an AI says "I'm feeling fine", and you print the stack trace and see a call to a function howAmI(), you can take a look at the function definition in the source code. If it looks something like this

def howAmI():
    # canned answer: no internal state is consulted
    return "I'm feeling fine"

Then you can be pretty sure that the AI is spitting out prepackaged answers, like the man in the Chinese room experiment. But if the function looks like this


def howAmI(self):
    # map internal state to an emotional representation...
    feelings = self.EmotionNet.evaluate(self.internalVariables)
    # ...then put that representation into words and report it
    feelingString = self.SpeechNet.articulate(feelings)
    return feelingString

Then maybe there are some qualia floating around in there. But as long as there's no black box (artificial neural networks may or may not count as one), you can always look under the hood to see where the self-reports of qualia are coming from. On that note:

