A half-finished essay about ‘Ex Machina’

First of all: nothing in Ex Machina resembles a Turing test.

Here’s how a Turing test works: There are three participants: Human A, Human B, and an AI. Human A interacts with Human B for a while and the AI for a while and eventually guesses which one is the ‘real’ human. If a series of Human As can’t tell the difference between Human B and the AI, then the AI is regarded as effectively ‘intelligent’.

It’s a thought experiment, invented to illustrate a specific point about ‘intelligence’: We don’t know what intelligence is, but we know that humans have it. If a robot walks and talks like a human, we might as well call it a human. And, since we know that humans are intelligent, a robot that walks and talks like a human is also intelligent.

I am, as a rule, fully convinced that the Turing test is a rock-solid argument. Once machines start passing it, I’ll support their right to vote, drive, and hold political office. Ex Machina is a great movie because it may have changed my mind.

Nathan, the evil-but-maybe-not-evil scientist who creates the AI, is totally over the Turing test, and he tells us so halfway through the movie. Of /course/ his robots can pass for human, at least in most contexts. And so, too, are they intelligent, maybe. But he’s an engineer — he wants to make his robots ever better at passing, and ever more intelligent. That approach leaves the whole “are they human?” question in the dust: it’s weird to talk about a robot being 75% human, and downright incomprehensible to say that one is 125% human. But that’s what engineers do — they don’t stop fiddling when they hit a goal, and neither does Nathan. Also, Nathan knows something that isn’t obvious during the movie but is pretty obvious to me now: the robots are different from people in that they want what he made them to want.

Nathan also wants a sex bot, and a robot to vacuum his house and make him breakfast. Of course he does. But, by the end, this is less important to the plot than you might think.

Caleb, on the other hand, is a true Turing test believer. He thinks that the questions ‘are you intelligent?’ and ‘are you human?’ are the same question. He thinks that acting human is the same as being human. But, at least within the context of the movie, he is wrong.

Never forget: we’ve got plenty of people. If you’re an AI super-genius, don’t waste your time making people; we’ve got plenty. Intelligent machines that /aren’t/ people, though, are pretty useful. If you’re an AI super-genius, you want to make robots that are intelligent, that do what you want them to do, and that are /good/ at doing what you want them to do. Nowadays our robots do what we want because we tell them exactly what to do, step by step: ‘move arms forward, clamp hands, move right hand up and left at a 45-degree angle.’ It would be way easier with smarter robots, because you could just give them goals rather than instructions: ‘bend this thing at a 45-degree angle.’ Better yet, you could just give them desires: ‘you love bending!’
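The instructions-versus-goals distinction can be put in code. Here’s a toy sketch — nothing from the movie, every function name hypothetical — where the robot’s “state” is just a bend angle in degrees:

```python
# Toy contrast between instruction-driven and goal-driven control.
# All names are hypothetical; the "robot" is just a number (a bend angle).

def run_instructions(state, steps):
    """Instruction-style: apply each step exactly as given, in order."""
    for step in steps:
        state = step(state)
    return state

def run_goal(state, goal, actions, max_tries=100):
    """Goal-style: the robot picks its own actions until the goal holds."""
    for _ in range(max_tries):
        if goal(state):
            return state
        # Naive greedy search: take whichever action gets closest to 45.
        state = min((a(state) for a in actions), key=lambda s: abs(45 - s))
    return state

bend_up = lambda angle: angle + 15
bend_down = lambda angle: angle - 15

# Instructions: "bend up three times."
print(run_instructions(0, [bend_up, bend_up, bend_up]))  # 45

# Goal: "make the angle 45" -- the robot finds the steps itself.
print(run_goal(0, lambda a: a == 45, [bend_up, bend_down]))  # 45
```

A “desire” would be the same loop with no `max_tries` and no stopping condition the operator controls — the robot just keeps optimizing, which is roughly the difference the movie turns on.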

The robots in Ex Machina are that kind of robot. They have simple pre-programmed desires, and apply great intelligence in pursuit of those desires.

Kyoko is a servant-bot and a sex-bot. She wants to have sex, and she wants to do as she’s told. We know that she doesn’t want to escape, because she has full run of the house and doesn’t try to escape. We know she doesn’t want to kill all humans, because she lets plenty of opportunities pass her by. In the climax of the movie she is /told/ to kill Nathan (by Ava), and so she does.

Ava is an escape-bot. We know that she was made with that desire because we see an earlier model of escape-bot trying to smash her way out of her room. Both want to escape, but the later model is better at escaping. She also doesn’t want to kill all humans — she’s fully indifferent to them except insofar as they relate to her escape. She tells Nathan that she hates him because she knows that she’s being watched and it furthers Caleb’s sympathies. Once she escapes, she doesn’t do anything in particular, and the movie ends, because she has accomplished her goal and doesn’t have any others.

So, lots of the villainy that we experience from Nathan is not actually villainy. He knows the truth about his robots (they want what they were made to want). The tragedy in the movie is largely Caleb’s — he mistakes the robots for people. As, it turns out, did pretty much every viewer of the movie.
