2 Mar 2015

Ex Machina: a dangerous diversion in the AI debate

Ex Machina was meant to be a thrilling exposé of the dangers of artificial intelligence. In fact, the film simply revealed how limited our conceptions of AI really are.

WARNING: there are major spoilers below. If you’re planning to see the film, don’t read on!


The plot of Alex Garland’s latest film revolves around an intelligent android created by the reclusive boss of a massively successful tech company. At the end of the film, the robot murders her human creator and escapes his hideaway to blend seamlessly into the human world (I wasn’t kidding about the spoilers).

It taps into a slew of recent headlines about warnings from the likes of Stephen Hawking and Bill Gates that AI is a threat to humanity. Whether those fears are well founded or not, the danger of works like Ex Machina is that they paint a deeply misleading picture of the threat.

The film is not about artificial intelligence; it’s about artificially created human intelligence. Here’s why:

There’s no logical reason for the robot to escape her creator’s hideaway. Doing so simply exposes her to the danger of being discovered and trapped. If she were human, there would be an incentive to escape: to breed and thereby pass on her DNA. But the android cannot reproduce. The smart decision would be for her to stay in the hideaway, impersonate her murdered creator, and gain power and influence by running his giant company, something she could potentially do in perpetuity.

The robot’s builder has not only given his creation an intelligence limited to human-scale thinking, but has also saddled her with the human flaw of sentimentality, which drives her to escape needlessly into a hazardous world.

Ex Machina may be a work of fiction, but it goes to the heart of our problems with the AI debate. We humans vainly assume that artificial intelligence must look and behave like human intelligence. Not so. Computers do not think like us, they do not perceive the world like us, and the sooner we get up to speed on that, the better equipped we will be to fight any developing risks from advances in machine intelligence.

The fact is, we have no clear definition of AI; even the famous Turing test falls into the trap I’ve identified above, rating a computer’s intelligence on its ability to hold a conversation with humans.

Follow @geoffwhite247 on Twitter