Our brains react differently to artificial vs human intelligence

By Ed Yong
August 03, 2008
6 min read

With their latest film WALL-E, Pixar Studios have struck cinematic gold again, with a protagonist who may be the cutest thing ever committed to celluloid. Despite being a blocky chunk of computer-generated metal, it’s amazing how real, emotive and characterful WALL-E can be. In fact, the film’s second act introduces an entire swarm of intelligent, subservient robots, brimming with personality.

Whether or not you buy into Pixar’s particular vision of humanity’s future, there’s no denying that both robotics and artificial intelligence are becoming ever more advanced. Ever since Deep Blue trounced Garry Kasparov at chess in 1997, it’s seemed almost inevitable that we will find ourselves interacting with increasingly intelligent robots. And that brings the study of artificial intelligence into the realm of psychologists as well as computer scientists.

Jianqiao Ge and Shihui Han from Peking University are two such psychologists and they are interested in the way our brains cope with artificial intelligence. Do we treat it as we would human intelligence, or is it processed differently? The duo used brain-scanning technology to answer this question, and found that there are indeed key differences. Watching human intelligence at work triggers parts of the brain that help us to understand someone else’s perspective – areas that don’t light up when we respond to artificial intelligence.

[Image: WALL-E]

I, for one, welco… oh whatever

Ge and Han recruited 28 Chinese students and made them watch a scene in which a problem-solver had to crack a logical puzzle. The problem-solver was either a flesh-and-blood human or a silicon-and-wires computer (with a camera mounted on it). In either case, the task was the same – the solver wore a coloured hat and had to deduce whether it was red or blue. As clues, they were told how many hats of each colour there were in total and how many humans/computers had also been given hats. They could also see one of these peers, and the hat that peer was wearing.

It’s an interesting task, for both the human and the computer in this mini-drama were given the same information and had to make the same logical deductions to get the right answer. The only difference was the tools at their disposal – the human used good old-fashioned brainpower while the computer relied on a program.
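For readers who like to see the logic spelled out, here’s a minimal sketch of the kind of deduction both solvers faced. The article doesn’t report the exact hat counts used in the study, so the numbers, the function name and its parameters below are purely illustrative: the sketch simply checks which of the two colours is consistent with the total stock of hats and the one visible peer.

    # A rough illustration (not the study's actual stimuli): can a hat-wearer
    # deduce their own colour from the total hat stock and one visible peer?
    def can_deduce_own_hat(total_red, total_blue, n_wearers, peer_colour):
        consistent = set()
        for own in ("red", "blue"):
            used_red = (peer_colour == "red") + (own == "red")
            used_blue = (peer_colour == "blue") + (own == "blue")
            unseen = n_wearers - 2  # hat-wearers the solver cannot see
            # A colour is possible if the stock covers the two visible hats
            # and enough hats remain for every unseen wearer.
            if (used_red <= total_red and used_blue <= total_blue
                    and (total_red - used_red) + (total_blue - used_blue) >= unseen):
                consistent.add(own)
        return consistent.pop() if len(consistent) == 1 else None

    # One red and one blue hat between two wearers: seeing a red peer pins
    # your own hat down to blue. Add a spare red hat and the answer is lost.
    print(can_deduce_own_hat(1, 1, 2, "red"))   # blue
    print(can_deduce_own_hat(2, 1, 2, "red"))   # None

The point is simply that the inference itself is identical whichever kind of solver performs it – which is exactly why any difference in the onlookers’ brains is interesting.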

The students’ job, as they watched this scene, was to work out if the problem-solver was capable of divining the colour of their hat. As the volunteers reasoned their way to an answer, Ge and Han scanned their brains using a technique called functional magnetic resonance imaging (fMRI).

They found that the group who watched the humans showed greater activity in their precuneus; other studies have suggested that this part of the brain is involved in understanding someone else’s perspective. The scans also revealed a fall in the activity of the ventral medial prefrontal cortex (vMPFC), an area that helps to compare new information against our own experiences.

These two reactions fit with the results of other studies, which suggest that we understand someone else’s state of mind by simulating what they are thinking, while suppressing our own perspective so it doesn’t cloud our reasoning.

But neither the precuneus nor the vMPFC showed any change in the group who watched the computer. And the connections between the two areas were weaker in the students who watched the computer compared to those who saw the humans.

[Figure: brain responses to human vs. artificial intelligence]


The differences weren’t for lack of deductive effort; when the students were asked to work out the colour of the problem-solver’s hat for themselves, the scans showed equally strong activation in the brain’s deductive reasoning centres, regardless of whether the students were watching human or machine.

Two strategies

It seems that the technique of placing yourself in someone else’s shoes doesn’t apply to artificial intelligence. Because we are aware that robots and computers are controlled by programs, we don’t try to simulate their artificial minds – instead, Ge and Han believe that we judge them by their actions.

Indeed, when Ge and Han gave the students the simpler task of just saying which hat colour the problem-solver could see, those watching the computer showed stronger activity in the visual cortex than those watching the humans. That suggests they were paying closer attention to the details of the scene, such as where the computer’s camera was pointing. Their precuneus, however, remained unexcited.

These results may help to explain why autistic people seem to enjoy interacting with computers and playing with robots. Autistic people face social difficulties because they find it hard to put themselves in other people’s shoes. Indeed, their vMPFCs fail to tune down in the normal way, suggesting that they cannot stop their own experiences from interfering with their deductions about someone else’s state of mind. But when they interact with robots, they don’t have to do that – remember that the vMPFC’s activity didn’t drop in the students who watched the problem-solving computers either.

Ge and Han conclude that humans understand artificial intelligence and other humans using very different mental strategies. But I wonder whether their results apply to all types of AI. In this case, artificial intelligence was represented by a camera linked to a computer, neither of which actually interacted with the study’s participants. Would the results be different if the robot in question were more human in design? What would happen in the precuneus and vMPFC of someone playing with a Robosapien toy or watching WALL-E? A question for next time, perhaps.

Reference: Ge &amp; Han (2008), PLoS ONE, doi:10.1371/journal.pone.0002797

Image: Wall-E copyright of Pixar; figure by PLoS
