The machine that identifies images from brain activity alone

By Ed Yong
March 05, 2008
5 min read

Modern brain-scanning technology allows us to measure a person’s brain activity on the fly and visualise the various parts of their brain as they switch on and off. But imagine being able to literally see what someone else is thinking – to be able to convert measurements of brain activity into actual images.
It’s a scene reminiscent of the ‘operators’ in The Matrix, but this technology may soon stray from the realm of science-fiction into that of science-fact. Kendrick Kay and colleagues from the University of California, Berkeley have created a decoder that can accurately work out which image, from a large set, an observer is looking at, based solely on a scan of their brain activity.
The machine is still a while away from being a full-blown brain-reader. Rather than reconstructing what the onlooker is viewing from scratch, it can only select the most likely fit from a set of possible images. Even so, it’s no small feat, especially since the set of possible pictures is both very large and completely new to the viewer. And while previous similar studies used very simple images like gratings, Kay’s decoder has the ability to recognise actual photos.

[Image: the visual cortex]


Training the brain-reader
To begin with, Kay calibrated the machine with two willing subjects – himself and his research partner, Thomas Naselaris. The duo painstakingly worked their way through 1,750 photos while sitting in an fMRI scanner, a machine that measures blood flow in the brain to work out which regions are active. The scanner focused on three sites (V1, V2 and V3) within the part of the brain that processes images – the visual cortex (pictured above).
The neurons in the visual cortex are all triggered by slightly different features in the things we see. Each has a different ‘receptive field’ – that is, it responds to a slightly different section of a person’s field of vision. Some are also tuned to specific orientations, such as horizontal or vertical lines, while others fire depending on ‘spatial frequency’, a measurement that roughly corresponds to how busy and detailed a part of a scene is.
By measuring these responses with the fMRI scanner, Kay created a sophisticated model that could predict how each small section of the visual cortex would respond to different images.
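For the technically minded, here is a rough sketch of what such a model looks like in code. It is a toy Python version built on Gabor filters – the kind of receptive-field model the paper is based on – but the function names, parameters and the simple weighted sum are illustrative stand-ins rather than the actual fitted model.

```python
import numpy as np

def gabor_filter(size, cx, cy, theta, wavelength, sigma):
    """One Gabor filter: responds to features at a particular location
    (cx, cy), orientation (theta) and spatial frequency (1/wavelength)."""
    y, x = np.mgrid[0:size, 0:size].astype(float)
    xr = (x - cx) * np.cos(theta) + (y - cy) * np.sin(theta)
    yr = -(x - cx) * np.sin(theta) + (y - cy) * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def predict_voxel_response(image, filters, weights):
    """Predicted fMRI response of one voxel: a weighted sum of how
    strongly the image drives each filter in the bank."""
    features = np.array([abs(np.sum(image * f)) for f in filters])
    return features @ weights

# e.g. a small bank of filters at different positions and orientations
# for 64x64 images (the real model used a far larger bank)
filters = [gabor_filter(64, cx, cy, th, 8.0, 4.0)
           for cx in (16, 48) for cy in (16, 48)
           for th in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
```

In the study itself, the weights for every voxel were fitted from the responses to the 1,750 training photos.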
Kay and Naselaris tested the model by looking at 120 brand-new images while, once again, recording their brain activity throughout. To account for ‘noisy’ variations in the fMRI scans, they averaged the readouts from 13 trials before feeding the results into the decoder.
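Why average? Because the noise varies from scan to scan while the underlying signal does not, so averaging 13 trials shrinks the noise by roughly the square root of 13. A quick toy demonstration – every number here is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(size=500)                            # the 'true' voxel pattern
trials = signal + rng.normal(scale=2.0, size=(13, 500))  # 13 noisy scans of it

single = trials[0]
averaged = trials.mean(axis=0)

print(np.corrcoef(signal, single)[0, 1])    # ~0.45: one scan is noisy
print(np.corrcoef(signal, averaged)[0, 1])  # ~0.87: averaging recovers far more
```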
The duo then showed the 120 new images to the decoder itself, which used its model to predict the pattern of brain activity that each one would trigger. The programme paired up the closest matches between the predicted and actual brain patterns, and in doing so worked out which images Kay and Naselaris had looked at.
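The matching step itself is conceptually simple: compare the measured pattern against each candidate image’s predicted pattern and pick the best fit. A minimal sketch, assuming correlation as the match score (a natural choice, though the paper’s exact scoring may differ):

```python
import numpy as np

def identify(measured, predicted_patterns):
    """Return the index of the candidate image whose predicted
    voxel pattern best correlates with the measured scan."""
    scores = [np.corrcoef(measured, p)[0, 1] for p in predicted_patterns]
    return int(np.argmax(scores))
```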
Success!
It was incredibly successful, correctly identifying 92% of the images that Kay looked at and 72% of those viewed by Naselaris. Obviously, using the average of 13 scans is a bit of a cheat, and if the machine were ever to decode brain patterns in real-time, it would need to do so based on a single trial. Fortunately, Kay found that this is entirely feasible, albeit with further tweaking. Even when he fed the decoder fMRI data from a single trial, it still managed to correctly pick out 51% and 32% of the images respectively.
The decoder could even cope with much larger sets of images. When Kay repeated the experiment with a set of 1,000 pictures to choose from, the machine’s success rate (using Kay’s brain patterns) only fell from 92% to 82%. Based on these results, the decoder would still have a one in ten chance of picking the right image from a library of 100 billion images. That’s more than a hundred times the number of pictures currently indexed by Google’s image search.
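That extrapolation sounds wild, but it follows from how identification degrades with library size: each extra candidate only adds a small chance of out-scoring the true image, so accuracy falls off slowly rather than collapsing. A toy simulation makes the point – all the numbers here are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, noise = 100, 3.0   # invented numbers, not from the study

def accuracy(set_size, n_runs=200):
    """Fraction of simulated trials where the true image wins the match."""
    preds = rng.normal(size=(set_size, n_voxels))   # predicted patterns
    hits = 0
    for _ in range(n_runs):
        truth = rng.integers(set_size)              # image actually 'viewed'
        measured = preds[truth] + rng.normal(scale=noise, size=n_voxels)
        hits += int(np.argmax(preds @ measured) == truth)
    return hits / n_runs

for n in (120, 1000, 10000):
    print(n, accuracy(n))   # accuracy falls gently, not off a cliff
```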
Obviously, the technology is still in its infancy – we are a long way from a real-time readout of a person’s thoughts and dreams based on the activity of their neurons. But to Kay, his experiments are proof that such a device is possible and could become a reality sooner than we think.
Reference: Kay, K.N., Naselaris, T., Prenger, R.J., Gallant, J.L. (2008). Identifying natural images from human brain activity. Nature DOI: 10.1038/nature06713
