
How We Learn To See Faces

By Virginia Hughes
September 12, 2013

Two eyes, aligned horizontally, above a nose, above a mouth. These are the basic elements of a face, as your brain knows quite well. Within about 200 milliseconds of seeing a picture, the brain can decide whether it’s a face or some other object. It can detect subtle differences between faces, too — walking around at my family reunion, for example, many faces look similar, and yet I can easily distinguish Sue from Ann from Pam.

Our fascination with faces exists, to some extent, from the day we’re born. Studies of newborn babies have shown that they prefer to look at face-like pictures. A 1999 study showed, for example, that babies prefer a crude drawing of a lightbulb “head” with squares for its eyes and nose over the same drawing with the nose above the eyes. “I believe the youngest we tested was seven minutes old,” says Cathy Mondloch, professor of psychology at Brock University in Ontario, who worked on that study. “So it’s there right from the get-go.”

These innate predilections for faces change and intensify over the first year of life (and after that, too) as we encounter more and more faces and learn to rely on the emotional and social information they convey. Scientists have studied this process by looking mostly at babies’ abilities as they age. But how, exactly, our brains develop facial expertise — that is, how it is encoded in neurons and circuits — is in large part a mystery.

Two new studies tried to get at this brain biology with the help of a rare group of participants: children who were born with dense cataracts in their eyes, preventing them from receiving early visual input, and who then, years later, underwent corrective surgery.

After recording the brain waves of these children with electroencephalography (EEG), the researchers suggest that there is a “sensitive period” in brain development for face perception — a window of time during the first two months of life in which the brain requires visual input in order to fully acquire the skill. If the brain doesn’t get this input, it can still learn the crude aspects of face processing — identifying a face as a face, for example — but lacks the fine-tuned ability to distinguish one face from another. These differences show up not only in the patients’ behaviors, but in their brain waves.

Researchers have been studying cataract patients for years, as a natural experiment of nature versus nurture. After surgery, they can usually distinguish a face from a non-face. And if they see two faces that are identical except for the eyes, for example, they can spot that difference. But their face-detection isn’t sophisticated. “We know they are able to categorize faces and other objects, and even distinguish between two faces, but only if they see them from the same view,” notes Brigitte Röder, professor of neuropsychology at the University of Hamburg in Germany. “If the perspective changes, or lighting of pictures changes, or the emotional expression, they have a hard time.”


This week, Röder’s team published a study in the Proceedings of the National Academy of Sciences revealing the brain patterns of 11 of these patients while they looked at pictures of faces, houses, and scrambled versions of the faces and houses. The researchers were particularly interested in a specific brain-wave pattern known as the N170. The N170 (which stands for a negative peak around 170 milliseconds) is a pretty famous EEG marker because, in healthy people older than 1 year, it shows up a split-second after the brain sees a face. “It’s a very sensitive response that says, OK, there is a face,” Röder says.

None of the patients, even those who were blind for years before having surgery, had any trouble distinguishing faces from houses. But the way their brains performed the task was different. Whereas healthy controls showed the N170 marker only after seeing faces, the patients showed it after seeing any kind of visual stimulus.

This makes sense given what we know about early brain development, Röder says. “We are born with a lot of connections in the brain, and these connections are pruned down to 50 percent of the original number,” she says. “This pruning makes a functionally specialized system. It requires input during a particular phase of life, and it seems not to have taken place in these patients.” What’s more, she says, these deficits seem to persist for a long time, maybe forever. “Some of the individuals we’ve studied have been seen for more than 20 years, and they didn’t show this face sensitive response.”

These conclusions broadly agree with what Mondloch’s team proposed in a similar study, published in this month’s issue of Developmental Science. In this work, too, the researchers looked at N170 responses in 11 cataract patients. They used the same face-versus-house task as the other study, as well as a more difficult task using so-called ‘Mooney faces’, in which a photo of a face has been manipulated so that anything with above-average luminance turns white and anything with below-average luminance turns black. On both tasks, the cataract patients were just as good as controls at identifying faces versus non-faces.
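The luminance thresholding that turns a photograph into a Mooney face can be sketched in a few lines. This is a minimal illustration of the transform as described above (pixels above the image’s mean luminance become white, the rest black), not the actual stimulus-generation code either research team used:

```python
import numpy as np

def mooney(image: np.ndarray) -> np.ndarray:
    """Binarize a grayscale image at its mean luminance:
    above-average pixels become white (255), the rest black (0)."""
    threshold = image.mean()
    return np.where(image > threshold, 255, 0).astype(np.uint8)

# Tiny synthetic "image": two dark pixels, two bright ones
img = np.array([[10, 200],
                [30, 250]], dtype=np.uint8)
print(mooney(img))  # prints [[  0 255]
                    #         [  0 255]]
```

Because all shading and edge information is destroyed, recognizing the result as a face requires the brain to impose a holistic face interpretation on the black-and-white blobs, which is what makes the task harder than the face-versus-house comparison.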

The EEG results of this study were different from those of the first, however. In this one, patients, just like controls, showed a larger N170 response to faces compared with houses (and to upright Mooney faces compared with scrambled ones). Intriguingly, the amplitude of the N170 response was bigger for patients than for controls. The researchers interpret these results to mean that when performing these face tasks, patients are either recruiting more neurons, or firing the same neurons more intensely, than controls are. “Their brains seem to be working harder when it comes to face detection,” Mondloch says, presumably because they missed out on a sensitive period for face processing. Her group is now scanning the brains of some of these patients, which might clear up exactly which brain regions are driving the altered EEG response, she says.

The reason the two studies found different EEG patterns, Mondloch says, might be because of differences in the participants’ medical histories. The first study used patients who underwent corrective surgery between 2 months and 14 years old, whereas in her study, the surgeries happened between 62 and 161 days old. Perhaps the earlier the problem is fixed, the more likely that the kids will develop a typical, face-specific N170 response.

The timing question is key. In both studies, all of the patients had at least two months of visual deprivation. “Just two months seems to permanently change the brain’s response to faces, and cause permanent impairments in some face processing skills,” Mondloch says. But nowadays, she adds, cataract patients are getting diagnosed and treated even earlier. “We don’t know yet; if they were treated at two weeks or one month of age, perhaps they wouldn’t have these deficits.”

Other researchers aren’t so sure that the differences seen in the brains of these patients necessarily point to a sensitive period for face processing. “They’re saying that broad vision is not subject to a critical period, but that this fine-tuning, this very particular aspect of facial processing, is subject to a critical period. It would be an odd state of affairs if that was the case,” says Yuri Ostrovsky, visiting professor of neuroscience at Wenzhou Medical College. Of course, the brain sometimes acts in unexpected ways. Ostrovsky has also studied visual perception in previously blind children after cataract surgery, as part of a scientific and humanitarian effort in India called Project Prakash.

Ostrovsky points out that in the first study, 2 of the 11 patients showed a normal N170 response, and that some of the controls showed a somewhat abnormal response. “There’s a lot of variation, and that probably comes from differences in experiences,” he says.

Rather than there being a use-it-or-lose-it sensitive period for complex face processing, it might just be that the patients’ brains never learned to rely on faces as the controls’ brains did, and so naturally they wound up with a different strategy for processing them later on. “It would still be very interesting if the N170 were to be affected by social importance of stimulus,” he says. “That would point to the importance of sociology, not just biology or physical experience.”*

*Note: Ostrovsky’s quote has been changed since I first published this post because I originally misheard it and slightly misunderstood its intention.
