National Geographic

Does Language Shape What We See?

At this very moment, your eyes and brain are performing an astounding series of coordinated operations.

Light rays from the screen are hitting your retina, the sheet of light-sensitive cells that lines the back wall of each of your eyes. Those cells, in turn, are converting light into electrical pulses that can be decoded by your brain.

The electrical messages travel down the optic nerve to your thalamus, a relay center for sensory information in the middle of the brain, and from the thalamus to the visual cortex at the back of your head. In the visual cortex, the message jumps from one layer of tissue to the next, allowing you to determine the shape and color and movement of the thing in your visual field. From there the neural signal heads to other brain areas, such as the frontal cortex, for yet more complex levels of association and interpretation. All of this means that in a matter of milliseconds, you know whether this particular combination of light rays is a moving object, say, or a familiar face, or a readable word.

That explanation is far too pat, of course. It makes it seem like the whole process of visual perception has been figured out, when in fact the way our mind sees and interprets reality is in large part a mystery.

This post is about a question that’s long been debated among scientists and philosophers: At what point in that chain of operations does the visual system begin to integrate information from other systems, like touch, taste, smell, and sound? What about even more complex inputs, like memories, categories, and words?

We know the integration happens at some point. If you see a lion running toward you, you will respond to that sight differently depending on whether you are roaming alone in the Serengeti or visiting the zoo. Even if the two sights are exactly the same, presenting identical optical input to your retinas, your brain will use your memories and knowledge to put your vision into context and help you interpret the lion as threatening or cute. Here’s a less far-fetched example. In 2000, researchers showed that hearing simple sounds can drastically change how you perceive flashing circles. (If you’re up for a fun 44 seconds, go watch the video those researchers used to demonstrate this effect.)
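
If you’re curious how a trial like that gets programmed, here’s a rough sketch in Python. A caveat: this assumes the PsychoPy library, and the timing values are my own illustrative guesses, not the parameters from the 2000 study.

    # Minimal sketch of a sound-induced flash illusion trial.
    # Assumes PsychoPy (pip install psychopy); timings are illustrative.
    from psychopy import visual, sound, core

    win = visual.Window(size=(800, 600), color="black")
    flash = visual.Circle(win, radius=0.1, fillColor="white", pos=(0, -0.5))
    beep = sound.Sound(value=440, secs=0.01)  # brief 440 Hz tone

    def run_trial(n_flashes, n_beeps):
        """Present n_flashes brief flashes while playing n_beeps tones.
        With one flash and two beeps, many observers report two flashes."""
        for i in range(max(n_flashes, n_beeps)):
            if i < n_beeps:
                beep.play()
            if i < n_flashes:
                flash.draw()
            win.flip()       # show this frame (flash visible if drawn)
            core.wait(0.02)
            win.flip()       # blank frame
            core.wait(0.05)  # gap before the next flash/beep

    run_trial(n_flashes=1, n_beeps=2)  # the classic illusory double-flash case
    win.close()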

Some experts argue that our brains integrate information from other systems only after processing the basic visual information. So in the above example, they’d argue that the visual cortex processes the sight of circles first, and then cells in some later, or ‘higher order’, stage of neural processing — in the frontal cortex or temporal cortex, for example — deal with integrating the sound information with the visual information. I’ll call this the ‘modular’ camp, because these experts believe that the visual cells of the brain are encapsulated from other types of cells.

Other scientists, though, say that the brain is integrating information from other systems at the same time that it is processing the visual part. A study published yesterday in the Proceedings of the National Academy of Sciences provides some of the strongest evidence to date for this idea. Gary Lupyan of the University of Wisconsin–Madison found that language — one of our most sophisticated cognitive abilities — affects not only what we see with our eyes, but whether we see anything at all.

“We don’t imagine reality in any way we want — perception is still highly constrained,” Lupyan says. “But there are many cases where you want to augment or even override input from one modality with input from another.”

Lupyan’s study is notable for the clever way it tapped into our ‘lower level’ visual processing. The researchers showed participants different images in their right and left eyes at the same time. In one eye, they’d see a familiar picture, such as a kangaroo or a pumpkin, and in the other they’d see ugly visual noise: a rapidly changing mess of lines. When these two images are presented at the same time, our minds process only the noisy part and completely ignore the static, familiar image. Previous experiments have shown that this so-called ‘continuous flash suppression’ disrupts the early stages of visual perception, “before it reaches the levels of meaning”, Lupyan says.
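
To give a flavor of what that ‘rapidly changing mess of lines’ looks like under the hood, here’s a rough sketch of how such a mask is often generated: a Mondrian-style patchwork of random rectangles, refreshed several times a second. The sizes and refresh rate below are my own illustrative guesses, not the parameters Lupyan’s team used.

    # Sketch of the 'visual noise' half of continuous flash suppression:
    # each frame is a Mondrian pattern of random colored rectangles.
    import numpy as np

    def mondrian_mask(height=256, width=256, n_rects=100, rng=None):
        """Return one frame of Mondrian noise as an RGB uint8 array."""
        rng = rng or np.random.default_rng()
        frame = np.zeros((height, width, 3), dtype=np.uint8)
        for _ in range(n_rects):
            y, x = rng.integers(0, height), rng.integers(0, width)
            h, w = rng.integers(10, 60, size=2)   # rectangle size in pixels
            color = rng.integers(0, 256, size=3)  # random RGB fill
            frame[y:y + h, x:x + w] = color
        return frame

    # Flashing a fresh mask to one eye roughly 10 times per second keeps
    # the static image shown to the other eye suppressed from awareness.
    masks = [mondrian_mask() for _ in range(10)]  # one second's worth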

In Lupyan’s study, participants sometimes heard the name of the static object — like the word ‘kangaroo’ or ‘pumpkin’ — played into their ears. And on these trials, the previously invisible object would pop into their conscious visual perception. If they heard a different word, though, they would not see the hidden object. “So it’s not that they are hallucinating or imagining a dog being there,” Lupyan says. “If they hear the label, they become more sensitive to inputs that match that label.”
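
That phrase, ‘more sensitive’, has a precise meaning in psychophysics: detection sensitivity, usually quantified as d-prime, which rises when hits increase without a matching increase in false alarms. Here’s a toy calculation with made-up numbers, just to show the logic; these are not the paper’s data.

    # d' = z(hit rate) - z(false-alarm rate); higher means better detection.
    from statistics import NormalDist

    def d_prime(hit_rate, false_alarm_rate):
        z = NormalDist().inv_cdf  # inverse of the standard normal CDF
        return z(hit_rate) - z(false_alarm_rate)

    # Hypothetical numbers: hearing the matching label ('kangaroo' before
    # a suppressed kangaroo) raises hits without raising false alarms.
    print(d_prime(0.75, 0.20))  # matching label    -> about 1.52
    print(d_prime(0.55, 0.20))  # mismatching label -> about 0.97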

Because flash suppression is thought to act on lower level visual processing, Lupyan says these data bolster the idea that even these lower levels are susceptible to inputs from outside of the visual system.

I reached out to several other scientists to see what they thought of the study. The two who responded were largely positive. Michael Spivey, a cognitive scientist at the University of California, Merced, said the study makes an important contribution to the literature, adding to already overwhelming evidence against the modular theory. “I’m constantly amazed that modular theorists of vision fight so hard,” says Spivey (who was Lupyan’s post-doc advisor). He points to anatomical studies showing that the brain has oodles of feedback loops between the frontal and visual cortices. “We don’t have a clear understanding yet of how those signals work, but they’re there.”

David Cox, a neuroscientist at Harvard, also thought the study’s methods were sound, though he wondered whether the effects were due to language, per se. “I wonder if you could get a similar effect in an animal — say, a monkey — by pre-cueing with a picture of the target of interest. There is a lot of evidence that performance can be increased in a variety of difficult detection scenarios if you know what you are looking for.” Whether the results are due to language or not, though, he says the study demonstrates that non-visual inputs affect early visual processing: “None of this takes away from the result; it’s more a question of interpretation.”

Lupyan is interested in this question, too. In future studies, he plans to repeat the experiment using not only word labels but also non-verbal cues, and associate them not only with familiar objects, but unfamiliar ones. “We’re interested in the origin of these effects, and how much training is necessary,” Lupyan says. “My prediction is you probably don’t need very much experience with a label in order [for it] to modulate visual processing.”

The scientists I really want to hear from, of course, are those in the “vision is modular” camp. Unfortunately none of them responded to my inquiries. If any of you are reading this now, please leave a comment and tell us what you think.

There are 13 Comments.

  1. msb
    August 13, 2013

    Hey, cool, it’s Emily Ward.

    I would describe myself as someone with one toe in the modularist camp, and I don’t think the interpretation is quite right.

    Interocular rivalry is not that low-level, at least not exclusively. Probably binocular summation occurs throughout the various levels of visual processing. This is especially apparent with extreme cases of rivalry. You can cross your eyes and change which rival is dominant with attentional shifts.

    Conscious visual awareness is also not inherently low level. It can involve attention and semantic factors, as we know from change blindness studies.

    Finally, and more speculatively, just as food for thought: the meaningful stimuli are presented to the left eye. For most subjects with left-lateralized language, this will inhibit their ability to conceptually process the stimulus. It may be that left semantic pathways that otherwise operate automatically are underactivated with exclusively left visual field information, and are somehow nudged into action with the correct priming.

    I like the study, though!

  2. Ernest Barker
    August 13, 2013

    While recovering from a stroke I had two visions. One, a road sign that was not there. The other, a man and a young girl in the foyer of a restaurant. They were not there. The visions prompted me to read several books on the brain and neurology, all explaining how vision, hearing, smell, taste and touch are tied together to make up what we perceive in our minds. I conjured the sights, smells and sounds, in my mind, from the written words in a couple of fiction books. Later, something in the environment and a slightly messed-up brain triggered the visions. At least in my case, language had the same or a bigger effect than vision, hearing, smell, taste and touch. I believe your question “Does Language Shape What We See?” can only be answered with a YES. The visions are incidental to the question. What is relevant is the fact that I perceived the sounds, sights, and smells from words the same as if they came from the other senses.

  3. Elyn Kohlap
    August 13, 2013

    @Ernest Barker
    My mother recently suffered a massive stroke during a surgery to remove a brain anuerysm and like your vision she also had two separate cases where one time she saw three of her close friends who are all deceased from a long hard fight with cancer yet she saw them healthy standing around her hospital bed telling her she would be alright she’s just going to have to fight really hard. And the other time she said it was as if she could see inside her head as if her eye was looking backwards and she saw a fire burning in one spot in her head. Coincidentally the fire was burning in the same area of her anuerysm. She lost the complete vision in her left eye due to the stroke so seeing anything at all seems very unusual. I believe some studies should be done in this area if this happens to be a common thing.

  4. Aida Galvàn
    August 13, 2013

    Thank you for the research. Any knowledge about our brain is great and helps us have a better and healthier future as humankind.

  5. pilyapearl
    August 13, 2013

    the more we learn about the brain the more we find evidence of highly integrated parallel processes. . . our consciousness is the result of a simultaneous experience of all sensory stimuli of which our awareness is only the most minuscule corner. f* yeah brain science!!

  6. Rdizzie
    August 13, 2013

    While it is an interesting read, I do not see how it is all that groundbreaking. It is no dffienert tahn dniog hits anym oeplpe illw raed ti dna otn veen onitice the letters are not in order. Nor is it really all that different from knowing the words for a word search, many of them just pop right out with no searching.
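
    (For anyone who wants to generate that kind of scrambled text themselves, a few lines of Python will do it. This version pins each word’s first and last letters, which is how the classic demo works; the example sentence is my own.)

        # Shuffle each word's interior letters, keeping the first and
        # last letters in place.
        import random

        def scramble_word(word):
            if len(word) <= 3:
                return word  # too short to scramble
            interior = list(word[1:-1])
            random.shuffle(interior)
            return word[0] + "".join(interior) + word[-1]

        def scramble(text):
            return " ".join(scramble_word(w) for w in text.split())

        print(scramble("many people will read it and not even notice"))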

  7. Rdizzie
    August 13, 2013

    I just realized this guy spent a ton of research money doing a hidden object search. I can buy a 300-page book of them for $2.99. I hope those who gave him the grant realize their mistake.

  8. Sam
    August 13, 2013

    Yup, wasted money. So everyone, try not to think of a pink elephant. A priori and a posteriori posturing in action. Sigh.

  9. Joe Brewer
    August 14, 2013

    Hi Virginia,

    Thanks for writing on this vitally important topic! I’d like to share some of my own thoughts (side note: I am trained in cognitive linguistics) by offering up an article I recently wrote about the faulty assumptions associated with political polling:

    Why Most Pollsters Are Delusional

    There is ample evidence from cognitive psychology, social psychology, and neural computational research on speech patterns and language use supporting the claim that human conceptualization involves many nonlinear feedbacks between perception, feeling, conceptual meaning-making, and language use.

    I’d be happy to speak further about this topic. Feel free to email me to talk more.

    Best,

    Joe Brewer
    Director, Cognitive Policy Works

  10. mas’ud
    August 16, 2013

    Hey, intelligent one! Thank you for the informative article. Waiting to learn more in the coming days…

  11. Pat
    August 23, 2013

    Selective perception, guided by prior experience or degree of knowledge: familiarity determines what registers in the brain as something to focus upon – for intellectual curiosity, or to help trigger the fight-or-flight instinct associated with survival, i.e., to avoid pain and induce pleasure.

    The process is probably most evident in Navy SEALs, who bring courage to tasks that would frighten typical soldiers because the perceptions are unfamiliar and hence induce fear.

    The process of awareness, i.e., what we call consciousness through selective perception, is best fed by education and the intellect, which is meant to separate humans from animals in their perceptive capacity.
    Ignoring the importance of that process means embracing a much lower quality of human potential than the human is capable of exercising during development.

    The absence of education has broad and far-reaching detriments, making humans weaker and nations vulnerable to attack. All nations with free college tuition and affordable, good child care are apparently aware of the dangers of capping education and making their populations vulnerable.
    America is unique in its ignorance in that respect.

    There is nothing about the issue that is race-based, gender-based, religion-based, or class-based – nor need there be. America has one chance to do it right, yet falls short of its own potential… crippling itself with minutiae that doesn’t matter in the larger scheme of human progress.

  12. J.L.
    August 27, 2013

    This study reminds me of one conducted by John Lilly (referenced in Center of the Cyclone), in which a word is recorded and looped. After listening to the repeated word long enough, most people start to hear words other than the one repeated. Now, if an index card with one of the similar words is held at the edge of a person’s vision, they begin to hear the word on the card as the one repeated, even though they cannot read the word directly.
    In that case, vision (without conscious perception) was changing what they heard. But of course this was all done in a language the participants could understand. I wonder how it would change if the object in the above study was something unfamiliar to the person… or if the language used was one in which the person was not fluent?

  13. Adrian
    November 1, 2013

    The limits of my language mean the limits of my world.
    Ludwig Wittgenstein
