The 19th Century Doctor Who Mapped His Hallucinations

Hubert Airy’s 1870 diagram of his migraine aura looks familiar to many migraineurs today.
The Royal Society

Hubert Airy first became aware of his affliction in the fall of 1854, when he noticed a small blind spot interfering with his ability to read. “At first it looked just like the spot which you see after having looked at the sun or some bright object,” he later wrote. But the blind spot was growing, its edges taking on a zigzag shape that reminded Airy of the bastions of a fortified medieval town. Only, they were gorgeously colored. And they were moving.

“All the interior of the fortification, so to speak, was boiling and rolling about in a most wonderful manner as if it was some thick liquid all alive,” Airy wrote. What happened next was less wonderful: a splitting headache, what we now call a migraine.

Hubert Airy’s drawing, shown here in its entirety, illustrates how his migraine aura grew over the course of about 20 minutes (click the image to expand).
The Royal Society

Airy was a student when he suffered his first migraine, but he later became a physician. His description of his aura—the hallucinatory symptoms that can precede a migraine—was published in the Philosophical Transactions of the Royal Society in 1870, along with a drawing that showed how the hallucination grew to take over much of his visual field. “It’s an iconic illustration,” says Frederick Lepore, an ophthalmological neurologist at Rutgers Robert Wood Johnson Medical School in New Jersey. “It’s so precise, like a series of time-lapse photographs.”

Lepore showed Airy’s drawing to 100 of his migraine patients who experience a visual aura (only a minority do). Forty-eight of them recognized it instantly, he wrote in a historical note in the Journal of Neuro-Ophthalmology in 2014. He still shows the drawing to his patients today. “People are astonished,” he says. “They say, ‘Where did you get that?’”

What’s more remarkable, Lepore says, is that Airy’s drawing anticipates discoveries in neuroscience that were still decades in the future.

Airy correctly deduced that the source of his hallucinations was his brain, not his eyes. He wasn’t the first to do this, but it was still an open question at the time.

What’s most prescient about his drawing, though, is that it anticipates the discovery of an orderly map of the visual world in the primary visual cortex, a crucial brain region for processing what we see. When Airy published his paper, that discovery was still nearly half a century away.

This diagram by Gordon Holmes illustrates how different regions of the visual field (right) map onto different regions of the primary visual cortex (left).
The Royal Society

Most accounts credit the British neurologist Gordon Holmes with that later discovery. Holmes studied the visual deficits of hundreds of soldiers who’d suffered gunshot wounds to the back of the head in World War I. “The British helmet was seated high on the head,” Lepore wrote, in a historical paper describing Holmes’s contributions. Unfortunately, this left the primary visual cortex largely unprotected, and provided Holmes many opportunities to study damage to this part of the brain.

By carefully mapping the soldiers’ blind spots and the locations of their wounds, Holmes discovered that damage to the most posterior part of visual cortex (that is, the part farthest back in the head) resulted in blindness at the center of the visual field, whereas wounds located closer to the front of the visual cortex resulted in blindness off to the side. Everything the eyes see maps neatly onto the visual cortex.

Holmes also discovered—and this is the part that relates to Airy’s drawing—that the visual map is magnified at its center. If the visual cortex is a road atlas, the part that represents the center of the visual field is like one of those inset city maps that show a smaller area in lots more detail.
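In modern terms, that inset-map effect is described by the cortical magnification factor. As a rough illustration (this formula comes from modern textbooks, not from Holmes’s or Airy’s papers), magnification is often approximated as falling off inversely with eccentricity:

```latex
% A standard textbook-style approximation of cortical magnification,
% offered only to make the "inset map" idea concrete.
% E   : eccentricity, in degrees away from the center of gaze
% M_0 : magnification at the center of gaze (mm of cortex per degree)
% E_2 : the eccentricity at which magnification falls to half of M_0
M(E) \approx \frac{M_0}{1 + E/E_2}
```

Under a relation like this, a single millimeter of cortex represents a tiny sliver of the central visual field but a broad swath of the periphery.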

This meshes nicely with Airy’s observation that the zigzags around his blind spot were packed tightly together in the center of his visual field and grew wider in the periphery. “Airy’s drawing fits beautifully with our modern conception of how the visual cortex is organized,” Lepore says.

Hubert Airy’s father, George, also saw zigzag hallucinations, but they didn’t precede a headache for the elder Airy.
The Royal Society

There’s still much we don’t know about migraines and migraine auras. One hypothesis is that a sort of electrical wave sweeps across the visual cortex, causing hallucinations that spread across the corresponding parts of the visual field. In a loosely descriptive way, Airy’s time-series drawings—showing an ever-expanding shape—jibe with this too.

Even less is known about the neural mechanisms that might produce the vivid colors Airy drew and described. There are areas of the visual cortex, including one called V4, that contain neurons that respond to specific colors, as well as other neurons that respond to lines of specific orientations. Perhaps an electrical wave passing through such areas could produce colored zigzags, Lepore says. But no one really knows.

Airy wasn’t the first to draw his migraine aura. In fact, his father, George, who happened to be the Astronomer Royal, had published a sketch of his own zigzag hallucinations five years earlier (see above). A German neurologist published a fairly crude, looping sketch back in 1845. And others did so afterwards. The drawings made by the French neurologist Joseph Babinski (see below) are especially colorful, if lacking in detail.

But Hubert Airy’s drawing has stood the test of time better than most. His paper in the Philosophical Transactions, published when he was 31, was his only contribution to the field. It’s written in the somewhat pompous, somewhat conversational style of a 19th-century polymath relating his observations to other learned men. One lengthy section recounts the observations of a Swiss doctor in the original French. Naturally, the readers of such a prestigious journal could translate for themselves.

That Airy got so much right at a time when so little was known about the brain is a testament to his powers of observation, Lepore says. He documented what he saw meticulously, even though it was visible to himself alone.

This detail from Joseph Babinski’s 1890 drawing of his migraine aura shows a zigzag pattern not unlike the one Hubert Airy saw.
Wellcome Library

–Greg Miller

The Distributed Brainpower of Social Insects

Here’s David Attenborough, chilling out on a rock in the middle of Africa, with four lumps of plasticine. The smallest one on the far left represents the brain of a bushbaby, a small primate that lives on its own. The next one is the brain of a colobus monkey, which lives in groups of 15 or so. The one after that is a guenon, another monkey; group size: 25. And on the far right: a baboon that lives in groups of 50. “Were you to give a skull to a researcher who works on monkeys, even though they didn’t know what kind of monkey it belonged to, they would be able to accurately predict the size of group in which it lived,” says Attenborough.

That sequence, from The Life of Mammals, is a wonderful demonstration of the social brain hypothesis—a bold idea, proposed in the 1980s, which suggests that living in groups drove the evolution of large brains. Social animals face mental challenges that solitary animals do not: they have to recognise the other members of their cliques, cope with fluid and shifting alliances, manage conflicts, and manipulate or deceive their peers. So as social groups get bigger, so should brains. This idea has been repeatedly tested and confirmed in many groups of animals, including hoofed mammals, carnivores, primates, and birds.

What about insects? Ants, termites, bees, and wasps also live in large societies, and many of them have unusually big brains—at least, for insects. But in 2010, Sarah Farris from West Virginia University and Susanne Schulmeister from the American Museum of Natural History showed that in these groups, large brains evolved some 90 million years before big social groups. If anything, they correlated with parasitic body-snatching rather than group-living.

“That got people thinking,” says Sean O’Donnell from Drexel University. “In recent years, there’s been a growing rumbling, almost subterranean movement arguing that social brain ideas may not apply to the social insects.” His new study is the latest addition to that movement.

O’Donnell’s team studied potter wasps, which lead solitary lives, and the closely related paper wasps, which live in colonies of varying sizes and complexities. They collected queens and workers from 29 species of these wasps, carefully dissected their brains, and measured the size of their mushroom bodies—a pair of structures in insect brains that control higher mental abilities like learning and memory.

And to their surprise, they found that as the wasp colonies got bigger, their mushroom bodies got smaller. Even within the social paper wasps, the team found that species with distinct queens and workers—a sign of a more complex society—have mushroom bodies similar in size to those of species with no such castes.

“The pattern is so clear,” says O’Donnell. “Sociality may actually decrease demands on individual cognition rather than increasing it.”

“Here we have the first concerted evidence that costly brains aren’t needed to allow sociality, when you can do it other ways,” says Robin Dunbar, who first proposed the social brain hypothesis. “There are many routes to sociality.”

What other routes? O’Donnell notes that insects and (most) mammals build their societies in fundamentally different ways. Large mammal societies typically include individuals who are distantly related or even unrelated. Insect societies, by contrast, are basically gigantic families, where all the members are either queens (which reproduce) or their descendants (which do not). You could view these colonies less as groups of individuals and more as extensions of the queens.

As such, their members don’t particularly need to keep track of shifting relationships, or manage conflicts, or manipulate their peers, or any of the other social challenges that, say, a baboon or a human faces. They have less of a need for bigger and more sophisticated brains.

Social insects also benefit from swarm intelligence, where individuals can achieve astonishing feats of behaviour by following incredibly simple rules. They can build living buildings, raise crops, vaccinate themselves, and make decisions about where to live. In some cases, they make decisions in a way that’s uncannily similar to neurons—a colony behaves like a giant brain, and in more than a merely metaphorical way. They have a kind of ‘distributed cognition’, where many of the mental feats that other animals carry out using a single brain happen at the level of the colony.

Entomologist Seirian Sumner from Bristol University says that there are mammals, like meerkats and banded mongooses, which live in simple societies where adults cooperatively raise their young. These are often compared to primitively social insects, like paper wasps. “They share very similar family structures, group sizes and plasticity in behavioural roles,” Sumner says. It would be very interesting to see if the brains of these mammals follow the same patterns as those of O’Donnell’s wasps.

O’Donnell is all in favour of more studies. He wants to see if the same patterns hold in other insect groups that include both social and solitary species, including bees and cockroaches. And he’s intrigued by the naked mole rats—colonial mammals that have queen and worker castes, much like ants and wasps. “If our ideas are correct, we’d expect to see mole rats following a similar pattern to insects,” he says.

Reference: O’Donnell, Bulova, DeLeon, Khodak, Miller & Sulger. 2015. Distributed cognition and social brains: reductions in mushroom body investment accompanied the origins of sociality in wasps (Hymenoptera: Vespidae). Proc Roy Soc B. Citation tbc.

PS: The size of brain regions isn’t always the best indicator of intelligence. I asked O’Donnell about this, and he stands by his decision to focus on the mushroom bodies. “It’s definitely a blunt tool for studying brain evolution,” he says. “Brain tissue is metabolically very expensive, and even if it was just filler, tissue weight is a big deal, especially for a flying insect. We expect there to be really strong constraints on the size of the [mushroom bodies].”

Genius and the Brain

The 92nd St. Y in New York is presenting “Seven Days of Genius” this week. As part of the festivities, the video site Big Think invited me to film a conversation with neuroscientist Heather Berlin about the nature of genius and the origin of creativity in the brain.

Here’s the video, which we taped at YouTube headquarters:


We Are Instant Number Crunchers

If you have ever struggled through a math class, you may not think of numbers as natural. They may seem more like a tool that you have to learn how to use, like Excel or a nail gun. And it’s certainly true that numbers pop up in the archaeological record just a few thousand years ago, with the abruptness you’d expect from an invention. People improved the number system after that, with the addition of zero and other upgrades.

But scientists have found that we are actually born with a deep instinct for numbers. And a new study suggests that our number sense operates much faster than previously thought. It might be better called our number reflex.

Some of the most compelling evidence for the number sense comes from studies on babies. In a 2010 study, for example, Elizabeth Brannon of Duke and her colleagues showed 6-month-old babies pictures of dots. As the researchers switched between different pictures, they tracked how long the babies looked at each one. In some cases, the pictures were identical. In others, the dots differed in size or spacing. And in still other cases, Brannon and her colleagues added extra dots to the pictures.

When Brannon and her colleagues looked over their data, they found that the attention of the babies tended to be grabbed when they switched the number of dots. What’s more, the babies looked longer at a picture when the difference in the number was bigger.

The number sense in infants is the raw material for math aptitude later in life, as Brannon documented when she followed up on the infants three years later: their sensitivity to numbers as six-month-olds predicted how well they scored on math tests as three-year-olds. Other scientists have also found a link between number sense and math skills in fourteen-year-olds.

Having discovered our number sense, Brannon and other researchers have begun probing our brains to see how it works biologically. It’s not easy to tease out the number sense from all the other things our brains do when they take in a visual scene. There’s a huge amount of information to decipher in an instant of vision, and our brains use a complex network of regions to get the job done.

When light hits our eyes, the retina takes the first pass at processing the image and then fires signals down the optic nerve to the back of the head. The visual cortex then teases out some basic features, such as brightness, edges, color, and so on. The regions where this processing takes place then send signals to other parts of the brain, which detect more complex things, like body movements and faces.

Some researchers have proposed that our awareness of numbers only emerges late in this pathway. We may first have to detect other features of a scene, and then analyze them in order to figure out how many objects there are in a group. If we look at three lemons on a counter, for example, we might first have to calculate the total area of yellow in our field of vision, determine how much yellow is in each lemon, and then divide the former by the latter.
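As a minimal sketch of that indirect, area-based route to number (the lemon scenario above, with invented quantities), the computation would amount to a single division:

```python
# Toy version of the "late" route to number described above: infer
# how many lemons are in view by dividing the total yellow area by
# the area of one lemon. All quantities are invented for illustration.

def count_by_area(total_yellow_area: float, area_per_lemon: float) -> int:
    """Estimate object count from aggregate area."""
    return round(total_yellow_area / area_per_lemon)

# Three lemons, each covering ~40 arbitrary units of the visual field:
print(count_by_area(total_yellow_area=120.0, area_per_lemon=40.0))  # -> 3
```

The point of the EEG experiment described below is that the brain appears not to need this kind of detour.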

To probe where our number sense lies on the path of thought, Brannon and her colleagues placed EEG caps on people’s heads. Then they showed their volunteers pictures of dots. As in Brannon’s earlier experiments, they varied the pictures with extra dots, as well as changing the size or spacing. Each time, the scientists recorded the electricity produced by people’s brains as they processed what they saw.

Analyzing the different responses, the scientists noticed one fascinating spike of electrical activity that emerged from the back of the brain. The strength of the spike varied with the number of dots people saw. The more dots, the bigger the spike.

The size and spacing of the dots, by contrast, had no effect on the spike. If we sensed numbers only by analyzing other features of objects, then you might expect to see an influence. But Brannon and her colleagues could find none. They conclude that this spike represents our direct detection of numbers.

What makes this spike even more intriguing is how fast it occurs: just 75 milliseconds after the scientists present a picture. At that stage in visual perception, the visual cortex is just starting to process signals from the eye. Numbers, the new research suggests, are so important that we start sensing them before we’re even aware of what we’re seeing.

(For more on our number sense and other discoveries about the brain, see my ebook anthology, Brain Cuttings.)

A Blog by

Fast-Evolving Human DNA Leads to Bigger-Brained Mice

Between 5 and 7 million years of evolution separate us humans from our closest relatives—chimpanzees. During that time, our bodies have diverged to an obvious degree, as have our mental skills. We have created spoken language, writing, mathematics, and advanced technology—including machines that can sequence our genomes. Those machines reveal that the genetic differences separating us from chimps are subtler than the outward ones: we share between 96 and 99 percent of our DNA.

Some parts of our genome have evolved at particularly high speed, quickly accumulating mutations that distinguish them from their counterparts in chimps. You can find these regions by comparing different mammals and searching for stretches of DNA that are always the same, except in humans. Scientists started identifying these “human-accelerated regions” or HARs about a decade ago. Many turned out to be enhancers—sequences that are not part of genes but that control the activity of genes, telling them when and where to deploy. They’re more like coaches than players.

It’s tempting to think these fast-evolving enhancers, by deploying our genes in new formations, drove the evolution of our most distinguishing traits, like our opposable thumbs or our exceptionally large brains. There’s some evidence for this. One HAR controls the activity of genes in the part of the hand that gives rise to the thumb. Many others are found near genes involved in brain development, and at least two are active in the growing brain. So far, so compelling—but what are these sequences actually doing?

To find out, J. Lomax Boyd from Duke University searched a list of HARs for those that are probably enhancers. One jumped out—HARE5. It had been identified but never properly studied, and it seemed to control the activity of genes involved in brain development. The human version differs from the chimp version by just 16 DNA ‘letters’. But those 16 changes, it turned out, make a lot of difference.

Boyd’s team introduced the human and chimp versions of HARE5 into two separate groups of mice. They also put these enhancers in charge of a gene that makes a blue chemical. As the team watched the embryos of their mice, they would see different body parts turning blue. Those were the bits where HARE5 was active—the areas where the enhancer was enhancing.

Embryonic mice start building their brains on their ninth day of life, and HARE5 becomes active shortly after. The team saw that the human version is more strongly active than the chimp one, over a larger swath of the brain, and from a slightly earlier start.

HARE5 seems to be particularly active in stem cells that produce neurons in the brain. The human version of the enhancer makes these stem cells divide faster—they take just 9 hours to split in two, compared to the usual 12. So in a given amount of time, the mice with human HARE5 developed more neural stem cells than those with the chimp version. As such, they accumulated more neurons.
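To see why shaving three hours off the cell cycle matters so much, it helps to run the numbers. Here is a deliberately simple model (it ignores cells that stop dividing or differentiate, so treat it as illustration only):

```python
# Toy model: over a fixed developmental window, stem cells cycling
# every 9 hours complete more doublings than cells cycling every 12.
# Real neurogenesis is messier; this only shows the compounding effect.

def population(hours: float, cycle_hours: float, start: int = 1) -> int:
    doublings = int(hours // cycle_hours)
    return start * 2 ** doublings

window = 36  # an arbitrary 36-hour stretch of development
print(population(window, cycle_hours=12))  # 2**3 = 8 cells
print(population(window, cycle_hours=9))   # 2**4 = 16 cells
```

Even over a day and a half, the faster cycle yields twice the cells; over longer windows the gap widens further.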

And they developed bigger brains. On average, their brains were 12 percent bigger than those of their counterparts. “We weren’t expecting to get anything that dramatic,” says Debra Silver, who led the study.

“Ours stands as among the first studies to demonstrate any functional impact of one of these HARs,” she adds. “It shows that just having a few changes to our DNA can have a big impact on how the brain is built. We’ve only tested this in a mouse so we can’t say if it’s relevant to humans, but there’s strong evidence for a connection.”

“I’m really excited that people are following up [on these HARs] and finding out what they do,” says Katherine Pollard from the Gladstone Institutes, who was one of the scientists who first identified these sequences. “It’s been really daunting to figure out what the heck these things do. Each one takes years. These guys went the extra mile beyond what everyone else has been doing, by showing changes in the cell cycle and in brain size.”

“It’s a very clever use of mice as readouts for human-chimp differences,” says Arnold Kriegstein from the University of California, San Francisco. “The [brain] size difference isn’t terribly big, but it’s certainly in the correct direction.”

Eddy Rubin from the Joint Genome Institute is less convinced. His concern is that the team’s methods could have saddled the mice with multiple copies of HARE5 in various parts of their genome. As such, it’s not clear if the differences between the two groups are due to those extra copies and their placement, rather than to the 16 sequence differences between the human and chimp enhancers. “[That] casts major shadow on the conclusions,” says Rubin. “This is an interesting study pursuing an important issue, but the results should be taken with a grain of salt.”

Regardless, Silver’s team are now continuing to study HARE5. Now that their mice have grown up, they are designing tests to see if the adults behave differently thanks to their larger brains. This is important—bigger brains don’t necessarily mean smarter animals. They’re also looking into a few other enhancers. One of them, for example, seems to control a gene that affects the growth of neurons.

“I think HARE5 is just the tip of the iceberg,” says Silver. “It is probably one of many regions that explain why our brains are bigger than those of chimps. Now that we have an experimental paradigm in place, we can start asking about these other enhancers.”

Reference: Boyd, Skove, Rouanet, Pilaz, Bepler, Gordan, Wray & Silver. 2015. Human-Chimpanzee Differences in a FZD8 Enhancer Alter Cell-Cycle Dynamics in the Developing Neocortex. Current Biology http://dx.doi.org/10.1016/j.cub.2015.01.041

More on enhancers:

Did a gene enhancer humanise our thumbs?

RNA gene separates human brains from chimpanzees

Wi-Fi Brain Implants For Robot Arms

For many paralyzed people, the problem is a communication gap. They can generate the signals in their brains required to control their muscles–to walk, to wash dishes, to weed a garden. But damage to their nervous system prevents those signals from reaching their destination.

Last year, in a feature I wrote for National Geographic about the brain, I recounted the work of scientists and engineers who are trying to bridge that gap. Their dream is to create a technology that reads signals from people’s brains and uses them to control machines. The machines might be robot arms that people could use to feed themselves, or computers to compose emails, or perhaps even exoskeletons that could enable people to walk.

Scientists have been investigating these brain-machine interfaces for decades, and in recent years they’ve made some impressive advances–some of which I described in my story. But it would be wrong to giddily declare that scientists have reached their goal. You need only look at this picture below to get a sense of how far we are from science-fiction dreams.

UPMC

This woman, Jan Scheuermann, is at the forefront of brain-machine interface research. She volunteered to have electrodes implanted in the surface of her brain. Researchers at the University of Pittsburgh connected the electrodes to pedestals on top of her scalp. Cables can be attached to the pedestals; they connect to a computer and a power source.

Scheuermann and the scientists worked together to train the computer to recognize signals from her brain and use them to control a robot arm. In December 2012, Scheuermann made news by controlling the robot arm so well she could feed herself a bar of chocolate.

But this system was hardly ready for prime time. The electrode apparatus has to pass through a hole in a patient’s skull, creating the risk of infection. The cables tether the patient to bulky machines, which would make the whole system cumbersome rather than liberating.

In addition, the robot arm had plenty of room for improvement. It had seven degrees of freedom. Scheuermann could control its shoulder, elbow, and wrist joints. The hand, however, could only open and close. So Scheuermann had the same kind of dexterity as if she wore a mitten.

A 1958 pacemaker–wired to a cart of machines. Source: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3232561/

None of this was any reason to dismiss brain-machine interfaces as having reached a dead end. The history of pacemakers started out in much the same place. Today, people can walk around with pacemakers implanted in their chests without anyone around them having the slightest awareness that a device is regulating their heartbeat. Sixty years ago, however, the first pacemakers were enormous, cumbersome affairs. Implanted electrodes were tethered to wires that ran to big machines. Patients either had to lie next to the machines or trundle them around on a cart. The pacemakers were also relatively simple, delivering fixed patterns of electricity to the heart. Too often, they failed to keep the heart working.

In the 1960s, pacemakers became portable and battery-powered. They still needed external wires, but the wires now ran to a small box that a patient could carry on a belt. Finally, pacemakers disappeared into the body completely. In 2009, doctors began implanting pacemakers that not only had their own power supply but could also communicate medical information to doctors with a Wi-Fi connection to the Internet. Pacemakers also deliver more sophisticated signals to the heart, using algorithms to adjust their rhythms. If someone looked at the ungainly state of pacemakers in 1960 and declared them hopeless, they would have been profoundly wrong.

Two new studies are pushing brain-machine interfaces forward in the same way.

Yin et al Neuron 2014 http://dx.doi.org/10.1016/j.neuron.2014.11.010 

The first study advances the electrode end of the interface. A team of scientists led by Arto Nurmikko of Brown University developed an implant that requires no wires. The implant can pick up signals from 100 different electrodes. It contains microelectronics that can turn these signals into a Wi-Fi transmission broadcast at a rate of 200 Mb per second. The researchers implanted the device in monkeys and found that they could pick up signals from five yards away with a quality on par with signals delivered by cables. The monkeys went about their business freely, and the scientists could pick out signals they used to walk on a treadmill. When the monkeys fell asleep, the scientists could detect shifts in their brain waves. The whole apparatus runs for over two days straight on a double-A battery.

Yin et al IEEE Trans Biomed Circuits Syst. Apr 2013; 7(2): 115–128. doi: 10.1109/TBCAS.2013.2255874

For now, this device will probably be most useful to researchers who study the behavior of animals. But Nurmikko and his colleagues are also learning lessons for the next generation of brain-machine interfaces for people. In another promising line of research, they have designed a prototype of a fully implantable device. The electrodes go in the brain, while the power source and transmitter sit atop the skull, below the scalp. In the future, scientists may be able to make new devices that take advantage of both studies–implants that can be sealed in the head, transmit a lot of data wirelessly, run efficiently on a long-lasting battery, and not heat up the way electronics sometimes do.

Meanwhile, at the other end of the interface, Scheuermann has been testing out a new and improved robot arm. The Pittsburgh team programmed four different positions that the hand could take, such as pinching the index and thumb together. The researchers had no idea if all those extra degrees of freedom would be too much for their interface to handle. Could it pick out signals in Scheuermann’s brain that were meaningful enough to make full use of the arm’s range of motion?

To train Scheuermann, the scientists had her start her practice on a virtual robot arm, which she used to grab virtual objects on a computer screen. The computer system learned how to recognize certain patterns of neuron signals as commands to change the shape of the robot hand. At the same time, Scheuermann’s own brain became more adept at controlling the robot arm, producing stronger signals. Finally, the scientists had Scheuermann try to pick up a number of different objects. Here’s a sampling of her successes:

Scheuermann, as the scientists had hoped, learned how to manage her new arm. It wasn’t a perfect education. Scheuermann sometimes failed to grab objects, and the scientists never managed to record a success on certain tasks, such as pouring water from one glass into another.

Still, the results were encouraging–and sometimes intriguing. The scientists found some groups of neurons that would fire together in distinctive patterns as Scheuermann moved the arm through all ten dimensions. In other words, these neurons weren’t limited to just bending the elbow or pinching a thumb. In the future, it may be possible to harness these flexible signals to make the arms even more proficient, and to fill the communication gap even more.

Flying Through Inner Space

It’s hard to truly see the brain. I don’t mean to simply see a three-pound hunk of tissue. I mean to see it in a way that offers a deep feel for how it works. That’s not surprising, given that the human brain is made up of over 80 billion neurons, each branching out to form thousands of connections to other neurons. A drawing of those connections may just look like a tangle of yarn.

As I wrote in the February issue of National Geographic, a number of neuroscientists are charting the brain now in ways that were impossible just a few years ago. And out of these surveys, an interesting new way to look at the brain is emerging. Call it the brain fly-through. The brain fly-through only became feasible once scientists started making large-scale maps of actual neurons in actual brains. Once they had those co-ordinates in three-dimensional space, they could program a computer to glide through it. The results are strangely hypnotic.

Here are three examples, from the small to the big. (Click on the cog-wheel icon if you can to make sure you’re watching them at high resolution.)

First is a video from a project called Eyewire. Volunteers play a game to map the structure of individual neurons. Here are a handful of neurons from the retina of a mouse. (More details about the video can be found here.)

The second video is a flight through the entire brain of a mouse, made possible by a new method called CLARITY. This method involves first adding chemicals to the brain to wash out the lipids and other chemicals that give it color. The brain is rendered transparent, even though its neurons remain intact.

Next, scientists douse the brain with compounds that only latch onto certain types of neurons, lighting them up. The researchers can then take pictures of the brain from different angles and combine them into a three-dimensional representation of the brain in which you can distinguish individual neurons. In this video, from the lab of Karl Deisseroth at Stanford University, a very common type of neuron is colored. Flying through the brain, we can start to get a feel for the large-scale connections that stretch across it.

And finally, we come to the newest method–one that didn’t even exist when I was working on my article. Adam Gazzaley of the University of California at San Francisco and his colleagues have made it possible to fly through a representation of a thinking human brain–as it thinks.

Here’s how they built this fly-through, which they call the Glass Brain. First, they gave volunteers a high-resolution MRI scan to get a very detailed picture of the overall shape of their brain. MRI doesn’t let you see individual neurons, but it does mark out the major structures of the brain in fine detail.

Next, they added in more anatomy with a method called diffusion tensor imaging (DTI for short). To use it, scientists reprogram MRI scanners to measure the jostling of water molecules inside neurons. Many of the brain’s neurons sit in its outer layers, extending long fibers across the inner regions to link up with the outer layers at a distant spot. Many of these fibers are bundled together into pathways. Because the water molecules inside a fiber jostle back and forth along its length, scientists can use their movement to reconstruct the pathway’s shape.

The combination of MRI and DTI gave Gazzaley and his colleagues both the structures of the brain and the pathways connecting them, all lined up in the same three-dimensional space.

Now came the third ingredient: recordings of the brain’s activity. Gazzaley used EEG, a method that involves putting a cap of electrodes on someone’s head and measuring the electrical activity that reaches from the brain up through the skull to the scalp.

EEG is very fast, measuring changes in brain activity at a resolution of a tenth of a second or less. The drawback to EEG is that it’s like trying to eavesdrop on people in the next room over. A lot of detail gets blurred away as the signals travel from their source. To reconstruct the brain’s inner conversations, Gazzaley and his colleagues programmed a computer to solve mathematical equations that allow it to use the scalp recordings to infer where in the brain signals are coming from. Their program also measured how synchronized signals in different regions were with each other. Combining this information with their map of the brain’s pathways, the scientists could reconstruct how signals moved across the brain.
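The details of Gazzaley’s algorithms aren’t spelled out here, but the core task, inferring many brain sources from a few scalp electrodes, is a classic ill-posed inverse problem. Here is a minimal sketch of one standard approach (a regularized minimum-norm estimate), with a random, made-up “leadfield” standing in for a real head model:

```python
# Minimal sketch of EEG source estimation, not Gazzaley's pipeline.
# Scalp signals y are modeled as a known leadfield matrix L (how each
# brain source projects to each electrode) times unknown sources x.
# A regularized least-squares (minimum-norm) estimate recovers x.
import numpy as np

rng = np.random.default_rng(0)
n_electrodes, n_sources = 64, 500
L = rng.standard_normal((n_electrodes, n_sources))  # stand-in head model
x_true = np.zeros(n_sources)
x_true[[10, 250]] = 1.0                             # two active sources
y = L @ x_true + 0.01 * rng.standard_normal(n_electrodes)

lam = 1.0  # regularization strength; trades detail against noise
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_electrodes), y)
print(np.argsort(np.abs(x_hat))[-2:])  # strongest estimates; should land near 10 and 250
```

Because many source patterns can explain the same scalp recording, the regularization is what makes the answer unique; that blurring is the price of eavesdropping from the next room over.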

And here’s a video of what they ended up with. In this case, the volunteer was simply asked to open and shut her eyes and open and close her hand.

As gorgeous as this is simply as a video, there’s more to it. It didn’t take Gazzaley’s computer weeks to crunch all the data from the experiment, calculate the sources of the EEG signals and map them onto the brain. The system can create this movie in real time.

Imagine, if you will, putting on an EEG cap and looking at a screen showing you what’s happening in your brain at the moment you’re looking at it. That’s what this system promises.

I called Gazzaley to get the details of this new view of the brain. It took him and his team a year to build and to validate it–that is, to make sure that the patterns in the video have the same features that well-studied imaging technologies have found in the brain. Now Gazzaley hopes to start using it to record data during experiments and to test some prominent ideas about how the brain processes information.

And this imaging may be useful outside the lab. Gazzaley and his colleagues recently designed a video game that improved the cognition of older people. It may be possible to incorporate their new brain display into a game, allowing people to try to alter their brain activity through a kind of neuro-feedback.

Just recently, Gazzaley got another idea. He put an EEG cap on a colleague and then pushed the output to a set of Oculus Rift virtual reality goggles. Gazzaley put the goggles on and then used an Xbox joystick to fly through his colleague’s brain, which he could look at all around him in three dimensions.

“I had never seen a brain inside out before,” Gazzaley told me. “After that I couldn’t get back to work. I had to lay on the grass for a while.”

Tomorrow I will be speaking about brain mapping in Rochester, New York, in their Arts & Lectures series. You can get information about tickets here.

The Phantom Piano

When the brain goes awry, it can reveal to us some clues to how it works in all of us. In my latest “Matter” column for the New York Times, I look at a rare but fascinating disorder that causes people to hallucinate music. How someone could imagine that a piano was playing nearby–or a marching band or church choir–may tell us something about how our brains make sense of the world by making predictions about what comes next. Check it out.

Let Us Take A Walk In the Brain: My Cover Story For National Geographic

Some of the white-matter connections in my brain. (Thanks to Van Wedeen and colleagues at the Martinos Center for Biomedical Imaging)

Over the past year, I’ve spent a lot of time around brains. I’ve held slices of human brains preserved on glass slides. I’ve gazed through transparent mouse brains that look like marbles. I’ve spent a very uncomfortable hour having my own brain scanned (see the picture above). I’ve interviewed a woman about what it was like for her to be able to control a robot arm with an electrode implanted in her brain. I’ve talked to neuroscientists about the ideas they’ve used their own brains to generate to explain how the brain works.

This has all been part of my research for the cover story in the current issue of National Geographic. You can find it on the newsstands, and you can also read it online.

On Monday, I was interviewed on KQED about the story, and you can find the recording here.

National Geographic has been doing a lot of interesting work to adapt their magazine stories for the web and tablets. For my story, the great photographs from Robert Clark are accompanied by some fine video.

Here’s one of my favorites–an interview with Jeff Lichtman, a neuroscientist at Harvard. He’s one of the people I interviewed for the story, and it was an inescapable torture to have to boil down our conversation to fit there. In this video, an unboiled Lichtman talks about his project to see everything in the brain, with some of the mind-blowing visualizations he and his colleagues have created. I think these images are the clearest proof of just how big a task neuroscientists have taken on in trying to map the brain and understand how it works.

On Dolphins, Big Brains, Shared Genes and Logical Leaps

In 2012, a team of Chinese scientists showed that a gene called ASPM has gone through bouts of accelerated evolution in two very different groups of animals—whales and dolphins, and ourselves.

The discovery made a lot of sense. Many earlier studies had already shown that ASPM is one of several genes that affect brain size in primates. Since our ancestors split apart from chimps, our version of ASPM has changed with incredible speed and shows signs of intense adaptive evolution. And people with faults in the gene develop microcephaly—a developmental disorder characterised by having a very small brain. Perhaps this gene played an important role in the evolution of our big brains.

It seems plausible that it did something similar in whales and dolphins (cetaceans). They’re also very intelligent, and their brains are very big. Compared to a typical animal of the same size, dolphin brains are 4-5 times bigger than expected, and ours are 7 times bigger than expected. The Chinese team, led by Shixia Xu, concluded that “convergent evolution might underlie the observation of similar selective pressures acting on the ASPM gene in the cetaceans and primates”.

It made for a seductive story. I was certainly seduced. In my uncritical coverage of the study, I wrote: “It seems that both primates and cetaceans—the intellectual heavyweights of the animal world—could owe our bulging brains to changes in the same gene.”

Many other scientists were sceptical—check out the comments in my original post—and it seems they were right to be. Three British researchers—Stephen Montgomery, Nicholas Mundy and Robert Barton—have now published a response to Xu’s analysis, and found it wanting. “It’s a completely plausible hypothesis but they didn’t test it very well,” says Montgomery.

In the original paper, Xu’s team looked at how ASPM has changed in 14 species of cetaceans and 18 other mammals, including primates and hippos. ASPM encodes a protein, and some changes in the gene don’t affect the structure of the protein. These “synonymous mutations” are effectively silent. Other “non-synonymous mutations” do change the protein and can lead to dramatic effects (like microcephaly). The Chinese team claimed that a few cetacean families had a high ratio of non-synonymous to synonymous mutations in ASPM—a telltale sign of adaptive evolution.
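That ratio test is usually written as dN/dS: the rate of protein-changing substitutions over the rate of silent ones, with values well above 1 taken as a sign of positive selection. Here is a toy version of the counting step, on invented sequences (real analyses also correct for multiple hits and the number of possible sites of each kind):

```python
# Toy dN/dS-style count: walk two aligned coding sequences codon by
# codon and classify each difference as synonymous (protein unchanged)
# or non-synonymous (protein altered). The sequences and the tiny
# codon table are invented for this demo.

CODON = {"GCT": "A", "GCC": "A", "AAA": "K", "AAG": "K",
         "ATG": "M", "TTT": "F", "TAT": "Y"}

def classify(seq1: str, seq2: str) -> tuple[int, int]:
    n = s = 0
    for i in range(0, len(seq1), 3):
        c1, c2 = seq1[i:i+3], seq2[i:i+3]
        if c1 != c2:
            if CODON[c1] == CODON[c2]:
                s += 1  # silent change
            else:
                n += 1  # protein-altering change
    return n, s

n, s = classify("ATGGCTAAATTT", "ATGGCCAAGTAT")
print(f"non-synonymous: {n}, synonymous: {s}")  # -> 1 and 2
```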

But Montgomery’s team had two problems with this conclusion. First, it’s statistically weak. Second, it’s not unique to cetaceans. Xu’s team largely looked at brainy groups like cetaceans and primates, but the British trio found exactly the same signature of selection in other mammals, including those with average-sized brains. “It looks like ASPM evolved adaptively in all mammals,” says Montgomery. “It could be that ASPM is a general target of selection in episodes of brain evolution and isn’t specific to large brains.”

Xu’s team also failed to check if the changes they found in ASPM were actually related to differences in cetacean brains. If the gene is changing quickly under the auspices of natural selection, does that translate to equally fast changes in brain size? The Chinese team never explicitly addressed that question. Montgomery’s team did, and their answer was a resounding no.

“We felt a little bad picking on them because it’s quite a common problem,” says Montgomery. “People pick a gene to analyse because it’s linked to something interesting. They find that it’s got this pattern of evolution, and they infer that it’s doing what they thought it was doing. It’s a circular argument.”

“These analyses need to be followed up with experimental work (if that is possible) or treated with caution if not,” says Graham Coop from University of California, Davis. “At best, such studies can only act to generate hypotheses about the role of a particular gene in phenotypic evolution”. That’s because most genes do many jobs, “and we are profoundly ignorant of many of these roles and how they differ across organisms.”

ASPM, for example, isn’t a “brain gene”. It creates molecular structures that help cells to divide evenly. It’s activated in the embryonic cells that make neurons, so if it’s not working properly, fewer neurons are made and individuals end up with small brains. But ASPM is also activated in other parts of the body.

As Vincent Lynch pointed out in a comment to my earlier post, ASPM affects the development of the testes:

“This brain-testis connection was described by Svante Pääbo’s lab. They swapped the mouse and human ASPM genes, I assume hoping to breed a super-intelligent strain of mice, and surprisingly found that nothing happened. Bummer… But rather than uncovering a role for ASPM as a causal agent of increased brain size in the human lineage, these authors found ASPM was required for male fertility (yes, the jokes are obvious) and suggested that the signal of selection observed in humans and other primates is likely related to its role in the testis. It is an old observation that many testis-expressed genes evolve rapidly, many under some form of positive selection.”

So, maybe ASPM’s fast evolution in primates is more a story about nuts than noggins. Then again, Montgomery’s team have indeed found that changes in primate ASPM are related to differences in the size of their brains but not their testes.

These conflicting results illustrate just how important it is to test hypotheses carefully, rather than finding bits of evidence that look nice together, and uniting them through conjecture. It’s a valuable cautionary note to both scientists and journalists alike.

Reference: Montgomery, Mundy & Barton. 2013. ASPM and mammalian brain evolution: a case study in the difficulty in making macroevolutionary inferences about gene–phenotype associations. Proceedings of the Royal Society B http://dx.doi.org/10.1098/rspb.2013.1743

“The Dark Matter of Psychiatric Genetics”

The title of my blog post is provocative, I know, but I’m actually just lifting it from the title of a new commentary in the journal Molecular Psychiatry by Thomas Insel, the director of the National Institute of Mental Health. In his piece, Insel expresses his excitement about a new way of thinking about how genes can contribute to our risk of psychiatric disorders such as schizophrenia. It’s based on an emerging understanding of the human genome that I explored in a recent story for the New York Times: each of us does not carry around a single personal genome, but many personal genomes.

When we start out as a single fertilized egg, we have a single genome. When the cell divides in two, there’s a tiny chance that any spot in the DNA will mutate. Over many divisions, the copies of that original genome accumulate mutations and become different from one another. Scientists only now have the tools to dig into this so-called mosaicism and see how different our genomes can become.
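As a cartoon of how that divergence plays out, imagine two lineages of cells descending from the same fertilized egg, each picking up a few random mutations per division (the numbers below are arbitrary):

```python
# Toy model of mosaicism: each division copies the genome and adds a
# few new mutations, so two lineages from the same egg drift apart.
# Genome size and mutation counts are arbitrary illustration values.
import random

random.seed(1)
GENOME_SITES = 1_000_000
MUTS_PER_DIVISION = 3  # assumed average, purely for illustration

def divide(genome: frozenset) -> frozenset:
    new = set(genome)
    for _ in range(MUTS_PER_DIVISION):
        new.add(random.randrange(GENOME_SITES))  # mark a newly mutated site
    return frozenset(new)

egg = frozenset()  # the fertilized egg: zero accumulated mutations
lineage_a = lineage_b = egg
for _ in range(30):  # 30 divisions down each of two separate lineages
    lineage_a = divide(lineage_a)
    lineage_b = divide(lineage_b)

print(len(lineage_a ^ lineage_b), "sites differ between the two lineages")
```

After a few dozen divisions, the two lineages differ at well over a hundred sites in this toy model, even though both are “the same person’s genome.”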

Scientists have long known that mosaicism can be important for cancer, but it’s only recently that experts on other diseases have thought about it. Insel clearly has turned his mind in its direction. As he notes in his commentary, a number of studies have implicated genes in the risk of conditions such as autism. But the picture is still murky, as reflected by the fact that among identical twins, it’s often the case that one sibling will develop a mental disorder and the other will not.

Part of the solution to this mystery, he suggests, is that the brain is a mosaic.

“The brain’s genome or more accurately genomes, may prove to be even stranger than we have imagined,” Insel writes.

What might be happening is this: when embryos are developing, the neurons of the brain are growing and dividing. A neuron may acquire a mutation, which it then passes down to daughter neurons. That new mutation alters how those neurons work and makes a person prone to developing a particular mental disorder. But you wouldn’t know that this mutation is playing a role if you just took a cheek swab from a patient and sequenced the DNA from the cells you retrieved. The mutation you need to see is locked away in the brain.

Scientists have already linked these late-arising mutations to a few brain disorders. One is hemimegalencephaly, in which one side of the brain becomes bigger than the other. Even though only a few percent of the neurons in the brain carry the mutation, they can still trigger large-scale changes to half of the brain. Some disorders seem to require a one-two punch, in which a child inherits a mutation from a parent, and then a new mutation arises on top of that in the brain.

Insel suspects that some mental disorders may have a similar origin. For example, males are more likely than females to develop most neurodevelopmental disorders. That may be because they’re especially vulnerable to late-arising mutations. While females have two X chromosomes, males have only one, the second X being replaced by a Y. If a mutation arises on the X chromosome as a male embryo develops, there isn’t a healthy back-up on another X chromosome to compensate.

As promising as this line of research may be, however, it won’t be easy to search for the brain’s mosaic. Cheek swabs won’t do. Scientists will need to look at individual neurons in the brain. As Insel notes, technology for probing single cells is improving enormously. But there’s still a needle-in-the-haystack quality to such a search. And the raw material for this kind of search is hard to come by. You can’t grab a few neurons from a living person with the ease that you can get cheek cells. You need autopsied brains donated to science.

So it’s unlikely that doctors would actually run a brain mutation test on patients to search for this mosaicism. Instead, understanding the mosaic brain could offer a more general insight: by identifying the late-arising mutations that lead to mental disorders, scientists will better understand their biology. And that knowledge could, some day, lead to better treatments.


How Our Minds Went Viral

Did viruses help make us human? As weird as it sounds, the question is actually a reasonable one to ask. And now scientists have offered some evidence that the answer may be yes.

If you’re sick right now with the flu or a cold, the viruses infecting you are just passing through. They invade your cells and make new copies of themselves, which burst forth and infect other cells. Eventually your immune system will wipe them out, but there’s a fair chance some of them may escape and infect someone else.

But sometimes viruses can merge into our genomes. Some viruses, for example, hijack our cells by inserting their genes into our own DNA. If they happen to slip into the genome of an egg, they can potentially get a new lease on life. If the egg is fertilized and grows into an embryo, the new cells will also contain the virus’s DNA. And when that embryo becomes an adult, the virus has a chance to move into the next generation.

These so-called endogenous retroviruses are sometimes quite dangerous. Koalas, for example, are suffering from a devastating epidemic of them. The viruses are spreading both on their own from koala to koala and from parents to offspring. As the viruses invade new koala cells, they sometimes wreak havoc on their host’s DNA. If a virus inserts itself in the wrong place in a koala cell, it may disrupt its host’s genes. The infected cell may start to grow madly, and give rise to cancer.

If the koalas manage to survive this outbreak, chances are that the virus will become harmless. Their immune systems will stop their spread from one host to another, leaving only the viruses in their own genomes. Over the generations, mutations will erode their DNA. They will lose the ability to break out of their host cell. They will still make copies of their genes, but those copies will only get reinserted back into their host’s genome. But eventually they will lose even this feeble ability to replicate.

We know this is the likely future of the koala retroviruses, because we can see it in ourselves. Viruses invaded the genomes of our ancestors several times over the past 50 million years or so, and their viral signature is still visible in our DNA. In fact, we share many of the same stretches of virus DNA with apes and monkeys. Today we carry half a million of these viral fossils, which make up eight percent of the human genome. (Here are some posts I’ve written about endogenous retroviruses.)

Most of this viral DNA is just baggage that we hand down to the next generation. But sometimes mutations can transform viral DNA into something useful. Tens of millions of years ago, for example, our ancestors started using a virus protein to build the placenta.

But proteins aren’t the only potentially useful parts that we can harvest from our viruses.

Many human genes are accompanied by tiny stretches of DNA called enhancers. When certain proteins latch onto the enhancer for a gene, they start speeding up the production of proteins from it. Viruses that infect us have enhancers, too. But instead of causing our cells to make more of our own proteins, these virus enhancers cause our cells to make more viruses.

But what happens when a virus’s enhancer becomes a permanent part of the human genome? Recently a team of scientists carried out a study to find out. They scanned the human genome for enhancers from the youngest endogenous retroviruses in our DNA. These viruses, called human-specific endogenous retroviruses, infected our ancestors at some point after they split off from chimpanzees some seven million years ago. We know this because these viruses are in the DNA of all living people, but missing from other primates.

Once the scientists had cataloged these virus enhancers, they wondered if any of them were now enhancing human genes, instead of the genes of viruses. If that were the case, these harnessed enhancers would need to be close to a human gene. The scientists found six such enhancers.
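The proximity test in that last step is conceptually simple. Here is a sketch of it, with invented coordinates and an invented distance cutoff (real scans work from genome annotations, with tools such as BEDTools):

```python
# Sketch of the "is this viral enhancer near a human gene?" filter.
# Positions (base pairs along a chromosome) and the 50 kb window are
# invented for illustration.

WINDOW = 50_000  # base pairs

enhancers = {"ERV_enh_1": 1_200_000, "ERV_enh_2": 4_830_000}
genes = {"PRODH": 4_810_000, "OTHER_GENE": 9_000_000}

for enh, pos in enhancers.items():
    for gene, gene_pos in genes.items():
        if abs(pos - gene_pos) <= WINDOW:
            print(f"{enh} lies within {WINDOW:,} bp of {gene}")
# -> ERV_enh_2 lies within 50,000 bp of PRODH
```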

Of these six enhancers, however, only one showed signs of actually boosting the production of the nearby gene. Known as PRODH, it encodes an enzyme that’s involved in making signaling molecules in the brain. And if the enzyme isn’t working properly, the brain can go awry.

In 1999, scientists shut down the PRODH gene in mice and found a striking change in their behavior. They ran an experiment in which they played a loud noise to the mice at random times. Then they started playing a soft tone just before the noise. Normal mice learn to connect the two sounds, and they become less startled by the loud noise. But mice without PRODH remained as startled as ever.

Other researchers have also found evidence for the importance of PRODH in the human brain. In some studies, mutations to the gene have been linked to schizophrenia, for example. (One study has failed to find that link, though.) A mutation that deletes the PRODH gene and its surrounding DNA has been linked to a rare psychiatric disorder, called DiGeorge syndrome.

Once the scientists had found the virus enhancer near PRODH, they took a closer look at how it works in human cells. As they report in the Proceedings of the National Academy of Sciences this week, they searched for the activity of PRODH in tissue from human autopsies. PRODH is most active in the brain–and most active in a few brain regions in particular, such as the hippocampus, which organizes our memories.

The new research suggests that the virus enhancer is partly responsible for PRODH becoming active where it does. Most virus enhancers in our genome are muzzled with molecular caps on our DNA. That’s probably a defense to keep our cells from making proteins willy-nilly. But in the hippocampus and other regions of the brain where PRODH levels are highest, the enhancer is uncapped. It may be left free to boost the PRODH gene in just a few places in the brain.

The scientists also found one protein that latches onto the virus enhancer, driving the production of PRODH proteins. And in a striking coincidence, that protein, called SOX2, is also produced at high levels in the hippocampus.

What makes all this research all the more provocative is that this situation appears to be unique to our own species. Chimpanzees have the PRODH gene, but they lack the virus enhancer. They produce PRODH at low levels in the brain, without the intense production in the hippocampus.

Based on this research, the scientists propose a scenario. Our ancestors millions of years ago were infected with a virus. Eventually it became lodged in our genome. At some point, a mutation moved the virus enhancer next to the PRODH gene. Further mutations allowed it to help boost the gene’s activity in certain areas of the brain, such as the hippocampus.

The scientists can’t say how this change altered the human brain, but given what we know about brain disorders linked to the PRODH gene, it could have been important.

It’s always important to approach studies on our inner viruses with some skepticism. Making a compelling case that a short stretch of DNA has an important function takes not just one experiment, but a whole series of them. And even if this enhancer does prove to have been one important step in the evolution of the human brain, our brains are also the result of many other mutations of a far more conventional sort.

Still, the intriguing possibility remains. Perhaps our minds are partly the way they are today thanks to an infection our ancestors picked up a few million years ago.

[For more on the mighty influence of these tiny life forms, see my book A Planet of Viruses.]

Our Speckled Brains

It’s not exactly true to say that each of us has our own genome. We have genomes. Some of us, known as chimeras, have genomes from more than one person. The cells of children linger behind in their mothers; in the womb, cells from twins can intermingle. The rest of us non-chimeras can trace our genomes to one origin–the fertilized egg from which we developed. But as the cells in our bodies divided, they sometimes mutated, creating a panoply of genetic variation known as mosaicism.

I wrote about chimeras and mosaics in September in the New York Times. My article was a status report of sorts. Scientists have known about our many genomes for decades. But with the advent of single-cell genome sequencing, they’re now learning some surprising things about our genetic multitudes. As a status report, my story was far from the final word. And now, just a couple months later, a new study has come out that sheds more light on a place where our mosaic nature can have huge consequences: our brains.

For a long time, scientists who study mosaicism have focused their attention on its dark side. In the 1960s, for example, scientists recognized that cancer cells were the result of our mosaic nature. Mutations arose in a line of cells, and eventually those mutations drove the cells to grow quickly and develop into tumors. Since then, mosaicism research has continued to revolve around diseases. A number of rare diseases such as hemimegalencephaly–in which one side of the brain is bigger than the other–have been traced to mutations that arise in developing cells.

This is important research, but it risks providing a lopsided view of our mosaic nature. We are left to wonder how many genomes a healthy person can have. Scientists have started to shift their attention from disease to health, and they’re finding that we can carry a surprisingly large amount of variation with no apparent ill effect. In the latest issue of Science, Fred Gage of the Salk Institute for Biological Studies and his colleagues provide a deep look into the mosaic nature of healthy brains.

First, they watched the brain’s mosaic emerge. They grew three colonies of human stem cells, rearing each of them in a broth of nutrients. Mixed into that broth were chemicals that coaxed the stem cells to develop into neurons. The scientists then plucked out 40 of these neurons and analyzed their genomes. Thirteen of the 40 cells had changed markedly from their ancestors. Some had accidentally gained an extra copy of a chromosome, while others had gained extra copies of smaller chunks of DNA. In other neurons, chunks of DNA had been chopped out. The changes were never the same, which meant that they had originated separately.

The scientists then turned their attention to real brains. They took tissue samples from three healthy people who had died in their twenties in accidents. From those samples, the scientists isolated 110 neurons and surveyed their genomes. In those neurons, they found a pattern similar to the one they saw in their dishes of stem cells. Forty-five out of the 110 neurons had either extra copies of DNA or missing segments. Again, none of the neurons shared the same mutations. That finding makes it unlikely that the mutations arose in a single cell early in development. Instead, new mutations kept emerging as the brains matured and their cells kept dividing.
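Those counts are worth a rough sanity check. Here’s a back-of-the-envelope Python sketch that attaches an approximate error bar to the two fractions; the normal-approximation interval is a textbook shortcut I’ve chosen for illustration, not a method from the paper.

```python
# A rough error bar for the reported fractions, using a textbook
# normal approximation to the binomial confidence interval.
from math import sqrt

def approx_ci(hits, total, z=1.96):
    """Observed fraction plus an approximate 95% interval."""
    p = hits / total
    half = z * sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

counts = [
    ("neurons grown from stem cells", 13, 40),
    ("neurons from autopsied brains", 45, 110),
]

for label, hits, total in counts:
    p, lo, hi = approx_ci(hits, total)
    print(f"{label}: {hits}/{total} = {p:.0%}, 95% CI roughly {lo:.0%} to {hi:.0%}")
```

Even at the low ends of those intervals (about 18 and 32 percent), a sizable minority of neurons carry large-scale genetic changes.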

Far from being a rare, dangerous fluke, in other words, mosaic neurons turn out to be abundant in our brains. The figure at the bottom of this post shows how this new study expands our understanding of how we become mental mosaics.

With so much mutating going on in our brains, it may be hard to believe that our brains can work at all. In a commentary accompanying the paper, Evan Macosko and Steven McCarroll of Harvard sketch out some defenses our brains may have against this genomic messiness. For one thing, mutations tend to emerge in the parts of the genome that a cell uses least. So many of the mutations that Gage and his colleagues have discovered may affect genes that don’t matter in the brain anyway.

Even if a mosaic neuron does turn out to be defective, the brain may have ways to prevent it from causing much trouble. When the brain develops, it starts by producing an abundance of connections between its neurons. Only later does it then prune many of those connections back. The brain may take its pruning shears to defective mosaic neurons with particular vigor, cutting them off from conversations with other cells.

It’s even possible that those misfit neurons can let our brains perform in new ways, Macosko and McCarroll suggest. The brain may not just tolerate diversity. It may depend on it.

A: If a cell mutates very early in development, its descendants will be found across much of the body. B: A mutation that arises later in the brain and causes cells to proliferate may be easily detected. C: A subtler mosaic forms when neurons experience unique, late-developing mutations. From Macosko & McCarroll, Science 2013

Mouseunculus: How The Brain Draws A Little You

Inside each of us is a miniature version of ourselves. The Canadian neurosurgeon Wilder Penfield discovered this little person in the 1930s, when he opened up the skulls of his patients to perform brain surgery. He would sometimes apply a little electric jolt to different spots on the surface of the brain and ask his patients–still conscious–to tell him if they felt anything. Sometimes their tongues tingled. Other times their hands twitched. Penfield drew a map of these responses. He ended up with a surreal portrait of the human body stretched out across the surface of the brain. In a 1950 book, he offered a map of this so-called homunculus.

Wilder Penfield’s homunculus. Source: http://cercor.oxfordjournals.org/content/23/5/1005.ful

For brain surgeons, Penfield’s map was a practical boon, helping them plan out their surgeries. But for scientists interested in more basic questions about the brain, it was downright fascinating. It revealed that the brain organized the sensory information coming from the skin into a body-like form.

There were differences between the homunculus and the human body, of course. It was as if the face had been removed from the head and moved just out of reach. The area that each body part took up in the brain wasn’t proportional to its actual size. The lips and index finger were gigantic, for instance, while the forearm took up less space than the tongue.

That difference in our brains is reflected in our nerve endings. Our fingertips are far more sensitive than our backs. That’s because we don’t need to make fine discriminations with our backs, while we use our hands for all sorts of things–like picking up objects or using tools–that demand sensory power.

The shape of our sensory map reflects our evolution as bipedal tool users. When scientists have turned to other species, they’ve found homunculi of different shapes, the results of their different evolutionary paths. This picture, taken from my book The Tangled Bank, shows three subterranean mammals: a mole, a naked mole rat, and a star-nosed mole.

From The Tangled Bank, 2nd ed., Carl Zimmer, Roberts & Company, 2013

The top row shows their actual body shape, and the bottom row shows the relative amount of space on the sensory map devoted to each part. The expanded parts reflect the kinds of sensory information they gather. Moles dig with their hands, for example, to search for worms and other prey. They can’t rely on their sight in the dark; instead, they use their sense of smell, their whiskers, and the sensitive skin on their nose. Naked mole rats, on the other hand, use their teeth instead of their hands to dig. Star-nosed moles, finally, have evolved a bizarre hand-like structure on their noses, which they use to quickly probe the soft mud that they dig through.

Dennis O’Leary, a biologist at the Salk Institute for Biological Studies, and his colleagues have spent the past few years investigating how the sensory map takes shape. They have studied mice, which have a sensory map finely tuned to their own way of life as nocturnal rodents that search for food aboveground. The mice depend on their whiskers to let them know about their surroundings. Each whisker is surrounded by a dense cluster of nerve endings, all of which feed information into the brain.

Here’s a drawing of the mouse in its actual proportions, with the whiskers and other regions of its body highlighted.

The sensory map of a mouse. Courtesy of Andreas Zembrzycki and Jamie Simon, Salk Institute for Biological Studies; from Zembrzycki et al., Nature Neuroscience, August 2013, doi:10.1038/nn.3454

And here is a picture of the mouse’s sensory map–what O’Leary and his colleagues have nicknamed the mouseunculus:

Courtesy of Andreas Zembrzycki and Jamie Simon, Salk Institute for Biological Studies; from Zembrzycki et al., Nature Neuroscience, August 2013, doi:10.1038/nn.3454

Reflecting their dependence on their whiskers, the sensory map is dominated by clusters of neurons that process whisker signals. Each of those clusters, called a barrel, is bigger than the cluster of neurons dedicated to the mouse’s entire foot.

The mouseunculus–or any other mammal’s sensory map–does not instantly take shape in the embryonic brain. It requires experience to grow. Before birth and afterwards, the nerves in the skin deliver signals into the brain. Those signals stimulate the growth of new neurons, as well as the emergence of connections between the cells. If the brain can’t get signals from part of its body–perhaps due to a birth defect that leads to the loss of a limb, say, or due to nerve damage–the sensory map will develop abnormally.

Does that mean that our sensory maps depend for their existence simply on the signals they receive from the skin? O’Leary and his colleagues have found that the answer to this question is no, as they report in the new issue of Nature Neuroscience. A mammal’s genes also help guide the cartographer’s pen.

The new study builds on O’Leary’s earlier discovery that mutations to certain genes alter the structure of the cortex, the thin outer layers of the brain where its most sophisticated information processing takes place. O’Leary and his colleagues decided to look at how one of those genes, called Pax6, influenced the development of the sensory map in particular.

To find out, they developed an intricate technique to shut down Pax6 only in the sensory map and nowhere else in the brain of mouse embryos. The mice were born healthy, and were able to feed their sensory maps with their typical diet of information.

A week after the mice were born, the scientists followed in Penfield’s steps and mapped the mouseunculus. And here’s what they saw:

Courtesy of Andreas Zembrzycki and Jamie Simon, Salk Institute for Biological Studies; from Zembrzycki et al., Nature Neuroscience, August 2013, doi:10.1038/nn.3454

The map nicely shows that without Pax6, many of the barrels only grew to a small portion of their normal size, in some cases ending up 80% smaller than in mice with Pax6 switched on in the sensory map. A few barrels never developed at all.

Clearly, O’Leary’s research shows, genes play a big part in building an accurate sensory map. But they do more. Their effects ripple downwards, to earlier steps in the information pathway through the brain.

This diagram shows some stops along that pathway. The signals from the body travel to the back of the brain, and then forward to a structure deep in the brain called the thalamus, before finally reaching the sensory map. At each step, the neurons that process the signals are also organized in maps that correspond to the mouse’s body. There are distinct clumps of neurons for each whisker, called barrelettes or barreloids depending on which part of the brain they’re found in.

The map in the thalamus. Image from Zembrzycki et al., Nature Neuroscience, August 2013, doi:10.1038/nn.3454
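For readers trying to keep the three sets of names straight, here is that relay as a tiny Python lookup table. Assigning the barrelettes to the brainstem and the barreloids to the thalamus follows standard anatomy rather than anything stated above; the code is purely illustrative.

```python
# The whisker-signal relay, one stage per entry. Each stage holds its
# own map of the body, with a named cluster of neurons per whisker.
PATHWAY = [
    ("brainstem", "barrelettes"),
    ("thalamus", "barreloids"),
    ("cortex (the sensory map)", "barrels"),
]

for stage, clusters in PATHWAY:
    print(f"whisker signal -> {stage}: clusters called {clusters}")
```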

O’Leary and his colleagues found that when they shut down Pax6 in the sensory map, it wasn’t just the sensory map that changed shape. The thalamus changed too. As with the sensory map, the thalamus’s barreloids shrank or disappeared. O’Leary and his colleagues’ research indicates that the thalamus depends on signals from the sensory map to develop normally. Without those signals, some of the developing neurons in the thalamus die.

The sensory map and the neurons that feed it data turn out to be entangled in an intimate conversation. Signals rising from the skin shape the map, while the genes in the map’s neurons influence it as well–and their influence extends downward into the pathway. This dialogue may be crucial for fine-tuning the entire nervous system, so that we develop sensory maps and sensory neurons that match each other tightly.

Like Penfield’s original map, O’Leary’s research illuminates some of the fundamental questions about how the sensory map works, and it may likewise turn out to offer some practical benefits. Mutations to genes like Pax6 may alter the sensory map, and the disruption may extend downstream to the thalamus. Those mutations may play a role in brain disorders like autism.

As O’Leary and his colleagues point out in their new paper, previous studies on people with autism have revealed some differences in the activity of genes in the brain. In particular, the genes involved in marking off different areas of the cortex have different patterns of activity.

This change to the brain-shaping genes may explain why some scientists have found changes to the structure of the cortex. While the overall size of the cortex is the same in autistic brains and normal brains, the front portion of the cortex is enlarged in people with autism.

This shift may also mean that the region of the cortex further towards the back of the brain gets smaller. And it just so happens that our homunculus lurks back there. It’s possible, in other words, that in people with autism, the disruption of the cortex leaves them with a smaller sensory map.

If the sensory map is indeed smaller in people with autism, the effects might radiate outward to the thalamus. A recent study on the brains of 17 autistic people revealed that they did indeed have a smaller thalamus on average.

If this hypothesis is correct–and it’s important to note that it’s still based on the study of relatively few people–it could explain why people with autism often have trouble with processing the information of their senses. And it might even point towards new ways to treat autism, by nurturing the inner homunculus.

[Update 3 pm: This post was updated to give O’Leary’s correct institution–the Salk Institute, not the Scripps Institute. Two great institutes very close both in space and in my mind.]

[Update 11 pm: The post initially stated moles are rodents. My bad.]

How Our Outside World Turned Inward

The nervous system that sprouts from the brain may seem like an incomprehensible tangle. But anatomists can divide it pretty cleanly into two parts. One part is directed to the outside world, while the other is turned inward.

The somatic nerves take in sensory information from the outside world through our eyes, nose, ears, and skin. They also relay commands to move muscles. They are essential for responding to the external world. Visceral nerves, on the other hand, detect information about our internal state. They sense blood pressure, the queasiness in our guts, even the level of oxygen in our bodies. And they also send signals to those organs, causing racing hearts, gasping lungs, and puking stomachs.

Recently, Marc Nomaksteinsky of the Institut de Biologie de l’École Normale Supérieure in Paris and his colleagues explored the evolution of this divide. They discovered evidence suggesting that it’s a profoundly ancient one.

Tracking the evolution of our nervous system is especially hard. Teeth and bones leave behind sturdy fossils that contain clues to how earlier forms gave rise to later ones–a fish’s fin becoming a foot, for example. Neurons dissolve away after death. The brain, with the consistency of custard, cannot withstand the elements and leaves behind a hollow cavity. That cavity can tell scientists about the size and shape of a brain, but not much about the function of the brain within. The holes in the skull and other bones through which nerves pass offer the skimpiest of hints about what signals they relayed in life.

Scientists can add to those skimpy hints from fossils by comparing living animals. Humans and other living primates share certain features in their brains not found in other animals. The emergence of our primate ancestors 60 million years ago was marked by a massive expansion of the visual cortex, for example.

Nomaksteinsky and his colleagues used a different method to explore the evolution of our two nervous systems. The somatic and visceral nerves in our bodies have distinctive molecular profiles. Each type makes its own combination of proteins to carry out its own particular task. Almost all the visceral nerves, for example, make a protein called Phox2b. The somatic nerves that relay sensory information, on the other hand, all make a protein called Brn3.
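The logic of that molecular fingerprinting is easy to caricature in a few lines of Python. Everything below is illustrative: the pairing of Phox2b with visceral nerves and Brn3 with somatic sensory nerves comes from the text, but the cells are made up, and real classifications rest on combinations of many markers rather than one apiece.

```python
# An illustrative caricature of the molecular-profile logic: label a
# neuron by which marker proteins it expresses. The cells here are
# invented for demonstration.

MARKER_TO_TYPE = {
    "Phox2b": "visceral",
    "Brn3": "somatic (sensory)",
}

def classify(expressed_markers):
    """Return the nerve types suggested by a cell's markers."""
    return {MARKER_TO_TYPE[m] for m in expressed_markers if m in MARKER_TO_TYPE}

cells = {
    "cell_1": {"Phox2b"},
    "cell_2": {"Brn3"},
    "cell_3": {"ActB"},  # a housekeeping gene only: uninformative
}

for name, markers in cells.items():
    labels = classify(markers) or {"unclassified"}
    print(name, "->", ", ".join(sorted(labels)))
```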

The scientists wondered if they could find neurons with these molecular profiles in distantly related animals. They chose to look at a snail and a related species called Aplysia. Both species are mollusks, which sit on a branch of the animal evolutionary tree far from our own. The common ancestor of mollusks and us lived about 600 million years ago, at an early stage in animal evolution.

Nomaksteinsky and his colleagues found versions of Phox2b, Brn3, and other markers of somatic and visceral nerves in the mollusks. What’s more, they found the two kinds of markers in two distinct sets of neurons. This is pretty remarkable when you consider how different our nervous systems are. We humans and other vertebrates have one big brain in our head, out of which sprouts a system of neurons. Mollusks have a cluster of neurons in their head, but they also have clusters in other parts of their body, all connected in what looks like a complex snarl.

But when you consider what the two kinds of neurons do in mollusks, some similarities emerge. Some of the mollusk neurons with a “somatic” profile are sensitive to touch and pain–just like some of our own somatic neurons are. Some of the mollusk neurons with a “visceral” profile control a siphon the animals use to suck in water in order to filter food. That’s the sort of function our own visceral nerves carry out with our lungs and digestive system.

These results suggest that a snail’s nervous system is split between the outer and inner worlds much like ours is. The molecular profile of their neurons suggests the split didn’t evolve independently, once in mollusks and again in vertebrates. It arose instead in our common ancestor–a small, worm-shaped creature crawling on the ocean floor.

In a commentary on the paper, Paola Bertucci and Detlev Arendt of the European Molecular Biology Laboratory speculate on how these two parts of the nervous system may have arisen. In us, the visceral system senses the inner chemistry of our bodies. But for an ocean worm 600 million years ago, this kind of information was important to sense in its external environment, too–the pH of the sea water, its saltiness, its oxygen levels, and so on. Perhaps the entire nervous system started out pointing outward. Only later did part of it evolve to tell us something about our inner world.