
People Sometimes Like Stinky Things—Here’s Why

Updated September 30, 2015

A corpse flower smells like a heady mix of rotten fish, sewage, and dead bodies. It’s a stench meant to draw flies, but just as surely, it draws tourists. Braving a blustery Chicago night, thousands of people lined up Tuesday for a whiff of a corpse flower named Alice at the Chicago Botanic Garden.

This woman shows a classic “disgusted” face in a video about the 2013 blooming of a corpse flower (see video, top).

In fact, the demand to see and smell a corpse flower is so great that botanical gardens now vie to own one. Gardeners lavish them with care, hoping to force more stinky blooms from a plant whose scent is so rare (up to a decade between flowerings) and so fleeting (eight to 12 hours) that visitors are often disappointed to miss peak stench.

But why do people want to smell the thing? The reaction is usually the same: the anticipation, the tentative sniff, then the classic scrunched-up face of disgust. And yet everyone seems happy to be there.

It turns out there’s a name for this: benign masochism.

Psychologist Paul Rozin described the effect in 2013 in a paper titled “Glad to be sad, and other examples of benign masochism.” His team found 29 examples of activities that some people enjoyed even though, by all logic, they shouldn’t. Many were common pleasures: the fear of a scary movie, the burn of chili pepper, the pain of a firm massage. And some were disgusting, like popping pimples or looking at a gross medical exhibit.

The key is for the experience to be a “safe threat.”

“A roller coaster is the best example,” Rozin told me. “You are in fact fine and you know it, but your body doesn’t, and that’s the pleasure.” Smelling a corpse flower is exactly the same kind of thrill, he says.

It’s a bit like kids playing war games, says disgust researcher Valerie Curtis of the London School of Hygiene and Tropical Medicine. “The ‘play’ motive leads humans (and most mammals, especially young ones) to try out experiences in relative safety, so as to be better equipped to deal with them when they meet them for real,” she says.

People around the world make the same face when disgusted, with a downturned mouth and sometimes a protruding tongue.

So by smelling a corpse flower, she says, we’re taking our emotions for a test ride. “We are motivated to find out what a corpse smells like and see how we’d react if we met one.”

Our sense of disgust, after all, serves a purpose. According to Curtis’ theory of disgust, outlined in her insightful book “Don’t Look, Don’t Touch, Don’t Eat,” the things most universally found disgusting are those that can make us sick. You know, things like a rotting corpse.

Yet our sense of disgust can be particular. People, it seems, are basically fine with the smell of their own farts (but not someone else’s). Disgust tends to protect us from the threat of others, while we feel fine about our own grossness.

Then there are variations in how we perceive odors. Some smells are good only in small doses, as perfumers know. Musk, for instance, is the base note of many perfumes but is considered foul in high concentrations. Likewise for indole, a molecule that adds lovely floral notes to perfumes but is described as “somewhat fecal and repulsive to people at higher concentrations.”

Image: University of California Botanical Garden

No one has yet, to my knowledge, tried out a low dose of corpse flower in a perfume (though you can try on an indole brew in “Charogne,” which translates to “Carrion,” by Etat Libre d’Orange). But someone could. There’s an entire field of perfumery—called headspace technology, pioneered by fragrance chemist Roman Kaiser in the 1970s—that’s dedicated to capturing a flower’s fragrance in a glass vial and then re-creating the molecular mix chemically. I would love to see someone give eau de corpse flower a whirl, if only they can find a headspace vial large enough.

The stench of a corpse flower, after all, is a mix of compounds, including indole and sweet-smelling benzyl alcohol in addition to nasties like trimethylamine, found in rotting fish. So I’d be very curious to know if a small amount of corpse flower would be a smell we would hate, or maybe love to hate.

I’ll leave you with my favorite example of a “love to hate” smell, from my childhood in the 1980s. At a time when I loved Strawberry Shortcake dolls and scratch-and-sniff stickers, the boys in my class were playing with He-Man dolls. Excuse me, action figures. And among the coolest, and grossest, of them was Stinkor. He was black and white like a skunk, and his sole superpower was to reek so badly that his enemies would flee, gagging.

To give Stinkor his signature stink, Mattel added patchouli oil to the plastic he was molded from. (This confirms the feelings of patchouli-haters everywhere.) It meant that you couldn’t wash Stinkor’s smell away, and it wouldn’t fade like my Strawberry Shortcakes did. The smell was one with Stinkor. And of course, children loved him.

Writer Liz Upton describes the Stinkor figure that she and her brother adored (their mother did not). The kids would pull Stinkor out and scratch at his chest, smelling him again and again. “Something odd was going on here,” Upton writes. “Stinkor smelled dreadful, but his musky tang was strangely addictive.”

If you’re the kind of benign masochist who wants to smell Stinkor for yourself, you can pay $125 or more for a re-released collector’s edition Stinkor—or you can just find an old one on eBay. The amazing thing: 30 years later, the original Stinkor dolls still stink. And people still buy them.


Injecting Electronics Into Brain Not as Freaky as it Sounds

No need to wait for the cyborg future—it’s already here. Adding to a growing list of electronics that can be implanted in the body, scientists are working to perfect the ultimate merger of mind and machine: devices fused directly to the brain.

A new type of flexible electronics can be injected through a syringe, unfurling and implanting directly into the brains of mice, according to a study published Monday in Nature Nanotechnology. Researchers injected a fine electronic mesh and were able to monitor brain activity in the mice.

“You’re blurring the living and the nonliving,” says Charles Lieber, a nanoscientist at Harvard and co-author of the study. One day, he says, electronics might not only monitor brain activity but also deliver therapeutic treatments for Parkinson’s disease, or even act as a bridge over damaged areas of the brain. Deep brain stimulation is already used for Parkinson’s, but it relies on relatively large probes, which can cause scar tissue to form around them.

The tiny size of the new devices (just a couple of millimeters unfurled) allows them to be placed precisely in the brain while minimizing damage, a separate team of Korean researchers notes in an accompanying article. Ultimately, the goal is to interweave the electronics so finely with brain cells that communication between the two becomes seamless.

And that’s just the latest in the merging of electronics into the human body. While Lieber envisions using the implants in science and medicine—for example, to monitor brain activity and improve deep-brain stimulation treatment for Parkinson’s disease—others are already using non-medical electronic implants to become the first generation of cyborgs. These do-it-yourselfers call themselves biohackers, and they aren’t waiting for clinical trials or FDA approval to launch the cybernetic future.

At the website Dangerous Things, you can buy a kit—complete with syringe, surgical gloves and Band-Aid—to inject a small electronic device into your own body. The kits use a radio-frequency ID tag, or RFID, similar to the chips implanted to identify lost dogs and cats. These can be scanned to communicate with other devices. The site warns that implanting the chips should be done with medical supervision and “is strictly at your own risk.”

An X-ray image of Amal Graafstra’s hands shows the two electronic tags he had implanted. Image: Dangerous Things

The website’s charismatic founder, Amal Graafstra, has RFID implants in each hand, and can use them to unlock doors and phones, log into computers, and start his car by waving a hand.

“One of the holy grails of biohacking is the brain-computer interface,” Graafstra says. He likens brain-wiring efforts so far to eavesdropping on neural activity with a glass to our ears and then shouting back with a bullhorn; electronics simply overwhelm the subtle communication between brain cells. “The ultimate goal, I think, would be a synthetic synapse,” he says, in which nanomaterials would function much like living brain cells, allowing far more nuanced communication between mind and machine.

An article in the Telegraph in October 2014 sums up today’s state of the art in brain-hacking:

“Quietly, almost without anyone really noticing, we have entered the age of the cyborg, or cybernetic organism: a living thing both natural and artificial. Artificial retinas and cochlear implants (which connect directly to the brain through the auditory nerve system) restore sight to the blind and hearing to the deaf. Deep-brain implants, known as “brain pacemakers,” alleviate the symptoms of 30,000 Parkinson’s sufferers worldwide. The Wellcome Trust is now trialling a silicon chip that sits directly on the brains of Alzheimer’s patients, stimulating them and warning of dangerous episodes.”

The goal of a complete merger of biology and technology is exciting to champions of transhumanism, which aims to enhance human intelligence, abilities, and longevity through technology.

But not everyone is thrilled about a future filled with genetic engineering, artificial intelligence, and cyborg technology. Implanting electronics in the brain, more so than in the hands or even the eye, goes directly to one of the biggest fears about cyborgs: a threat to free will. Could someone hijack an implant to control its user’s thoughts or actions? Or to read their minds?

That’s unrealistic, at least with current technology. The kinds of electronics that Lieber and others are working on have inherently limited use—such as delivering a small electric pulse to a particular spot—and would be useful only to people with a serious medical condition.

“Some people think we’re going to implant a microprocessor in people’s heads,” Lieber says, “but that has to interface to something.” And a tiny electronic device attached to one part of the brain simply cannot take over a person’s thoughts. “There’s always going to be someone interested in doing something bad,” he adds, so it’s important to monitor the technology as it becomes more sophisticated.

Graafstra says biohacking has “some maturing to do,” and studies like Lieber’s are a good step in bringing scientific rigor to what has at times been a Wild West.

“I think the biohacker understands that we are our brains,” he says. “You are your mind, and the body is the life support system for the mind. And like an SUV, it’s upgradeable now.”



Now We Know What It Feels Like to Be Invisible

First, Arvid Guterstam made himself invisible. When he looked down at his body, there was nothing there.

He could feel he was solid; he hadn’t vanished into thin air. He even felt a paint brush tickle his transparent belly, while the brush appeared to be stroking nothing but air.

Being invisible is “great fun,” Guterstam reports, “but it’s an eerie sensation. It’s hard to describe.”

Then he took off his virtual reality headset and was back in the laboratory, fully visible. Guterstam is a medical doctor and PhD student, and he had just pulled off the first fully convincing illusion of complete invisibility. He went on to test 125 other people, and reports Thursday in Scientific Reports that seven out of ten also felt the illusion, and it was realistic enough to make them feel and respond physically as if a group of people could not see them.

One day, just maybe, cloaking devices might make human invisibility possible. Guterstam wants to know what that will feel like—and what these people might do. How about their morals? If you take away the chance of being caught, will people, as we might suspect, lose their sense of right and wrong?

But before he can test the moral fortitude of the newly transparent, Guterstam has to get people to feel completely invisible. He and his colleagues in Henrik Ehrsson’s laboratory at Sweden’s Karolinska Institute have succeeded at many other body-morphing illusions, including making my Phenomena colleague Ed Yong feel in turn that he had left his body, shrunk to the size of a doll, and grown a third arm.

They already convinced people they had an invisible hand. But what about the whole body? “This is definitely pushing the boundaries of how bizarre an illusion of this kind can get,” Guterstam says.

A simple trick creates the illusion of an invisible hand. Gustav Mårtensson

This time, they had people put on a virtual-reality headset that showed the view from a second headset, mounted at head height on nothingness. If you were in this getup, a scientist would touch you with a paint brush while simultaneously touching the nothingness in the same place, as though a body were there. So as you felt the brush, your eyes would be telling you that the brush was touching your nothingness body.

When a scientist swiped a knife toward the invisible belly, people’s heart rates went up and they broke out in a sweat, the classic stress response. When put in front of an audience of serious-looking people staring them down, “visible” people also got stressed. But the “invisible” people—not so much. They felt so completely invisible that their bodies responded as though they really were invisible. Since the audience couldn’t see them, there was no reason to feel uncomfortable.

The illusion works because, as the team has learned from these tricks, it’s shockingly easy to create an out-of-body experience. Our sense that we reside within our bodies—what we can think of as our sense of self—is not fixed. Instead of being firmly locked in our body, our sense of self can float free, as if on a tether.

Our brains, it appears, create this body sense moment by moment, continuously monitoring our senses and putting the “me” where those senses say it should be. Move the senses, and you move the me. All it takes is creating a mismatch between where I see I’m being touched and where I feel it.

This is all very interesting, but what do we do with it? Well, Ehrsson’s group is also working on better prosthetic devices for amputees that would harness the sense of self to make the prosthetic feel like a true body part. One day, we might even control robots with our movements and actually feel that we’ve jumped into the robotic body.

And then there’s the dream of actual invisibility, with all its moral dilemmas. For now, we don’t need to fret too much: The closest we’ve gotten is disappearing a cat and a goldfish, and only behind a fixed cloaking device and from the right angles.

All this talk of invisibility leads, inevitably, to The Question. A question whose answer, many believe, says something deep about each of us. If you could be the only person on Earth with a superpower, and you could choose between flight and invisibility, which would you choose?

Flight, many feel, is the noble choice. Invisibility is for thieves and perverts. Yet when we’re honest with ourselves, that’s exactly what many of us are, so invisibility maintains its secret allure.

And Arvid Guterstam? “I would probably say flying,” he says.


Why Do We See the Man in the Moon?

Take a look at the slideshow above. The photos depict, in order: tower binoculars, a tank tread, tree bark, headphones, a tray table, a toilet, eggs, and more tree bark. Yet I perceived every one of them as a face, and I bet you did, too.

That’s because, as I wrote about a few weeks back, most people are obsessed with faces. We see faces everywhere, even in things that are most definitely not faces. The most famous example is probably the man in the moon. The weirdest has got to be the person who reportedly paid $28,000 for an old grilled cheese sandwich whose burn marks outline the face of the Virgin Mary.

This phenomenon, called face pareidolia, isn’t new (Leonardo da Vinci even wrote about it as an artistic tool). But nobody knows much about how or why our brains create this illusion. This week I came across a fascinating brain-imaging study that begins to investigate these questions. The paper, published in the journal Cortex, is titled “Seeing Jesus in Toast,” and this fall it won an Ig Nobel Prize, awarded “for achievements that first make people laugh then make them think.”

The study hinges on a clever method for inducing pareidolia inside of a brain scanner. The researchers showed 20 volunteers hundreds of “noise images” — squares composed of black, white, and gray blobs — and told them that half of the images contained hard-to-detect faces. (The participants had been through a training period in which they saw clearly defined faces in such images, so they were used to the act of searching for a face within the noise.) After seeing a noise image, the volunteer would press a button indicating whether she saw a face in it or not. Unbeknownst to the participants, none of the noise images contained any overt faces.

The scientists reasoned that trials in which participants reported seeing a face were examples of pareidolia. To confirm this, the researchers took all of the images in which a participant saw a face and combined them into an average image. They then subtracted from that the average of all of the images in which the same participant did not see a face. The result of that subtraction, somewhat amazingly, was a crude face shape, suggesting that participants really were experiencing face pareidolia.
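For readers who want to see the logic of that averaging-and-subtraction step (a “classification image,” in reverse-correlation terms), here is a minimal sketch in Python. It is not the authors’ code: the noise images and button presses below are random stand-ins, and the array sizes are arbitrary. It only shows how averaging the “saw a face” trials and subtracting the average of the remaining trials produces the difference image described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in stimuli and responses (not the study's actual data).
n_trials, height, width = 480, 64, 64
noise_images = rng.standard_normal((n_trials, height, width))  # random-blob images

# Hypothetical button presses: True where the participant reported a face.
saw_face = rng.random(n_trials) < 0.35

# Average the "saw a face" trials, then subtract the average of the rest.
face_avg = noise_images[saw_face].mean(axis=0)
no_face_avg = noise_images[~saw_face].mean(axis=0)
classification_image = face_avg - no_face_avg

# With real responses (rather than the random ones used here), structure in
# this difference image reflects whatever pattern nudged the observer toward
# reporting a face; in the study, it came out as a crude face shape.
print(classification_image.shape)  # (64, 64)
```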

A week later, the same participants came back to the lab and went through a similar procedure. This time, though, they were told that half of the noise images they saw contained a hard-to-detect letter. In reality, none of them did — in fact, the images were exactly the same as those they saw the previous week.

All of these experiments happened inside of a brain scanner, allowing the scientists to compare which parts of the brain are activated during face pareidolia, letter pareidolia, and no pareidolia.

It turns out that a particular brain area — the right fusiform face area (FFA) — showed enhanced activation during face pareidolia but not letter pareidolia or no pareidolia. What’s more, the higher a volunteer’s activation in the right FFA, the more her subtracted composite image looked like a face, the study found.

This is an intriguing finding, the researchers say, because of what’s already known about the FFA. Previous studies had found that this area is specialized for processing true faces (hence the name). The fact that it’s also active for these imagined faces suggests that it’s involved in our more abstract conceptions of faces, as opposed to simply responding to the basic visual pattern of two eyes on top of a nose.

And why do our brains so easily create faces? There’s a compelling evolutionary explanation, the researchers write. “This tendency to detect faces in ambiguous visual information is perhaps highly adaptive given the supreme importance of faces in our social life.”

Regardless of what’s going on in my brain, there’s something delicious about looking at photos of face pareidolia, don’t you think? If you have your own examples, please share — the weirder the better!


Category Fail

I’ve written a lot of stories about autism research, and I’d say one of the biggest scientific developments in the past few years was the creation of ‘autistic’ mice. Researchers first found many, many genes associated with autism in people, and then created dozens of mouse models that carry one or more of those same genetic glitches.

In the fall of 2011, for example, one team debuted mice with extra copies of a gene called UBE3A. Approximately 1 to 3 percent of children with autism carry extra copies of the same gene. These mutant mice show little interest in social interactions, compared with controls. They also emit fewer vocalizations and repetitively groom themselves. This was heralded as something of an autism trifecta, as the animals mimicked the three ‘core’ symptoms of people with the disorder: deficits in social behaviors and in communication, as well as repetitive behaviors.

The same goes for mouse models based on environmental, rather than genetic, triggers. Mice whose mothers got an infection while pregnant end up with abnormal social interactions and vocalizations, and they repetitively bury marbles. Once again, the animals show all three “core” deficits, and are thus considered to be a valid model of autism.

There’s a nice and tidy logic to this approach, understandably appealing to neuroscientists. If a mouse model mimics the three behaviors used to define autism, then studying the cells and circuits of those mice could lead us to a better understanding of the human disorder. But there’s a big hole in that logic, according to a provocative commentary published by Eric London in this month’s issue of Trends in Neurosciences. The problem is that the symptoms of autism — like those of all psychiatric disorders — vary widely from one person to the next. So using the fuzzy diagnostic category of ‘autism’ to guide research, he writes, “is fraught with so many problems that the validity of research conclusions is suspect.”

London begins with a short history of the Diagnostic and Statistical Manual of Mental Disorders, or DSM, the book that since 1980 has dictated what collections of symptoms define one disorder or another. There’s nothing wrong with a categorical diagnosis, per se. It can have enormous explanatory power. If a doctor diagnoses you with strep throat, for example, you have a good idea of what that is (a bacterial infection) and how you might treat it (antibiotics). “A psychiatric diagnosis, by contrast, is rarely as informative,” London writes.

People diagnosed with schizophrenia, bipolar disorder, depression, or autism often don’t know what caused the trouble, and they struggle with unpredictable symptoms, ineffective treatments, and unpredictable responses to those treatments.

What’s more, most people who fall into the bucket of one psychiatric disorder also meet criteria for others. London cites some fascinating numbers: Some 90 percent of people with schizophrenia, for example, have another diagnosis as well. More than 60 percent of people with autism have another diagnosis, and one-quarter have two or more. “Autism is comorbidly present in over 50 specific diagnoses comprising other genetic and medical conditions,” London writes.

The three supposedly core behaviors of autism don’t correlate well with each other, he adds. In other words, many kids just have one or two of the three. Francesca Happé has conducted many studies suggesting that each of these symptoms is inherited independently, suggesting that each has its own, separate biological cause.

The danger of focusing on these three behaviors is that it might cause clinicians and researchers to overlook other symptoms that are common in people with autism. Many kids with autism have gastrointestinal issues, for example, and many show a range of motor problems, such as head lag, trouble sitting up, or a wobbly gait. And more than 80 percent of people with autism have anxiety, London notes. Mouse models of the disorder may have some of these problems, too, but researchers don’t usually test for them.

The DSM has tried to address some of these problems. Its latest version, released last year, defines autism with two criteria: social and communication deficits, and repetitive behaviors. But London doesn’t think that goes nearly far enough, for all the reasons outlined above. He proposes an even broader category of “neurodevelopmental disorder,” which would include more than 20 different DSM categories, including autism and schizophrenia. Just as they do today, clinicians could still focus on specific symptoms — whether sensory sensitivities, anxiety, psychosis, attentional problems, etc. — when deciding how to treat each person.

London’s commentary is only the latest in an old debate about diagnoses: Is it better to lump, or to split? Some scientists agree with him, others don’t, and I see merit in the scientific arguments on both sides. One point I think sometimes doesn’t get enough attention, though, is the social power of a diagnosis.

These labels carry meaning, for better or worse. For people with mysterious illnesses, such as chronic fatigue syndrome, a label can make them feel acknowledged and validated, or completely marginalized. Diagnoses for brain disorders, such as Asperger’s syndrome, can unite people under a common identity, or create dangerous societal stigma. Rational diagnostic categories are crucial for scientific progress, as London argues. But scientists would do well to remember that their labels also have lasting consequences outside of the lab.


“Malformed” Is the Best Brain Book I Read This Year (and Maybe Ever)

Of all the glossy photo books to showcase on your coffee table, your first choice might not be one of decaying human brains. But it should be, so long as that book is “Malformed.”

The first few pages give a sense of what you’re in for: hauntingly beautiful photographs of brains (see slideshow above). One photo shows a seemingly normal brain, plump and pink-gray, floating in cloudy liquid inside a glass jar. Another shows a thick slice of each hemisphere sitting on top of wet, white gauze. In another, three small brains are tucked inside a jar with a yellowing label noting the condition their donors were born with: Down’s Syndrome.

Photographer Adam Voorhes took these photos and dozens of others in a forgotten storeroom at the University of Texas at Austin. There, on a wooden shelving unit, sit about 100 brain specimens from people who once lived in the Austin State Hospital between the 1950s and 1980s. The hospital was once called the Texas State Lunatic Asylum, and its residents were (or rather, were considered to be) mentally ill.

These stunning photos of their brains make up the bulk of the book, but they are accompanied by several equally lively essays about the history of the collection, written by journalist Alex Hannaford. Together, the pictures and text tell two compelling stories. The first is the sordid history of this asylum and others like it, and how we’ve changed our approach to treating mental illness. The second story — one that, by the way, has no end in sight — is how the material goo of the brain interacts with the environment to shape our behavior.

The Austin State Hospital, formerly known as the Texas State Lunatic Asylum. Photo via Wikipedia.

The Texas State Lunatic Asylum was founded, in 1853, with a quarter million dollars from the federal government and a surprisingly progressive mandate. Its supporters believed that the best treatment for the mentally ill was fresh food, fresh air, and a little peace and quiet. So the asylum grounds, enclosed by a cedar fence, included vegetable gardens, fruit orchards, oak and pecan trees, and even a string of lakes. Patients could roam as they pleased.

Within two decades, though, this idyllic picture began to crack. “Overcrowding, illness, escape and even some fairly horrific suicide attempts — all were documented in the pages of the local paper,” Hannaford writes.

Some of the most interesting parts of the book are the descriptions of these early asylum patients. Many, as you might expect, were diagnosed with insanity or mania. Others had conditions that we don’t typically associate with mental illness today, such as epilepsy, stroke, Alzheimer’s, and Down Syndrome. Still other diagnoses were, at least to me, wholly unexpected: love, masturbation, menopause, “excessive study,” “religious excitement,” and even “melancholia caused by deranged menstruation.”

None of these early patients had their brains removed at death. The brain collection began in the 1950s, apparently at the whim of the hospital’s pathologist, Coleman de Chenar. When he died, in 1985, six major scientific institutions, including Harvard Medical School, wanted his brain collection. It ended up at the University of Texas.

Why such interest in these homely lumps of dead tissue? Because of the tantalizing idea that brains can reveal why a sick person was sick. In some cases, gross anatomy indeed provides answers, albeit vague. There are many pictures in “Malformed” showing brains with obvious abnormalities, such as an asymmetrical shape, dark, blood-filled grooves, or a complete lack of folding.

It’s satisfying to think, ‘A ha, that’s why they were disturbed.’ Hannaford tells a fascinating story, for example, about a man named Charles Whitman. One day in 1966, the 25-year-old engineering student at the University of Texas went on a shooting rampage, killing 16 people and wounding 32 before being shot by police. In a note he left behind, Whitman asked to be autopsied, “urging physicians to examine his brain for signs of mental illness,” Hannaford writes. De Chenar performed the autopsy. When examining the killer’s brain, the doctor found, right in the middle, a 5-centimeter tumor.

A later report concluded that this tumor, which was highly malignant, “conceivably could have contributed to his inability to control his emotions and actions.” On the other hand, Whitman also allegedly suffered from child abuse and mental illness. So there’s no way to know, for sure, what caused what.

And that’s the case for all postmortem brain investigations, really. A couple of years ago I wrote a story for Scientific American about researchers in Indiana who are doing DNA analyses on century-old brain tissue that once belonged to mental patients. It’s unclear whether the DNA will be usable, after all this time. Even if it is, the researchers will be left with the unanswerable question of cause and effect. Did a particular genetic glitch cause the patient to have delusions? And how many healthy people are walking around right now with slightly abnormal brains that will never be subjected to scientific scrutiny?

This sticky issue, by the way, persists whether the person in question is mentally ill or mentally exceptional. Earlier this year I wrote about Einstein’s brain, which was stolen at autopsy, carved into 240 pieces, and (eventually) distributed to several laboratories. These researchers have published half a dozen studies reporting supposedly distinctive signatures of Einstein’s brain. “The underlying problem in all of the studies,” I wrote in that piece:

“…is that they set out to compare a category made up of one person, an N of 1, with a nebulous category of ‘not this person’ and an N of more than 1. With an N of 1, it’s extremely difficult to calculate the statistical variance — the likelihood that, for example, Einstein’s low neuron-to-glia ratio is real and not just a fluke of that particular region and those particular methods. Even if the statistics were sound, you’d still have the problem of attributing skills and behaviors to anatomy. There’s no way to know if X thing in Einstein’s brain made Einstein smart/dyslexic/good at math/you name it, or was just an X thing in his brain.”

“Malformed” is able to make that point more subtly and beautifully than anything else I’ve read. By looking at these brains, each photographed with such care, the irony is obvious: At one point not so long ago, we were willing to take away a person’s freedom — perhaps the ultimate sign of disrespect — for innocuous behaviors considered “abnormal.” And yet, at the same time, we went to great lengths to remove and preserve and label and, yes, respect these people’s dead brain tissue.

It would be wonderful if these specimens someday make a solid contribution to the science of mental illness. If they never do, though, they’re still valuable. They tell a story of a dark chapter in our history — one that I hope is never re-opened.


Personhood Week: Why We’re So Obsessed with Persons

It’s Personhood Week here on Only Human. To recap the week: Monday’s post was about conception, and Tuesday’s about the age of majority. Wednesday’s tackled DNA and dead bodies, and yesterday I took yet another opportunity to opine about the glories of pet-keeping. Today’s installment asks why we’re so fixated on pinning down the squishy notion of personhood.

I’d love to hear about how you guys define personhood, and why. Feel free to leave comments on these posts, or jump in to the #whatisaperson conversation on Twitter.


People have been trying to define personhood for a long time, maybe since the beginning of people. The first recorded attempt came from Boethius, a philosopher from 6th-Century Rome, who said a person was “an individual substance of rational nature.” Fast-forward a thousand years and Locke says it’s about rationality, self-awareness, and memory. Kant adds that humans have “dignity,” an intrinsic ability to freely choose. In 1978, Daniel Dennett says it’s intelligence, self-awareness, language, and being “conscious in some special way” that other animals aren’t. The next year Joseph Fletcher lays out 15 criteria (!), including a sense of futurity, concern for others, curiosity, and even IQ.

“Personhood is a concept that everyone feels they understand but no one can satisfactorily define,” wrote Martha Farah and Andrea Heberlein in a fascinating 2007 commentary for The American Journal of Bioethics. Farah and Heberlein are neuroscientists, and they note that neuroscientific tools may be useful for investigating some of the psychological concepts — reason, self-awareness, memory, intelligence, emotion — historically associated with personhood. But even if we had complete neurological understanding of these skills, they say, it would be no easier to define what a person is and isn’t.

But neuroscience does have something interesting to contribute to this discussion: a provocative explanation for our perennial obsession with personhood. “Perhaps this intuition does not come from our experiences with persons and non-persons in the world, and thus does not reflect the nature of the world,” Farah and Heberlein write. “Perhaps it is innate and structures our experience of the world from the outset.” In other words, maybe we’re born with the notion of personhood — and thus find it everywhere we look.

As evidence of this idea, Farah and Heberlein turn to the study of the so-called “social brain,” the regions of the brain that help us navigate life in our very social world.

Take faces. We know that certain brain circuits are responsible for recognizing faces because in some people those structures don’t work properly: People with a condition known as prosopagnosia have no trouble distinguishing between complex objects, and yet they can’t tell one face from another. And some people have the opposite problem: They can’t tell objects apart but have no trouble recognizing faces. Almost 20 years ago, scientists discovered a region of the brain, called the fusiform face area, that is selectively activated when we look at faces.

Farah and Heberlein go on to list many other brain areas tied to people-identification. Looking at bodies (but not faces) activates another part of the fusiform gyrus, and watching body movement (made up only of points of light, and not actual body parts) activates the superior temporal sulcus. The temporoparietal junction, meanwhile, seems to underlie theory of mind, our ability to think about what other people are thinking.

The neuroscientists argue that this network of people-related regions has “a surprising level of automaticity,” meaning that it’s activated regardless of whether we’re consciously thinking about people. Social brain areas are activated not only when we look at realistic photographs of faces or bodies, but when we look at smiley faces or stick figures. Some of us might see a man in the gray craters of the moon, or the face of the Virgin Mary in the burned folds of a grilled cheese sandwich. We automatically assign agency to things as well. In one famous experiment from the 1940s, researchers created a simple animation of two triangles and a circle; watching it, you can’t help but think that the larger triangle is bullying the poor circle:

The social brain also has “a high degree of innateness,” the scientists write, meaning that it’s switched on even in newborns, who have obviously had scant real-world experience with people. A study in 1991 found, for example, that babies just 30 minutes old are more likely to look at face-like shapes than other kinds. (You can see those shapes for yourself in this piece about illusions I wrote for Nautilus.) Some research on autism, a strongly genetic condition, also bolsters the idea of the innateness of the social brain. Many people with autism prefer to interact with objects rather than people, and have difficulty processing facial expressions. People with autism also show differences in activity in the “social brain” regions mentioned above.

At the end of their commentary, Farah and Heberlein make an interesting distinction between persons and plants. Science, they say, offers an objective definition of plants: they are organisms that get their energy through photosynthesis. But science has found no such criteria for personhood. Why? “We suggest that this is because the category ‘plant’ has a kind of objective reality that the category ‘person’ does not,” they write.

Let’s assume for a moment that these neuroscientists are right — that the distinction between persons and non-persons is not something that exists in the world outside of our minds. Does that mean I’ve just wasted a week going on and on about this illusion?

Here’s why I think the personhood notion is so valuable. We are people. Our people-centric minds evolved for a reason (namely, our species depends on social interactions) and our people-centric minds dictate how our society works. So maybe personhood is not based in reality. Still, it’s the crux of our reality.


When Grief Is Traumatic

As Vicki looked at her son in his hospital bed, she didn’t believe he was close to death. He was still young, at 33. It had been a bad car accident, yes, but he was still strong. To an outsider, the patient must have looked tragic — unconscious and breathing through a ventilator. But to Vicki, he was only sleeping. She was certain, in fact, that he had squeezed her hand.

Later that day, doctors pronounced Vicki’s son brain-dead. And for the next two years, she couldn’t stop thinking about him. She felt terribly guilty about the circumstances of his death: He and a friend had been drinking before they got in the car. She knew he was a recovering alcoholic, and that he had recently relapsed. She couldn’t shake the thought that she should have pushed him harder to go back to rehab. Every day Vicki flipped through a scrapbook of his photos and articles about his death. She turned his motorcycle helmet into a flowerpot. She let housework pile up and stopped seeing her friends. “She seemed to be intent on holding onto him,” one of her therapists wrote about her case, “at the cost of reconnecting with her own life.”

Vicki is part of the 10 percent of grievers who have prolonged grief, also known as complicated grief or traumatic grief. Grieving is an intense, painful, and yet altogether healthy experience. What’s unhealthy is when the symptoms of grief — such as yearning for the dead, feeling anger about the loss, or a sense of being stuck — last for six months or more.

Very unhealthy. Over the past three decades, researchers have tied prolonged grief to an increased risk of a host of illnesses, including sleep troubles, suicidal thoughts, and even heart problems and cancer. (That’s not to say that grief necessarily causes these conditions, but rather that it’s an important, and possibly predictive, marker.)

At the same time, there’s been a big debate among researchers about what prolonged grief is, exactly. Is it a bona fide disorder? And if it is a disorder, then is it just another variety of depression, or anxiety, or post-traumatic stress disorder (PTSD)?

Prolonged grief is in a psychiatric class of its own, according to Holly Prigerson, director of the Center for Research on End of Life Care at Weill Cornell Medical College. When Prigerson first started studying bereavement, back in the 1990s, “psychiatrists thought that depression was the only thing you had to worry about,” she says. “We set out to [determine if] grief symptoms are different and actually predict more bad things than depression and PTSD.”

Her group and others have found, for example, that antidepressant medications don’t alleviate grief symptoms. In 2008, another group found that the brain activity of prolonged grievers when looking at photos of their lost loved ones is different than that of typical grievers. In 2009, Prigerson proposed formal clinical criteria for complicated grief, which include daily yearning for the deceased, feeling emotionally numb, identity confusion, or difficulty moving on with life.

When I first wrote about prolonged grief, for a Scientific American article in 2011, Prigerson and others were lobbying for prolonged grief to be added as a formal diagnosis in the Diagnostic and Statistical Manual of Mental Disorders (DSM), the “bible” of psychiatric disorders. That didn’t happen; instead the condition is mentioned briefly in the appendix. “It’s frustrating,” Prigerson says. She is hopeful, though, that the disorder will be included in the next version of the International Classification of Diseases (ICD), the diagnosis guide used by the World Health Organization.

Why all this hoopla over the clinical definitions of pathological grief? Because the determinations made by the DSM and ICD dictate what treatments insurance companies will cover. From Prigerson’s perspective, it means that the roughly 1 million Americans who develop complicated grief each year will have to pay for treatment themselves (assuming they even get properly assessed). That’s an important point from a public health perspective. But more interesting to me is what that treatment is — and how it might shed light on what grief is.

The best treatment for prolonged grief seems to be cognitive behavioral therapy (CBT), a talk therapy in which the patient identifies specific thoughts and feelings, ferrets out those that aren’t rational, and sets goals for the future. In 2005, Katherine Shear of Columbia University reported that a CBT tailored for complicated grief worked for 51 percent of patients.

Part of that tailoring is something called “imaginal exposure,” in which patients are encouraged to revisit feelings or memories that trigger their grief. A similar exposure approach is often used to treat PTSD: Patients will repeatedly recall their most traumatic memories and try to reframe them in a less emotionally painful context. About half of people with PTSD who try exposure therapy get better.

A spate of studies suggest that exposure therapy is also an important part of complicated grief therapy. A couple of weeks ago, for example, researchers from Australia and Israel published a randomized clinical trial of 80 prolonged grievers showing that CBT plus exposure therapy leads to significantly better outcomes than CBT alone.

“The findings from this paper make me think we really need to explore the benefits of making people confront, in some sense, their worst nightmares and fears,” Prigerson says.

This is somewhat counter-intuitive, she adds, because grief has historically been defined as a disorder of attachment and loss, not trauma. In fact, only about half of people seeking treatment for complicated grief meet criteria for PTSD. If grief is a disorder of attachment, then it wouldn’t make sense to encourage patients to think about their loss even more. And yet, somehow this repeated exposure does seem to work.

“We don’t really know the mechanisms here,” Prigerson says. It could be that many people with complicated grief are also dealing with traumatic memories. Or it could be that grief and PTSD are not the same thing, “but that there’s something to exposure therapy that appears to tap into the attachment bond.”

These are questions for future studies. I’m struck by how often CBT techniques — which, at their most fundamental level, are simply about identifying destructive feelings and attempting to reframe them — work, and work for a wide range of disorders. It makes some of the livid arguments over what counts as “real” pathology, or what’s grief versus depression versus anxiety, seem rather beside the point.

In any case, exposure therapy worked for Vicki. After two years of struggling with regular talk therapy, she began seeing a CBT therapist. These sessions included imaginal exposures of her most vivid and painful memories: seeing her son in his hospital bed, and remembering him squeezing her hand. In addition to recalling the scene to her therapist every week, every day Vicki listened to audio tapes of herself telling the story.

Every week these recollections became less painful for Vicki. Her scores on tests of anxiety and grief dropped rapidly, particularly from the fourth to eighth week. She started reading sympathy cards that she had previously avoided. She stopped looking through the scrapbook, and started reaching out to friends and family again.

The treatment led to a dramatic reframing of the way she remembered her son and their relationship. “She said that repeatedly telling the story of his death had helped her to realize that he lived a dangerous life and that he was an independent adult who made his own life decisions,” the case report reads. At her final session, she said the treatment had allowed her “to begin to enjoy her life again.”


I made up Vicki’s name. I found her story in this case report, in which she’s called “Ms. B.”


The Dog Mom’s Brain

When people ask me if I have kids, my standard answer is, “I have a dog.” My husband and I are the first to admit that we tend to treat our pup like a “real” child. He eats organic food. Our apartment is littered with ripped plush toys. We talk to him in stupid high-pitched voices. He spends almost all of his time with us, including sleeping and vacations. When he’s not with us he’s at a daycare center down the street — and I spend much of that time worrying about whether he’s OK. It’s probably not a full-blown separation anxiety disorder, but when we’re separate, I’m anxious.

On an intellectual level I understand that having a dog is not the same as having a human child. Still, what I feel for him has got to be something like maternal attachment. And a new brain-imaging study backs me up on this.

Researchers from the Massachusetts General Hospital scanned the brains of 14 women while they looked passively at photos of their young children, photos of their dogs, and photos of unfamiliar children and dogs.

As it turned out, many areas of the brain involved in emotion and reward processing — such as the amygdala, the medial orbitofrontal cortex, and dorsal putamen — were activated when mothers viewed their own children or dogs, but not when they viewed unfamiliar photos.

Of course, we don’t really need a fancy (and expensive) neuroimaging experiment to demonstrate how much dogs mean to their people. Two-thirds of American households have pets, and we spend a whopping $58 billion a year to take care of them. Upon losing a pet many people experience intense grief, similar to losing a close friend or family member. And dogs, too, show attachment behaviors toward their caretakers just as human children do.

Still, the imaging results add some interesting nuance to the dog-human relationship. For example, a brain region known as the fusiform gyrus was activated more when mothers looked at their dogs than when they looked at their kids. This might be because the area is involved in face processing. “Given the primacy of language for human-human communication,” the authors write, “facial cues may be a more central communication device for dog-human interaction.”

Conversely, two areas in the midbrain — the substantia nigra and ventral tegmental area — were active when mothers looked at their children but not when they looked at their pups. These brain areas are lousy with dopamine, oxytocin, and vasopressin, chemicals involved in reward and affiliation. This could mean that these areas are crucial for forming pair bonds within our own species, but not so relevant for the bonds we form with pets.

These results come with the usual caveats for brain imaging studies. It was a small sample of only women, and the brain snapshots were taken at just one point in time. Nevertheless, I think studies like these offer important counter-points to what I see as a growing trend of poking fun at pet-human bonds (even by pet owners themselves).

No, I don’t personally endorse doggie birthday parties, 22-karat gold leafed food bowls, or even pet chemotherapy. But neither do I begrudge those who do. Dogs may not be children, but they’re still our babies.


The Other Polygraph

You’ve no doubt heard about the polygraph and its use as a lie detector. The homely box records physiological changes — such as heart rate and electrical skin conductance — that are indirect signatures of emotion. Because these biomarkers tend to change when people tell lies, criminal investigators have long used the polygraph as a crude tool for detecting deception.

But there’s a huge problem with the polygraph: it’s all too frequently wrong. Truth-tellers may show a strong physiological response to being questioned if they’re nervous or fearful, which they often are — particularly if they’re the target of a hostile interrogation.

“You end up with a lot of false positives,” says John Meixner, a law clerk to Chief Judge Gerald Rosen of the United States District Court for the Eastern District of Michigan. The traditional polygraph suffers not only from false positives but also from false negatives: with a bit of training, liars can pass the test by intentionally turning down their emotions.

Because of these considerable flaws, polygraph evidence is almost never allowed in court. But it’s still used routinely by federal law enforcement agencies, not only for screening accused criminals but also potential new employees.

It turns out there’s a much more accurate way to root out deception: a 55-year-old method called the ‘concealed information test’. The CIT doesn’t try to compare biological responses to truth versus lies. Instead, it shows whether a person simply recognizes information that only the culprit (or the police) could know.

Early studies on the CIT used biomarkers gleaned from a standard polygraph machine. But Meixner and his colleagues at Northwestern University have been studying the approach with electroencephalography (EEG), a technology that measures brain waves with a cap of harmless electrodes. When people are reminded of something they’ve personally experienced — whether an object, person or event — their brains produce a specific brain wave, known as the P300, in a fraction of a second.

Many research groups have set up mock crimes in the laboratory to show how the P300 might be used as an investigative tool. For example, several years ago Meixner and his advisor, J. Peter Rosenfeld, carried out a study in which they asked participants to pretend that they were part of a deadly terrorism plot. The subjects wrote a letter to the leader of the plot noting that there would be a bombing in Houston in July. Later the researchers showed the participants key words — “bomb”, “Houston”, and “July” — while recording their brain activity with EEG. Their P300 brain waves were significantly larger after reading those words than they were after reading words denoting other weapons, cities, or months.

The downside of studies like that one, however, is that they rely on artificial laboratory environments. In an upcoming issue of Psychological Science, Meixner and Rosenfeld report the accuracy of the CIT using a far more realistic situation.

In the new study, 24 volunteers wore a small video camera for four hours while they went about their normal routine. The researchers then looked through that video footage and chose words related to whatever each volunteer had experienced, such as “grocery store” (place visited), “Michael” (friend talked to), or “red” (color of umbrella).

The next day the volunteers came into the lab and had their brain waves recorded while they looked at the reminder words (such as “grocery store”) as well as similar words that were not related to what they experienced (such as “movie theater”).

“This test only involved information learned by subjects in a natural environment, entirely of their own volition, in the same way you’d be testing a criminal defendant,” Meixner says. In an investigative context, a suspect’s brain could be monitored while it responds to words or a picture of a specific object from the crime scene, such as a murder weapon.

Similar to the terrorism study, this one found that after seeing a word related to their previous day’s experience, participants produced a large-amplitude P300 brain wave that did not show up when they saw other, similar words. This didn’t happen for a control group that was shown the same words but hadn’t experienced anything related to them the previous day.

“There was perfect discrimination between the two groups,” Meixner says.
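To make that comparison concrete, here is a minimal sketch in Python of the kind of analysis described above. It is not Meixner and Rosenfeld’s actual pipeline: the sampling rate, epoch length, 300–600 ms measurement window, and bootstrap test are all assumptions, and the EEG data are simulated stand-ins. The point is only to show how a larger average P300 amplitude for “probe” words (things the person actually experienced) than for irrelevant words can be quantified.

```python
import numpy as np

# Assumed recording parameters: single-trial epochs at one parietal electrode,
# sampled at 500 Hz from -100 ms to 900 ms around word onset.
rng = np.random.default_rng(1)
sfreq = 500
times = np.arange(-0.1, 0.9, 1 / sfreq)

def mean_window_amplitude(epochs, t_lo=0.3, t_hi=0.6):
    """Mean amplitude per trial in the 300-600 ms window, where the P300 peaks."""
    window = (times >= t_lo) & (times < t_hi)
    return epochs[:, window].mean(axis=1)

# Simulated stand-in data: probe trials get an extra positive deflection
# between 300 and 600 ms, mimicking a P300; irrelevant trials do not.
n_probe, n_irrel, n_samples = 60, 180, times.size
p300_bump = np.where((times >= 0.3) & (times < 0.6), 1.0, 0.0)
probe_epochs = rng.standard_normal((n_probe, n_samples)) + p300_bump
irrel_epochs = rng.standard_normal((n_irrel, n_samples))

probe_amp = mean_window_amplitude(probe_epochs)
irrel_amp = mean_window_amplitude(irrel_epochs)
observed_diff = probe_amp.mean() - irrel_amp.mean()

# Simple bootstrap: resample trials with replacement and ask how often the
# probe advantage disappears.
boot_diffs = np.array([
    rng.choice(probe_amp, n_probe).mean() - rng.choice(irrel_amp, n_irrel).mean()
    for _ in range(2000)
])
print(f"probe minus irrelevant amplitude: {observed_diff:.2f}")
print(f"fraction of resamples with no probe advantage: {(boot_diffs <= 0).mean():.3f}")
```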

Other experts in the field are impressed by the findings.

“This research is the first convincing demonstration that incidentally acquired real-life memories can be reliably detected in people’s EEG brain activity at an individual level,” says Zara Bergström, a lecturer in psychology at the University of Kent in the U.K.

“Of course, there are still many differences between the types of memories that were detected in this study and those of a real criminal,” she adds. For example, a person’s P300 may be affected if they are under stress or intoxicated, “as many criminals may be.”

Meixner agrees. This technology, he says, seems to be much more reliable than the traditional polygraph. But the only way to know for sure is to start doing research on it in real-world investigations. “It wouldn’t be hard to run a pilot program and start testing it tomorrow — we’ve been trying to do that,” he says.

Law enforcement has good reason to invest in the CIT, according to John J. B. Allen, a distinguished professor of psychology at the University of Arizona.

“The CIT has a very important characteristic, and this is in stark contrast to conventional polygraph tests administered in the U.S.: The CIT rarely has a false positive result,” Allen told me by email. “Although the CIT may sometimes miss detecting when someone has information, it does an excellent job of protecting innocent examinees against false accusation.”

Allen was writing to me on his way back from Japan, where the CIT approach is used frequently. “The Japanese police administer over 6,000 CIT exams per year,” he noted.

So why aren’t any investigators using it here?

One reason, according to Meixner, is that the CIT only works if it’s given at the very beginning of an interrogation. Otherwise, through the process of questioning, the suspect may gain knowledge about the crime that he or she didn’t have before. Investigators are “not especially keen on that.”

The CIT is also a bit less versatile than the traditional polygraph, because investigators have to know some hard facts about the crime before testing the suspect. In a real-world terrorism plot, for example, investigators wouldn’t necessarily know what city or month or weapon to ask about.

But the biggest reason we don’t use the CIT, according to Meixner and Rosenfeld, is probably cultural. As they wrote in a review paper last year: “The members of the practicing polygraph community simply do not like giving up [that] which they are used to.”


Emotion Is Not the Enemy of Reason

This is a post about emotion, so — fair warning — I’m going to begin with an emotional story.

On April 9, 1994, in the middle of the night, 19-year-old Jennifer Collins went into labor. She was in her bedroom in an apartment shared with several roommates. She moved into her bathroom and stayed there until morning. At some point she sat down on the toilet, and at some point, she delivered. Around 9 a.m. she started screaming in pain, waking up her roommates. She asked them for a pair of scissors, which they passed her through a crack in the door. Some minutes later, Collins opened the door and collapsed. The roommates—who had no idea Collins had been pregnant, let alone what happened in that bloody bathroom—called 911. Paramedics came, and after some questioning, Collins told them about the pregnancy. They lifted the toilet lid, expecting to see the tiny remains of a miscarried fetus. Instead they saw a 7-pound baby girl, floating face down.

The State of Tennessee charged Collins with second-degree murder (which means that death was intentional but not premeditated). At trial, the defense claimed that Collins had passed out on the toilet during labor and not realized that the baby had drowned.

The prosecutors wanted to show the jury photos of the victim — bruised and bloody, with part of her umbilical cord still attached — that had been taken at the morgue. With the jury out of the courtroom, the judge heard arguments from both sides about the admissibility of the photos. At issue was Rule 403 of the Federal Rules of Evidence, which says that evidence may be excluded if it is unfairly prejudicial. Unfair prejudice, the rule states, means “an undue tendency to suggest decision on an improper basis, commonly, though not necessarily, an emotional one.” In other words, evidence is not supposed to turn up the jury’s emotional thermostat. The rule takes as a given that emotions interfere with rational decision-making.

This neat-and-tidy distinction between reason and emotion comes up all the time. (I even used it on this blog last week, in my post about juries and stress.) But it’s a false dichotomy. A large body of research in neuroscience and psychology has shown that emotions are not the enemy of reason, but rather a crucial part of it. This more nuanced understanding of reason and emotion is underscored in a riveting (no, really) legal study published earlier this year in the Arizona State Law Journal.

In the paper, legal scholars Susan Bandes and Jessica Salerno acknowledge that certain emotions — such as anger — can lead to prejudiced decisions and a feeling of certainty about them. But that’s not the case for all emotions. Sadness, for example, has been linked to more careful decision-making and less confidence in the resulting decisions. “The current broad-brush attitude toward emotion ought to shift to a more nuanced set of questions designed to determine which emotions, under which circumstances, enhance legal decision-making,” Bandes and Salerno write.

The idea that emotion impedes logic is pervasive and wrong. (Actually, it’s not even wrong.) Consider neuroscientist Antonio Damasio’s famous patient “Elliot,” a businessman who lost part of his brain’s frontal lobe while having surgery to remove a tumor. After the surgery Elliot still had a very high IQ, but he was incapable of making decisions and was totally disengaged from the world. “I never saw a tinge of emotion in my many hours of conversation with him: no sadness, no impatience, no frustration,” Damasio wrote in Descartes’ Error. Elliot’s brain could no longer connect reason and emotion, leaving his marriage and professional life in ruins.

Damasio met Elliot in the 1980s. Since then many brain-imaging studies have revealed neural links between emotion and reason. It’s true, as I wrote about last week, that emotions can bias our thinking. What’s not true is that the best thinking comes from a lack of emotion. “Emotion helps us screen, organize and prioritize the information that bombards us,” Bandes and Salerno write. “It influences what information we find salient, relevant, convincing or memorable.”

So does it really make sense, then, to minimize all emotion in the courtroom? The question doesn’t have easy answers.

Consider those gruesome baby photos from the Collins case. Several years ago psychology researchers in Australia set up a mock-trial experiment in which study volunteers served as jurors. The fictional case was a man on trial for murdering his wife. Some mock jurors heard gruesome verbal descriptions of the murder, while others saw gruesome photographs. Jurors who heard the gruesome descriptions generally came to the same decision about the man’s guilt as those who heard non-gruesome descriptions. Not so for the photos. Jurors who saw gruesome pictures were more likely to feel angry toward the accused, more likely to rate the prosecution’s evidence as strong, and more likely to find the man guilty than were jurors who saw neutral photos or no photos.

In that study, photos were emotionally powerful and seemed to bias the jurors’ decisions in a certain direction. But is that necessarily a bad thing?

In a similar experiment, another research group tried to make some mock jurors feel sadness by telling them about trauma experienced by both the victim and the defendant. The jurors who felt sad were more likely than others to accurately spot inconsistencies in witness testimony, suggesting more careful decision-making.

These are just two studies, poking at just a couple of the many, many open questions regarding “emotional” evidence in court, Bandes and Salerno point out. For example, is a color photo more influential than black and white? What’s the difference between seeing one or two gory photos versus a series of many? What about the framing of the image’s content? And what about videos? Do three-dimensional animations of the crime scene (now somewhat common in trials) lead to bias by allowing jurors to picture themselves as the victim? “The legal system too often approaches these questions armed only with instinct and folk knowledge,” Bandes and Salerno write. What we need is more data.

In the meantime, though, let’s all ditch that vague notion that “emotion” is the enemy of reason. And let’s also remember that the level of emotion needed in a courtroom often depends on the legal question at hand. In death penalty cases, for example, juries often must decide whether a crime was “heinous” enough to warrant punishment by death. Heinous is a somewhat subjective term, and one that arguably could be — must be? — informed by feeling emotions.

Returning to the Collins case, at first the trial judge didn’t think the gruesome baby photos would add much to what the jury had heard in verbal testimony. There was no question that Collins had had a baby, that she knew it, and that the baby had died of drowning. The judge asked the medical examiner whether he thought the photos would add anything to his testimony. He replied that the only extra thing the pictures would depict was what the baby looked like, including her size. The judge decided that was an important addition: “I don’t have any concept what seven pounds and six ounces is as opposed to eight pounds and three ounces, I can’t picture that in my mind,” he said, “but when I look at these photographs and I see this is a seven pound, six ounce baby, I can tell more what a seven pound, six ounce baby … is.”

So the jury saw two of the autopsy photos, and ultimately found Collins guilty of murder. Several years later, however, an appeals court reversed her conviction because of the prejudicial autopsy photos.

“Murder is an absolutely reprehensible crime,” reads the opinion of the appeals court. “Yet our criminal justice system is designed to establish a forum for unimpaired reason, not emotional reaction. Evidence which only appeals to sympathies, conveys a sense of horror, or engenders an instinct to punish should be excluded.”


Why Jurors and Policemen Need Stress Relief

I’ll be sitting on a jury tomorrow for the first time. The logistics are annoying. I have to take an indefinite amount of time off work, wait in long security lines at the courthouse, and deal with a constant stream of bureaucratic nonsense. But all that is dwarfed by excitement. And, OK, yes, some pride. My judgments will affect several lives in an immediate and concrete way. There’s a heaviness to that, a responsibility, that can’t be brushed aside.

My focus on jury duty may be why a new study on social judgments caught my eye. Whether part of a jury or not, we judge other people’s behaviors every day. If you’re walking down a city sidewalk and someone slams into you, you’re probably going to make a judgment about that behavior. If you’re driving down the highway and get stuck behind a slow car, you’re probably going to make a judgment about that driver’s behavior. If somebody leaves a meandering and inappropriate comment on your blog…

Since the 1960s psychology researchers have known that people tend to make social judgments with a consistent bias: We’re more likely to attribute someone’s behavior to inherent personality traits than to the particulars of the situation. The guy who bumps into me on the sidewalk did so because he’s a dumb jerk, not because he’s rushing to the hospital to see his sick child. The driver is slow because she’s a feeble old lady, not because her engine is stalling.

Those are flippant examples, but this bias, known as the ‘fundamental attribution error’ or FAE, can be pernicious. Consider a policeman who’s making a split-second decision about whether to shoot a suspect wearing a hoodie. Because of the FAE, he “might make a shoot decision based on stereotypical characteristics about that person, and fail to take into account the context,” says Jennifer Kubota, an assistant professor of psychology at the University of Chicago. But the suspect “could be wearing a hoodie just because it’s cold outside.”

We can overcome this bias, but it takes time and deliberate thought. Studies have shown that when people are distracted or under strong time pressure, they’re more likely to make the FAE.

In the new study, now in press in Biological Psychology, Kubota and her colleagues found another factor that pushes people toward the FAE: stress.

To create physiological stress, the researchers asked volunteers to plunge their forearms into ice water for three minutes. This so-called ‘cold-pressor task’ is known to spike levels of cortisol, a stress hormone.

After the stress exposure, volunteers read statements about a fictional character and saw a picture of the person’s face. They would get one sentence of behavioral information (“Jenny read a book in an hour”) and another sentence of situational information (“The book was a children’s book”). Then they gave two ratings: 1) the degree to which the behavior was caused by dispositional factors as opposed to situational ones, 2) how much they liked the fictional person.

As it turns out, compared with non-stressed participants, those who were exposed to stress (and showed increases in cortisol) were more likely to make dispositional attributions than situational ones. They also gave more negative evaluations of the fictional characters.
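
To make that group difference concrete, here is a toy summary in Python. The ratings and the 1-to-7 scale are fabricated for illustration; they are not the study’s data.

    # Hypothetical 1-to-7 attribution ratings (7 = entirely dispositional),
    # one number per invented participant.
    stressed = [5, 6, 5, 4, 6, 5, 7, 5]
    control = [4, 3, 5, 4, 3, 4, 4, 3]

    stressed_mean = sum(stressed) / len(stressed)
    control_mean = sum(control) / len(control)
    print(f"stressed group mean: {stressed_mean:.2f}")
    print(f"control group mean:  {control_mean:.2f}")
    print(f"difference:          {stressed_mean - control_mean:.2f}")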

“When we’re under stress we’re more likely to think that someone behaved the way they did because of something about their personality,” Kubota says. “And we’re ignoring all of these important situational and environmental factors that actually could have had a pretty big impact on why they did what they did.”

The differences between stressed and unstressed groups were small, but nevertheless notable, says Amy Arnsten, a professor of neurobiology at Yale who was not involved in the work. The cold-water stress, after all, is quite subtle compared with common real-world stressors such as sleep deprivation, divorce or financial woes.

The findings also “fit perfectly with what we already know” about stress and the brain, Arnsten says, a topic she has been studying for 30 years. In times of acute stress, our rational brain circuits (centered in the prefrontal cortex) rapidly shut down and our more primitive ones (based in the amygdala and basal ganglia) take over. “The automatic, unconscious circuits in your brain become in charge of decisions,” she says.

The same thing happens, it seems, when we’re making a social judgment. Last year a brain-imaging study reported that when people make judgments based on situational factors, they show more activity in their dorsolateral prefrontal cortex (DLPFC) than when they make judgments based on personality traits. Because stress is particularly damaging to circuits in the DLPFC, it makes sense that stress would make situational judgments more difficult and exacerbate the FAE.

“This has a lot of relevance to what’s going on right now with the police in places like Ferguson,” Arnsten says. “If the police are stressed, they’re going to be more likely to attribute bad things to people.” It may also come into play in conflict zones such as the Middle East and Ukraine, she adds. “People become primitive [and] seek revenge” against those they perceive as inherently “bad.” This bias makes them “unable to see the bigger situation and represent long-term solutions that would actually be more helpful.”

In a second experiment, Kubota’s team tried to replicate their findings using more realistic scenarios. The researchers shared 30 one-sentence stories about crime with 204 American volunteers recruited online through Amazon’s Mechanical Turk. The vignettes varied in the amount of situational detail. For example, the sentence “A woman stabs another woman to death after an argument” has less situational information than “A 13-year-old boy in the slums of Chicago robs an 87-year-old man of $2.27.”

For each sentence, volunteers rated how much they thought the behavior was caused by dispositional factors as opposed to situational ones, as well as whether they believed the behavior was criminal, how much they liked the offender, and how severe the offender’s punishment should be.

Consistent with the first experiment, this one found that the higher the level of (self-reported) current stress, the more likely the person was to attribute a criminal behavior to the offender’s disposition.
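
That pattern is a simple positive correlation, which a small sketch with fabricated numbers can illustrate (the scales, sample values, and the strength of the trend are all invented):

    import numpy as np

    rng = np.random.default_rng(1)
    # Fabricated data: 204 volunteers, stress on a hypothetical 0-10 scale,
    # dispositional-attribution ratings roughly on a 1-7 scale with a positive trend.
    stress = rng.uniform(0, 10, size=204)
    attribution = 3 + 0.3 * stress + rng.normal(0, 1.5, size=204)

    r = np.corrcoef(stress, attribution)[0, 1]
    print(f"Pearson r between reported stress and dispositional attribution: {r:.2f}")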

After talking through these findings, I told Kubota about my upcoming jury service and asked her what I could do, if anything, to combat the FAE. She gave two pieces of advice. “First, for jurors, there are a number of important ways to decrease your stress level,” she said, such as doing relaxation exercises or mindfulness training.

Second, regardless of stress level, the best way to combat the FAE “is to give yourself a bit more time,” she said. Take the time to think of the person you’re judging and the complexity of their unique situation. “Put yourself in their shoes.” I’ll do my best.


The Point of Pointing

Five years ago cognitive scientist Rafael Núñez found himself in the Upper Yupno Valley, a remote, mountainous region of Papua New Guinea. The area is home to some 5,000 indigenous people, and Núñez and his graduate student, Kensy Cooperrider, were studying their conceptions of time.

Most of you reading this post have a Western understanding of time, in which time has a spatial relationship with our own bodies. The past is behind us, the future ahead. I look forward to Christmas and reach back into my memories. But that particular cognitive framework is not universal. Núñez’s work has shown, for example, that the Aymara people of the Andes think about time in the opposite way; for them, the future is behind and the past lies ahead.

An anthropologist working in Papua New Guinea, Jürg Wassmann, suspected that the Yupno have yet another way of thinking about time, and invited Núñez and Cooperrider to come down and investigate. The Yupno have no electricity and no roads; getting to a city involves a several-day hike. They live in small thatch huts surrounded by green mountains. This rolling landscape, the researchers discovered, is what anchors the Yupno’s conception of time. For them, the past is downhill and the future uphill.

Above, homes in the Upper Yupno Valley. Photo by Rafael Núñez. Below, a Yupno man talks about the future. Photo by Kensy Cooperrider.

Núñez and Cooperrider figured this out by analyzing the way the Yupno point during natural speech. And in the midst of doing those experiments, the researchers stumbled onto something else unexpected: The Yupno don’t point like Westerners do.

We Westerners have a boring pointing repertoire. Most of the time, we just jut out our arm and index finger. If our hands are occupied — carrying a heavy load, say — then we might resort to a jerk of the head or elbow. But if the pointer finger’s free, we’ll point it.

Not so for the Yupno. Within a few days of their arrival in the valley, Núñez and Cooperrider noticed that the Yupno often point with a sharp, coordinated gesture of the nose and head that precedes a glance toward the point of interest. Here’s how the scientists described the nose part of the gesture, dubbed the ‘S-action’, in a 2012 paper:

“The kernel of the nose-pointing gesture is a distinctive facial form that is produced by a contraction of the muscles located bilaterally on both sides of the nose, which raise the upper lip and slightly broaden the wings of the nose,” they write. “Informally, the combined effect of pulling the nose upward and pulling the brow downward and inward may be characterized as an effortful scrunching together of the face.”

Last year Núñez and Cooperrider made a second trip to the Yupno Valley to get a better understanding of how often the Yupno use the S-action, and why.

For this study (which was funded by the National Geographic Society), the researchers designed a game in which two people must work together to put various colored blocks into a particular configuration. One person, the director, sees a photo of the target configuration and then instructs the other person, the matcher, on where to move the pieces to make them match the photo.

The game presents a tough communication challenge that players meet by using lots of demonstratives (“This one over here!”, “That one over there!”) and frequent pointing, Núñez says.

The Yupno tend to use nose pointing more than finger pointing, as Cooperrider reported at the Cognitive Science Society meeting in July. That sharply contrasts with what the researchers observed among college students playing the same game in Núñez’s lab at the University of California, San Diego. Westerners, in the researchers’ words, “stuck unwaveringly to index finger pointing.”

A California college student (top) and a Yupno man (bottom) play the communication game. From Cooperrider et al., 2014

OK, so culture seems to affect pointing behavior. But there are lots of ways in which Westerners are different from the Yupno. Why, I asked Núñez, should we care about pointing?

Pointing, he answered, seems to be a fundamental building block of human communication. Great apes are never seen pointing in the wild. And in human babies, pointing develops even before the first word.

If we want to understand why people point, then it’s critical to look at how all people point, not just the WEIRD (Western, educated, industrialized, rich, democratic) ones. “If we want to understand human evolution and human minds, we need to really look at variety,” Núñez says. And whatever theories researchers come up with to explain the evolutionary or neural roots of pointing, “they would have to be able to explain all of these different forms.”

The Yupno aren’t the only ones who point with their face. Lip pointing — in which protruding lips precede an eye gaze toward the area of interest — has been observed among people in Panama and Laos, as well as among groups in Australia, Africa, and South America. Head pointing, according to one study, happens frequently among people speaking Arabic, Bulgarian, Korean, and African-American Vernacular English.

Núñez speculates that early human ancestors used a wide variety of pointing gestures, and these have been shaped and pruned over time depending on the needs of a particular culture.

He doesn’t know why the Yupno prefer nose pointing, but speculates that it could be related to their penchant for secrecy. On the second day of his first visit, Núñez was walking through the woods with about 25 children behind him. He was struck by their quiet: For the entire 30 minutes, the children were whispering. He soon noticed that Yupno adults did it, too. “The amount of whispering that we observed in this community is unbelievable,” he says.

So perhaps the S-action is a way to convey meaning in a less showy way than extending an arm that everyone can see. “In this community, it’s very important to know who’s saying what to whom and about what and at what time,” he says. “There are a lot of cases where you don’t want to be seen saying something to somebody.”

But that’s just a hypothesis. Also mysterious: Why did Western culture lose its pointing variety?

Actually, Núñez muses, we may still be evolving on that front. Consider someone at a conference presenting information to several hundred people. What do they use? A laser pointer.

“If you want to call attention to something 25 meters away, no body part could be used to achieve that goal,” he says. “In our digital era we’re finding new ways to achieve the same fundamental goal that our ancestors had: How can I drag your attention to this particular thing?”


Brain Zaps Boost Memory

Researchers who study memory have had a thrilling couple of years. Some have erased memories in people with electroshock therapy, for example. Others have figured out, in mice, how to create false memories and even turn bad memories into good ones.

Today, another “No way, really?!” study gets added to the list. Scientists have boosted memory skills in healthy volunteers by zapping their brains with weak electromagnetic pulses.

The memory gain was fairly small — not enough for most of us to notice in our everyday lives, the researchers say. But even a modest improvement could be meaningful for people with conditions that damage memory, such as a stroke, heart attack, traumatic brain injury or Alzheimer’s disease.

“This memory network that we targeted has been shown to be impaired in a variety of disorders,” says lead investigator Joel Voss, a neuroscientist at Northwestern University.

If people with these disorders show similar memory gains in future experiments, the technology could be easily translated into the clinic, Voss adds. “It’s definitely the kind of thing that could be turned into an intervention that could be implemented in the hospital, and eventually maybe a doctor’s office.”

The technology, called transcranial magnetic stimulation or TMS, involves a wand emitting a changing magnetic field. When pressed against the skull, it induces changes in the electrical patterns of nearby neurons.

In the new study, published today in Science, the researchers used TMS to stimulate a spot in the outer layers of the brain called the lateral parietal cortex. Activity in this region is known to be strongly synchronized with the hippocampus, a region crucial for memory. (The hippocampus itself is too deep to be directly affected by TMS.)

To the person being stimulated, “it feels like somebody flicking the outside of your head with their finger,” says Voss, who has experienced it hundreds of times. “You can’t feel anything in terms of your thinking,” he says. “You don’t feel souped up afterwards.”

The researchers stimulated this area in 16 adult volunteers for five consecutive days. Each stimulation session lasted 20 minutes, during which volunteers would feel 2 seconds of pulsing, then 28 seconds of nothing, then 2 seconds of pulsing, and so on.
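
For anyone doing the arithmetic on that schedule, here is a quick calculation using only the timing described above (the pulse frequency isn’t specified here, so it is left out):

    SESSION_MINUTES = 20
    ON_SECONDS = 2      # pulsing
    OFF_SECONDS = 28    # rest
    DAYS = 5

    cycle_length = ON_SECONDS + OFF_SECONDS                    # 30 s per on/off cycle
    cycles_per_session = SESSION_MINUTES * 60 // cycle_length  # 40 cycles per session
    pulsing_per_session = cycles_per_session * ON_SECONDS      # 80 s of actual pulsing

    print(f"{cycles_per_session} on/off cycles per session")
    print(f"{pulsing_per_session} seconds of pulsing per session, "
          f"{pulsing_per_session * DAYS} seconds over {DAYS} days")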

Brain scans of the volunteers before and after their week of stimulation showed that the treatment significantly increased connectivity between the hippocampus and four other areas, including the lateral parietal cortex. So it seems that stimulating one part of the hippocampal memory network (the lateral parietal cortex) led to more robust connections in other parts of the same network.

After stimulation the volunteers also performed better on a difficult memory test. They saw a series of 20 photographs of faces and heard a random word, such as ‘chair’ or ‘hat’, paired with each. A few minutes later they were shown the same pictures and asked which word had been paired with each one. After five days of brain stimulation, the volunteers got roughly 13 pictures right, on average, compared with 10 before the treatment.
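
Expressed as accuracy on the 20-item test, those rounded averages work out like this:

    ITEMS = 20
    before_correct, after_correct = 10, 13   # approximate averages from the text

    print(f"before: {before_correct}/{ITEMS} = {before_correct / ITEMS:.0%}")
    print(f"after:  {after_correct}/{ITEMS} = {after_correct / ITEMS:.0%}")
    print(f"gain:   {(after_correct - before_correct) / ITEMS:.0%}")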

The researchers carried out some important control experiments. For example, they repeated the same procedure except zapped the motor cortex, which doesn’t have much synchrony with the hippocampus. This stimulation led to no noticeable differences in memory network connectivity or in memory test scores. (It did, however, lead to some noticeable changes in the volunteers’ behaviors. “It makes the hand and arm twitch 20 times a second, causing the arm to lift right up off the table,” Voss says. “It’s a little weird.”)

The work underscores the idea, long supported by animal studies, that memory is not just about the hippocampus, but the connections between the hippocampus and the brain’s outer layers.

“This study is exciting because it shows that the hippocampus doesn’t act alone — its connections with other brain regions are important for memory,” says Maureen Ritchey, a neuroscientist at the University of California, Davis, who was not involved with the work.

Ritchey and others say the study raises a host of intriguing questions to be tackled in future work. For example, Ritchey asks, would the same memory boost happen if a different part of the memory network is stimulated?

Bernhard Staresina, a neuroscientist at the University of Birmingham in the U.K., muses on a few more: How long do the effects last? And would the stimulation affect other kinds of memories, such as where you parked your car or whether you recognize a face? And perhaps most provocatively: Can the same approach be used not only to boost memories, but to weaken them?

“Sometimes negative experiences can exert lasting and debilitating effects, evidenced for example in post-traumatic memory disorder,” Staresina says. “Can [TMS] be used to disturb the memory network and thereby – perhaps complementary to psychotherapy – help alleviate detrimental effects resulting from unwanted memories?”

Voss and his colleagues are taking a step toward clinical translation by doing a similar study in elderly adults with mild cognitive impairment, a condition that often precedes Alzheimer’s disease. “They all have reduced connectivity of this network so hopefully this will be something that works,” he says.

When it comes to people with healthy brains, however, Voss says we should forget about TMS. “I don’t think this is the kind of thing you’d want to do as a study aid,” he says. That’s because memory skills are determined in large part by what we can’t control: our genes.

“The number one way to improve memory abilities,” he jokes, “is to find two people with really good memories and get them to have children.”


Peak Zone

In June 1958, 17-year-old Edson Arantes do Nascimento, better known as Pelé, arrived in Stockholm with the rest of the Brazilian national football team to play against Sweden in the World Cup Finals. Just before the game, as the peppy marching beats of the Brazilian national anthem rang out, Pelé’s thoughts wandered. He thought of his mother back home, too nervous to listen to the game on the radio. Then the whistle blew and the men were off. Pelé and his teammates were shocked by the skill of the Swedes, who scored their first goal within four minutes. Only then, he writes in his 1977 autobiography, did Pelé get his head in the game:

…Suddenly I felt a strange calmness I hadn’t experienced in any of the other games. It was a type of euphoria; I felt I could run all day without tiring, that I could dribble through any of their team or all of them, that I could almost pass through them physically. I felt I could not be hurt. It was a very strange feeling and one I had never felt before.

I came across this passage, believe it or not, in a study published this week in the journal Consciousness and Cognition. In it, Janet Metcalfe of Columbia University and her colleagues used Pelé’s words to define a somewhat fuzzy psychological concept: the feeling of being “in the zone.” You’re probably familiar with the feeling, especially if you’re an athlete, musician, artist, writer, or video-game aficionado. It’s the mental state of being focused intently on a specific task, a complete absorption that allows you to forget any self-consciousness and lose all sense of time. For me, it’s the (all too elusive) feeling that makes writing fun.

People have presumably been getting in the zone for millennia. But it didn’t get much scientific attention until 1990, when psychology researcher Mihaly Csikszentmihalyi published his now-famous book, Flow. In the book Csikszentmihalyi defines flow essentially the same way that Metcalfe defines being in the zone (and her study uses the terms interchangeably). Csikszentmihalyi proposed that flow happens when a person finds a task that is optimally challenging — not too hard, not too easy, just right.

Later studies by other researchers supported this idea. But this so-called ‘balance hypothesis’, according to Metcalfe, doesn’t account for variability within an individual. I can carry out a task with a constant level of challenge and sometimes feel in the zone and sometimes not. Metcalfe uses professional basketball players as a classic example. Most players show a consistent level of ability throughout the course of a season and face a steady onslaught of competition. And yet they’ll report being in the zone in some games and in a slump in others. Why?

In 1985 a different research group pointed toward an answer with its analysis of the so-called hot hand phenomenon. By analyzing shooting records of the Philadelphia 76ers and the Boston Celtics, the researchers showed that players are remarkably consistent over the course of a season. Nevertheless, in games when a player happens to make a string of baskets, he will say he was in the zone or had a “hot hand.” In reality these lucky strings are statistical flukes; the player is just as good as he ever is. Still, they lead him to perceive his own ability in a more positive light.
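
A quick simulation shows why such strings turn up even for a perfectly consistent shooter; the shooting percentage and game length below are made up for illustration.

    import random

    random.seed(2)
    P_MAKE = 0.5          # hypothetical constant shooting percentage
    SHOTS_PER_GAME = 25
    GAMES = 82

    def longest_streak(shots):
        # Length of the longest run of made shots in one game.
        best = run = 0
        for made in shots:
            run = run + 1 if made else 0
            best = max(best, run)
        return best

    season = [
        longest_streak(random.random() < P_MAKE for _ in range(SHOTS_PER_GAME))
        for _ in range(GAMES)
    ]
    print(f"longest string of makes in any game this season: {max(season)}")
    print(f"games with a streak of 5 or more makes: {sum(s >= 5 for s in season)}")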

Metcalfe thought the hot hand data offered a big insight into what causes flow. In what she called the ‘balance-plus hypothesis’, she proposed that feeling in the zone comes from two things: an optimal level of challenge (as Csikszentmihalyi suggested) and a high level of perceived performance.
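
One way to picture the balance-plus idea is with a toy model, which I should stress is my own illustration rather than Metcalfe’s actual formula: zone ratings follow an inverted U over challenge, shifted up or down by perceived performance.

    def predicted_zone(challenge, perceived_performance, optimal=0.5, weight=0.6):
        """Both inputs on a 0-1 scale; returns an arbitrary 'zone' score."""
        balance = 1.0 - 4.0 * (challenge - optimal) ** 2   # inverted U, peaks at the optimal challenge
        return balance + weight * perceived_performance

    for challenge in (0.2, 0.5, 0.8):
        low = predicted_zone(challenge, perceived_performance=0.2)
        high = predicted_zone(challenge, perceived_performance=0.9)
        print(f"challenge {challenge:.1f}: zone {low:.2f} (felt they did poorly) "
              f"vs {high:.2f} (felt they did well)")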


The new study tests this hypothesis. The researchers recruited 45 college students to play a Tetris-like computer game in which Xs and Os float down the screen. When the letters get to the bottom, participants are supposed to use a cursor to catch the Xs and avoid the Os. As in Tetris, when the letters come down slowly the game is fairly easy and when they come down fast it’s difficult.

On the first trial the computer would randomly choose a letter speed. On the very next trial, the participant would choose. Each trial lasted about 20 seconds, and afterwards participants were asked for two self-ratings: how much they felt “in the zone,” and how well they thought they had performed.

On the trials in which the computer selected the speeds, participants gave the highest zone ratings at moderate speeds, consistent with previous studies on the balance hypothesis. But the researchers also found that on trials in which the speed was the same — that is, the challenge to the participant was the same — they gave higher zone ratings after trials in which they had perceived their game performance to be better. (This was true even when they hadn’t, in fact, performed any better, just like the study of the basketball players.)

What’s more, the study found that both the level of challenge and perceived performance affected the participants’ choices. On the trials in which they determined the game speed, participants chose the speeds that aligned with maximum zone ratings. This underscores another important feature of flow: it’s intrinsically rewarding. Why else would they choose the speed of peak zone as their preferred level of play?

Metcalfe and her colleagues are interested in the study’s implications for everyday learning. Her past work has shown that when figuring out what to study during a work session, students tend to choose materials that are not too hard and not too easy. The research on flow suggests that this is because these materials are the most rewarding.

But the new study adds a second variable, suggesting that students can get even more reward when learning if they perceive their efforts in a positive way. And how can they do that, you ask? On that key question, unfortunately, the current study provides no answers.

Pelé vs. the Swedish goalkeeper at the 1958 World Cup final. Photo via Wikipedia

The science of zone is still in its infancy, for sure, and I wouldn’t put too much stock into any single new study, including this one. But Metcalfe’s ideas are intriguing enough to try to incorporate in my day-to-day habits. (Because, why not?) Perhaps there’s a way I can periodically give myself positive feedback while writing, for example. Or maybe I could try reminding myself of past successes just before starting something daunting. I’ll let you know how it goes. And in the meantime, I’d love to hear from anybody who has found their own tricks for reaching peak zone.

You might wonder, by the way, whether the Pelé anecdote I began with points to the opposite conclusion of Metcalfe’s. It was only after the Swedes scored a goal, after all, that his mind clicked into the zone. So, at that moment, wouldn’t he have perceived his performance negatively, not positively?

I won’t pretend to know what was in his mind at that moment. But from what he writes in his book, it seems that this goal actually reminded him, and his team, of how good they really were:

“For a moment we were stunned, deaf to the screams of delight from the stands, as if we couldn’t believe such a thing could happen to us,” Pelé writes. Their next emotion wasn’t panic, but excitement. It was “as if the Swedish goal was what Brazil had needed all along to pull us out of our slump.”