
This Is a Goodbye Post

Some bittersweet news: This is the last installment of Only Human. After two stimulating and fun years blogging at Phenomena, I’m starting a new job: building an investigative science desk for BuzzFeed News.

When I launched this blog I intended to write, as I put it in my first post, “stories about people — what we’re made of, what we do, why we do it.”

The human beat proved to be a bit too broad. This week, after looking back at all of my posts, I realized that Only Human has focused on a handful of subtopics: criminal justice, memory, obesity, dogs, kids, and the business of science. Below I’ve listed links to representative posts from each.

The best part about writing for Phenomena has been sharing this little corner of the big, bad internet with my smart, enthusiastic and frequently hilarious co-bloggers (and friends), Nadia Drake, Brian Switek, Ed Yong and Carl Zimmer. And our blogging overlord, Jamie Shreeve, couldn’t have been kinder or more supportive. Truly.

And readers! You, too, have been wonderful — curious, encouraging, inspiring, provocative, and (almost always) constructive.  The comment thread on my post about losing my dog (which, after more than a year, is still going strong!) has been one of the most rewarding experiences in my writing career.

I look forward to seeing how Phenomena continues to thrive and evolve, and I hope you’ll check out BuzzFeed’s emerging science coverage. Though I won’t be on these pages anymore, you can always find me on Twitter or by email. Happy New Year to all — and here’s to new beginnings.



Criminal Justice

Making Juries Better: Some Ideas from Neuroeconomics

How Many People Are Wrongly Convicted?

My DNA Made Me Do It?

Why Jurors and Policemen Need Stress Relief

Emotion Is Not the Enemy of Reason

The Other Polygraph


Memory

Shocking Memories Away

Brain Zaps Boost Memory

The Chatty Hippocampus

Drug Tweaks Epigenome To Erase Fear Memories

After Death, H.M.’s Brain Uploaded To the Cloud

And the Memory Wars Wage On


Obesity

The Obesity Apologists

The Humble Heroes of Weight-Loss Surgery

Expanding Guts in Pythons and People

Why Do Obese Women Earn Less Than Thin Women (and Obese Men)?


Dogs

On Losing a Dog

People and Their Pets

How Voices Tickle the Dog Brain

The Dog Mom’s Brain

People and Dogs: A Genetic Love Story


Kids

When Do Kids Understand Death?

When Do Kids Understand Numbers?

Math for Babies

When Do Kids Understand Infinity?

On Learning Animal-ness

How We Learn To See Faces

The Business of Science (and Science Journalism)

So Science Gets It Wrong. Then what?

So Science…Might Have Gotten It Wrong. Now What?

The Science of Big Science

Resveratrol Redux, Or: Should I Just Stop Writing About Health?

The Power of a Press Release


Why Do We See the Man in the Moon?

Take a look at the slideshow above. The photos depict, in order: tower binoculars, a tank tread, tree bark, headphones, a tray table, a toilet, eggs, and more tree bark. Yet I perceived every one of them as a face, and I bet you did, too.

That’s because, as I wrote about a few weeks back, most people are obsessed with faces. We see faces everywhere, even in things that are most definitely not faces. The most famous example is probably the man in the moon. The weirdest has got to be the person who reportedly paid $28,000 for an old grilled cheese sandwich whose burn marks outline the face of the Virgin Mary.

This phenomenon, called face pareidolia, isn’t new (Leonardo da Vinci even wrote about it as an artistic tool). But nobody knows much about how or why our brains create this illusion. This week I came across a fascinating brain-imaging study that begins to investigate these questions. The paper, published in the journal Cortex, is titled “Seeing Jesus in Toast,” and this fall it won an Ig Nobel Prize, awarded “for achievements that first make people laugh, and then make them think.”

The study hinges on a clever method for inducing pareidolia inside a brain scanner. The researchers showed 20 volunteers hundreds of “noise images” — squares composed of black, white, and gray blobs — and told them that half of the images contained hard-to-detect faces. (The participants had been through a training period in which they saw clearly defined faces in such images, so they were used to the act of searching for a face within the noise.) After seeing a noise image, the volunteer would press a button indicating whether she saw a face in it or not. Unbeknownst to the participants, none of the noise images contained any overt faces.

The scientists reasoned that trials in which participants reported seeing a face were examples of pareidolia. To confirm this, the researchers took all of the images in which a participant saw a face and combined them into an average image. They then subtracted from that the average of all of the images in which the same participant did not see a face. The result of that subtraction, somewhat amazingly, was a crude face shape, suggesting that participants really were experiencing face pareidolia.
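(A side note for the data-minded: what the researchers describe is a classic “reverse correlation,” or classification-image, analysis. Below is a minimal sketch in Python of how such an analysis might be coded; the function, array names, image size, and random responses are my own illustrative placeholders, not the study’s data or code.)

```python
import numpy as np

def classification_image(noise_images, saw_face):
    """Reverse-correlation sketch: average the noise squares from trials where
    a participant reported a face, minus the average from trials where she
    didn't. The residual pattern is what her visual system treated as a face.

    noise_images: float array, shape (n_trials, height, width)
    saw_face:     bool array, shape (n_trials,)
    """
    noise_images = np.asarray(noise_images, dtype=float)
    saw_face = np.asarray(saw_face, dtype=bool)
    face_mean = noise_images[saw_face].mean(axis=0)
    no_face_mean = noise_images[~saw_face].mean(axis=0)
    return face_mean - no_face_mean

# Toy demonstration with made-up data: 480 trials of 64x64 noise and
# random button presses. With real responses, plotting the result
# (e.g., with matplotlib's imshow) is what reveals the crude face shape;
# with random responses it stays featureless noise.
rng = np.random.default_rng(0)
images = rng.normal(size=(480, 64, 64))
responses = rng.random(480) < 0.5
ci = classification_image(images, responses)
```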

A week later, the same participants came back to the lab and went through a similar procedure. This time, though, they were told that half of the noise images they saw contained a hard-to-detect letter. In reality, none of them did — in fact, the images were exactly the same as those they saw the previous week.

All of these trials took place inside a brain scanner, allowing the scientists to compare which parts of the brain were activated during face pareidolia, letter pareidolia, and no pareidolia.

It turns out that a particular brain area — the right fusiform face area (FFA) — showed enhanced activation during face pareidolia but not letter pareidolia or no pareidolia. What’s more, the higher a volunteer’s activation in the right FFA, the more her subtracted composite image looked like a face, the study found.

This is an intriguing finding, the researchers say, because of what’s already known about the FFA. Previous studies had found that this area is specialized for processing true faces (hence the name). The fact that it’s also active for these imagined faces suggests that it’s involved in our more abstract conceptions of faces, as opposed to simply responding to the basic visual pattern of two eyes on top of a nose.

And why do our brains so easily create faces? There’s a compelling evolutionary explanation, the researchers write. “This tendency to detect faces in ambiguous visual information is perhaps highly adaptive given the supreme importance of faces in our social life.”

Regardless of what’s going on in my brain, there’s something delicious about looking at photos of face pareidolia, don’t you think? If you have your own examples, please share — the weirder the better!


The Power of a Press Release

In 2011, Petroc Sumner of Cardiff University and his colleagues published a brain imaging study with a provocative result: Healthy men who have low levels of a certain chemical in a specific area of their brains tend to get high scores on tests of impulsivity.

When the paper came out, thousands of people across England were rioting because a policeman had shot a young black man. “We never saw the connection, but of course the press immediately saw the connection,” Sumner recalls. “Brain chemical lack ‘spurs rioting’,” blared one headline. “Rioters have ‘lower levels’ of brain chemical that keeps impulsive behaviour under control,” said another.

“At the time, like most scientists, we kind of instinctively blamed the journalists for this,” Sumner says. His team called out these (shameful, really) exaggerations in The Guardian, and started engaging in debates about science and the media. “We quickly began to realize that everyone was arguing on the basis of anecdote and personal experience, but not evidence. So we decided to back off, stop arguing, and start collecting data.”

And the data, published today in BMJ, surprised Sumner. His team found that more than one-third of academic press releases contain exaggerated claims. What’s more, when a study is accompanied by an exaggerated press release, it’s more likely to be hyped in the press.

Because press releases are almost always approved by a study’s leaders before being distributed, Sumner’s findings suggest that scientists and their institutions play a bigger role in media hype than they might like to acknowledge.

“We’re all under pressure as scientists to have our work exposed,” Sumner says. “Certainly I think a lot of us would be quite happy not to take responsibility for that — just to say, ‘Well, we can’t do anything about it, if they’re going to misinterpret that’s up to them but it’s not our fault’. And I guess we’d like to say, it is really important and we have to do something more about it.”

Sumner and his colleagues looked at 462 health- or medicine-related press releases issued by 20 British universities in 2011. For each press release, the researchers also analyzed the scientific study it was based on, and news articles that described the same findings.

The researchers limited the analysis to health and medicine partly because (as I’ve written about before) these stories tend to influence people’s behavior more than, say, stories about dinosaurs or space. They focused on three specific ways that press releases can distort or exaggerate: by implying that a study in animals is applicable to people; by making causal claims from observational data; and by advising readers to change their behaviors (“these results suggest that aspirin is safe and effective for children,” say, or, “it’s dangerous to drink caffeine during pregnancy”).

More than one-third of the press releases did each of these things, and the misinformation showed up in the media, too. For example, among press releases that gave exaggerated health advice, 58 percent of subsequent news articles also contained exaggerated health advice. In contrast, among press releases that didn’t make exaggerated recommendations, only 17 percent of news articles did so. The researchers found similar trends for causal claims and for inferring that animal work applies to people.
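(To put those two percentages side by side: a quick back-of-the-envelope calculation, using only the figures quoted above for exaggerated health advice, gives a sense of the size of the association. This is just an illustration, not the paper’s actual statistical analysis.)

```python
# Back-of-the-envelope comparison, using only the two figures quoted above
# (exaggerated health advice; the BMJ paper reports fuller statistics).
with_exaggerated_release = 0.58     # share of news stories that exaggerated when the release did
without_exaggerated_release = 0.17  # share of news stories that exaggerated when the release didn't

ratio = with_exaggerated_release / without_exaggerated_release
print(f"Stories were roughly {ratio:.1f} times as likely to exaggerate")  # ~3.4
```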

“We certainly don’t want to be blaming press officers for this,” Sumner says. “They’re part of the system. The academics probably don’t engage as much as they should.”

I called Matt Shipman, a science writer and press information officer at North Carolina State University, to ask what he thought of the findings. Shipman has been a press officer for seven years, and before that he was a journalist. “The numbers are very powerful,” he said, and they underscore the importance of press releases at a time when reporters often don’t have the time or resources for thorough reporting. (Shipman has just signed on with Health News Review to rigorously evaluate the quality of health-related press releases.)

Shipman also brought up an important caveat. Because this study is observational, it doesn’t prove that press releases are themselves the cause of hype. “If a researcher is prone to exaggeration, which leads to exaggerated claims in a news release, the researcher is likely to also be prone to exaggeration when conducting interviews with reporters,” Shipman says. “The news release may be a symptom of the problem, rather than the problem itself.”

When he writes press releases, Shipman says he almost always begins by meeting with the researcher in person and asking him or her to explain not only the findings, but what work led to them, why they’re interesting, and what other experiments they might lead to. Then Shipman writes a draft of the release and sends it back to the researcher for approval. He asks the scientist to check not only for factual inaccuracies, but for problems in emphasis, context, or tone. Press officers at other institutions, however, write releases using far less rigorous methods, as I have learned by swapping stories with them over the years. And some press officers are judged by the quantity of stories that come out in big outlets, which naturally creates an incentive to make research seem newsworthy, even when it might not be.

“What I think is probably the case is that all of the variables at play here — the researchers, the press officers, and the journalists — are all humans,” Shipman says. “And all of them are capable of making mistakes, intentionally or unintentionally.”

So. Is there any concrete way to reduce those mistakes?

In an editorial accompanying the BMJ study, author and doctor Ben Goldacre makes two suggestions. First, the authors of press releases and the researchers who approved them should put their names on the releases, he writes. “This would create professional reputational consequences for misrepresenting scientific findings in a press release, which would parallel the risks around misrepresenting science in an academic paper.” That seems reasonable to me.

Second, to boost transparency, press releases shouldn’t only be sent to a closed group of journalists, Goldacre writes. “Instead, press releases should be treated as a part of the scientific publication, linked to the paper, referenced directly from the academic paper being promoted, and presented through existing infrastructure as online data appendices, in full view of peers.”

That sounds good, but “would require a significant shift in the culture,” according to Shipman. Press officers would have to be brought into the process much earlier than they are now, he says. And scientists would have to be far more invested in press releases than many of them are now.

I think we journalists need to own our portion of the blame in this mess, too. Let’s go back to Sumner’s 2011 brain-imaging study, for example. His university’s press release didn’t have any wild exaggerations, and it certainly didn’t make a connection between the research and the riots. That came from the journalists (and/or their editors).

“But that actually doesn’t happen very often, it turns out,” Sumner says. “Most of the time, the media stories stay pretty close to what’s in the press release.”

Which isn’t exactly great news, either.


Category Fail

I’ve written a lot of stories about autism research, and I’d say one of the biggest scientific developments in the past few years was the creation of ‘autistic’ mice. Researchers first found many, many genes associated with autism in people, and then created dozens of mouse models that carry one or more of those same genetic glitches.

In the fall of 2011, for example, one team debuted mice with extra copies of a gene called UBE3A. Approximately 1 to 3 percent of children with autism carry extra copies of the same gene. These mutant mice show little interest in social interactions, compared with controls. They also emit fewer vocalizations and repetitively groom themselves. This was heralded as something of an autism trifecta, as the animals mimicked the three ‘core’ symptoms of people with the disorder: deficits in social behaviors and in communication, as well as repetitive behaviors.

The same goes for mouse models based on environmental, rather than genetic, triggers. Mice whose mothers got an infection while pregnant end up with abnormal social interactions and vocalizations, and they repetitively bury marbles. Once again, the animals show all three “core” deficits, and are thus considered to be a valid model of autism.

There’s a nice and tidy logic to this approach, understandably appealing to neuroscientists. If a mouse model mimics the three behaviors used to define autism, then studying the cells and circuits of those mice could lead us to a better understanding of the human disorder. But there’s a big hole in that logic, according to a provocative commentary published by Eric London in this month’s issue of Trends in Neurosciences. The problem is that the symptoms of autism — like those of all psychiatric disorders — vary widely from one person to the next. So using the fuzzy diagnostic category of ‘autism’ to guide research, he writes, “is fraught with so many problems that the validity of research conclusions is suspect.”

London begins with a short history of the Diagnostic and Statistical Manual of Mental Disorders, or DSM, the book that since 1980 has dictated what collections of symptoms define one disorder or another. There’s nothing wrong with a categorical diagnosis, per se. It can have enormous explanatory power. If a doctor diagnoses you with strep throat, for example, you have a good idea of what that is (a bacterial infection) and how you might treat it (antibiotics). “A psychiatric diagnosis, by contrast, is rarely as informative,” London writes.

People diagnosed with schizophrenia, bipolar disorder, depression, or autism often don’t know what caused the trouble, and they struggle with unpredictable symptoms, ineffective treatments, and unpredictable responses to those treatments.

What’s more, most people who fall into the bucket of one psychiatric disorder also meet criteria for others. London cites some fascinating numbers: Some 90 percent of people with schizophrenia, for example, have another diagnosis as well. More than 60 percent of people with autism have another diagnosis, and one-quarter have two or more. “Autism is comorbidly present in over 50 specific diagnoses comprising other genetic and medical conditions,” London writes.

The three supposedly core behaviors of autism don’t correlate well with each other, he adds. In other words, many kids have just one or two of the three. Francesca Happé has conducted many studies suggesting that each of these symptoms is inherited independently, implying that each has its own, separate biological cause.

The danger of focusing on these three behaviors is that it might cause clinicians and researchers to overlook other symptoms that are common in people with autism. Many kids with autism have gastrointestinal issues, for example, and many show a range of motor problems, such as head lag, trouble sitting up, or a wobbly gait. And more than 80 percent of people with autism have anxiety, London notes. Mouse models of the disorder may have some of these problems, too, but researchers don’t usually test for them.

The DSM has tried to address some of these problems. Its latest version, released last year, defines autism with two criteria: social and communication deficits, and repetitive behaviors. But London doesn’t think that goes nearly far enough, for all the reasons outlined above. He proposes an even broader category of “neurodevelopmental disorder,” which would include more than 20 different DSM categories, including autism and schizophrenia. Just as they do today, clinicians could still focus on specific symptoms — whether sensory sensitivities, anxiety, psychosis, attentional problems, etc. — when deciding how to treat each person.

London’s commentary is only the latest in an old debate about diagnoses: Is it better to lump, or to split? Some scientists agree with him, others don’t, and I see merit in the scientific arguments on both sides. One point I think sometimes doesn’t get enough attention, though, is the social power of a diagnosis.

These labels carry meaning, for better or worse. For people with mysterious illnesses, such as chronic fatigue syndrome, a label can make them feel acknowledged and validated, or completely marginalized. Diagnoses for brain disorders, such as Asperger’s syndrome, can unite people under a common identity, or create dangerous societal stigma. Rational diagnostic categories are crucial for scientific progress, as London argues. But scientists would do well to remember that their labels also have lasting consequences outside of the lab.


“Malformed” Is the Best Brain Book I Read This Year (and Maybe Ever)

Of all the glossy photo books to showcase on your coffee table, your first choice might not be one of decaying human brains. But it should be, so long as that book is “Malformed.”

The first few pages give a sense of what you’re in for: hauntingly beautiful photographs of brains (see slideshow above). One photo shows a seemingly normal brain, plump and pink-gray, floating in cloudy liquid inside a glass jar. Another shows a thick slice of each hemisphere sitting on top of wet, white gauze. In another, three small brains are tucked inside a jar with a yellowing label noting the condition their donors were born with: Down’s Syndrome.

Photographer Adam Voorhes took these photos and dozens of others in a forgotten storeroom at the University of Texas at Austin. There, on a wooden shelving unit, sit about 100 brain specimens from people who once lived in the Austin State Hospital between the 1950s and 1980s. The hospital was once called the Texas State Lunatic Asylum, and its residents were (or rather, were considered to be) mentally ill.

These stunning photos of their brains make up the bulk of the book, but they are accompanied by several equally lively essays about the history of the collection, written by journalist Alex Hannaford. Together, the pictures and text tell two compelling stories. The first is the sordid history of this asylum and others like it, and how we’ve changed our approach to treating mental illness. The second story — one that, by the way, has no end in sight — is how the material goo of the brain interacts with the environment to shape our behavior.

The Austin State Hospital, formerly known as the Texas State Lunatic Asylum. Photo via Wikipedia.

The Texas State Lunatic Asylum was founded, in 1853, with a quarter million dollars from the federal government and a surprisingly progressive mandate. Its supporters believed that the best treatment for the mentally ill was fresh food, fresh air, and a little peace and quiet. So the asylum grounds, enclosed by a cedar fence, included vegetable gardens, fruit orchards, oak and pecan trees, and even a string of lakes. Patients could roam as they pleased.

Within two decades, though, this idyllic picture began to crack. “Overcrowding, illness, escape and even some fairly horrific suicide attempts — all were documented in the pages of the local paper,” Hannaford writes.

Some of the most interesting parts of the book are the descriptions of these early asylum patients. Many, as you might expect, were diagnosed with insanity or mania. Others had conditions that we don’t typically associate with mental illness today, such as epilepsy, stroke, Alzheimer’s, and Down Syndrome. Still other diagnoses were, at least to me, wholly unexpected: love, masturbation, menopause, “excessive study,” “religious excitement,” and even “melancholia caused by deranged menstruation.”

None of these early patients had their brains removed at death. The brain collection began in the 1950s, apparently at the whim of the hospital’s pathologist, Coleman de Chenar. When he died, in 1985, six major scientific institutions, including Harvard Medical School, wanted his brain collection. It ended up at the University of Texas.

Why such interest in these homely lumps of dead tissue? Because of the tantalizing idea that brains can reveal why a sick person was sick. In some cases, gross anatomy indeed provides answers, albeit vague ones. There are many pictures in “Malformed” showing brains with obvious abnormalities, such as an asymmetrical shape, dark, blood-filled grooves, or a complete lack of folding.

It’s satisfying to think, ‘A ha, that’s why they were disturbed.’ Hannaford tells a fascinating story, for example, about a man named Charles Whitman. One day in 1966, the 25-year-old engineering student at the University of Texas went on a shooting rampage, killing 16 people and wounding 32 before being shot by police. In a note he left behind, Whitman asked to be autopsied, “urging physicians to examine his brain for signs of mental illness,” Hannaford writes. De Chenar performed the autopsy. When examining the killer’s brain, the doctor found, right in the middle, a 5-centimeter tumor.

A later report concluded that this tumor, which was highly malignant, “conceivably could have contributed to his inability to control his emotions and actions.” On the other hand, Whitman also allegedly suffered from child abuse and mental illness. So there’s no way to know, for sure, what caused what.

And that’s the case for all postmortem brain investigations, really. A couple of years ago I wrote a story for Scientific American about researchers in Indiana who are doing DNA analyses on century-old brain tissue that once belonged to mental patients. It’s unclear whether the DNA will be useable, after all this time. Even if it is, the researchers will be left with the unanswerable question of cause and effect. Did a particular genetic glitch cause the patient to have delusions? And how many healthy people are walking around right now with slightly abnormal brains that will never be subjected to scientific scrutiny?

This sticky issue, by the way, persists whether the person in question is mentally ill or mentally exceptional. Earlier this year I wrote about Einstein’s brain, which was stolen at autopsy, carved into 240 pieces, and (eventually) distributed to several laboratories. These researchers have published half a dozen studies reporting supposedly distinctive signatures of Einstein’s brain. “The underlying problem in all of the studies,” I wrote in that piece:

“…is that they set out to compare a category made up of one person, an N of 1, with a nebulous category of ‘not this person’ and an N of more than 1. With an N of 1, it’s extremely difficult to calculate the statistical variance — the likelihood that, for example, Einstein’s low neuron-to-glia ratio is real and not just a fluke of that particular region and those particular methods. Even if the statistics were sound, you’d still have the problem of attributing skills and behaviors to anatomy. There’s no way to know if X thing in Einstein’s brain made Einstein smart/dyslexic/good at math/you name it, or was just an X thing in his brain.”

“Malformed” is able to make that point more subtly and beautifully than anything else I’ve read. By looking at these brains, each photographed with such care, the irony is obvious: At one point not so long ago, we were willing to take away a person’s freedom — perhaps the ultimate sign of disrespect — for innocuous behaviors considered “abnormal.” And yet, at the same time, we went to great lengths to remove and preserve and label and, yes, respect these people’s dead brain tissue.

It would be wonderful if these specimens someday make a solid contribution to the science of mental illness. If they never do, though, they’re still valuable. They tell a story of a dark chapter in our history — one that I hope is never re-opened.


Personhood Week: Why We’re So Obsessed with Persons

It’s Personhood Week here on Only Human. To recap the week: Monday’s post was about conception, and Tuesday’s about the age of majority. Wednesday’s tackled DNA and dead bodies, and yesterday I took yet another opportunity to opine about the glories of pet-keeping. Today’s installment asks why we’re so fixated on pinning down the squishy notion of personhood.

I’d love to hear about how you guys define personhood, and why. Feel free to leave comments on these posts, or jump in to the #whatisaperson conversation on Twitter.


People have been trying to define personhood for a long time, maybe since the beginning of people. The first recorded attempt came from Boethius, a philosopher from 6th-Century Rome, who said a person was “an individual substance of rational nature.” Fast-forward a thousand years and Locke says it’s about rationality, self-awareness, and memory. Kant adds that humans have “dignity,” an intrinsic ability to freely choose. In 1978, Daniel Dennett says it’s intelligence, self-awareness, language, and being “conscious in some special way” that other animals aren’t. The next year Joseph Fletcher lays out 15 criteria (!), including a sense of futurity, concern for others, curiosity, and even IQ.

“Personhood is a concept that everyone feels they understand but no one can satisfactorily define,” wrote Martha Farah and Andrea Heberlein in a fascinating 2007 commentary for The American Journal of Bioethics. Farah and Heberlein are neuroscientists, and they note that neuroscientific tools may be useful for investigating some of the psychological concepts — reason, self-awareness, memory, intelligence, emotion — historically associated with personhood. But even if we had complete neurological understanding of these skills, they say, it would be no easier to define what a person is and isn’t.

But neuroscience does have something interesting to contribute to this discussion: a provocative explanation for our perennial obsession with personhood. “Perhaps this intuition does not come from our experiences with persons and non-persons in the world, and thus does not reflect the nature of the world,” Farah and Heberlein write. “Perhaps it is innate and structures our experience of the world from the outset.” In other words, maybe we’re born with the notion of personhood — and thus find it everywhere we look.

As evidence of this idea Farah and Heberlein turn to the study of the so-called “social brain,” regions of the brain that help us navigate life in our very social world.

Take faces. We know that certain brain circuits are responsible for recognizing faces because in some people those structures don’t work properly: People with a condition known as prosopagnosia have no trouble distinguishing between complex objects, and yet they can’t tell one face from another. And some people have the opposite problem: They can’t tell objects apart but have no trouble recognizing faces. Almost 20 years ago, scientists discovered a region of the brain, called the fusiform face area, that is selectively activated when we look at faces.

Farah and Heberlein go on to list many other brain areas tied to people-identification. Looking at bodies (but not faces) activates another part of the fusiform gyrus, and watching body movement (made up only of points of light, and not actual body parts) activates the superior temporal sulcus. The temporoparietal junction, meanwhile, seems to support theory of mind, our ability to think about what other people are thinking.

The neuroscientists argue that this network of people-related regions has “a surprising level of automaticity,” meaning that it’s activated regardless of whether we’re consciously thinking about people. Social brain areas are activated not only when we look at realistic photographs of faces or bodies, but when we look at smiley faces or stick figures. Some of us might see a man in the gray craters of the moon, or the face of the Virgin Mary in the burned folds of a grilled cheese sandwich. We automatically assign agency to things as well. In one famous experiment from the 1940s, researchers created a simple animation of two triangles and a circle; watching it, you can’t help but think that the larger triangle is bullying the poor circle.

The social brain also has “a high degree of innateness,” the scientists write, meaning that it’s switched on even in newborns, who have obviously had scant real-world experience with people. A study in 1991 found, for example, that babies just 30 minutes old are more likely to look at face-like shapes than other kinds. (You can see those shapes for yourself in this piece about illusions I wrote for Nautilus.) Some research on autism, a strongly genetic condition, also bolsters the idea of the innateness of the social brain. Many people with autism prefer to interact with objects rather than people, and have difficulty processing facial expressions. People with autism also show differences in activity in the “social brain” regions mentioned above.

At the end of their commentary, Farah and Heberlein make an interesting distinction between persons and plants. Science, they say, offers an objective definition of plants: they are organisms that get their energy through photosynthesis. But science has found no such criteria for personhood. Why? “We suggest that this is because the category ‘plant’ has a kind of objective reality that the category ‘person’ does not,” they write.

Let’s assume for a moment that these neuroscientists are right — that the distinction between persons and non-persons is not something that exists in the world outside of our minds. Does that mean I’ve just wasted a week going on and on about this illusion?

Here’s why I think the personhood notion is so valuable. We are people. Our people-centric minds evolved for a reason (namely, our species depends on social interactions) and our people-centric minds dictate how our society works. So maybe personhood is not based in reality. It’s the crux of our reality.


Personhood Week: People and Their Pets

It’s Personhood Week here on Only Human. Today’s installment is about people and our fur babies. Monday’s post was about conception, Tuesday’s about the age of majority, and yesterday’s about identifying dead bodies. Tomorrow’s post, the last in the series, goes to neuroscientists who argue that “personhood” is a convenient, if illusory, construction of the human brain.

I’d love to hear about how you guys define personhood, and why. Feel free to leave comments on these posts, or jump in to the #whatisaperson conversation on Twitter.


I would be remiss, in a series about personhood, not to mention animal rights and the notion of non-human personhood. It’s incredibly interesting.*  And yet… it’s not an issue that I can think about with much clarity or insight. When it comes to animals, my choices are full of contradictions and hypocrisies. I eat meat, wear leather, and endorse the use of animal models in medical research. On the other hand, I’m totally taken with the growing body of research demonstrating that non-human animals have cognitive skills once thought to be uniquely human. I believe animal cruelty is wrong and, as regular readers know all too well, I consider my dog part of the family.

So it’s that last thing I’m going to discuss here: pet-keeping. Nearly two-thirds of American families allow animals (animals!) to live with them. People are (arguably, more on that below) the only species to keep pets. Why do we bother? And what does our love of pets say about our personhood?

Scientists have proposed many different theories, as Harold Herzog outlines in the current issue of Animal Behavior and Cognition. Herzog, a professor of psychology at Western Carolina University in North Carolina, has been studying our relationship with animals for decades. His theory, which I find quite compelling, is that our love of pets comes from an innate predisposition to form emotional attachments, combined with rapid and powerful cultural evolution.

Herzog’s paper first addresses the question of whether humans are the only species to keep pets. You might assume we’re not, especially if you’ve seen the adorable “animal odd couple” YouTube clips like this one.

The thing is, Herzog argues, all of the non-human examples of inter-species attachments happen in households, zoos or wildlife parks. In other words, they happen when humans are around to facilitate. There’s one notable exception: A group of free-ranging capuchin monkeys in Brazil apparently adopted an infant marmoset named Fortunata. “The capuchins carried the marmoset around, played with it, and frequently fed the much smaller monkey,” Herzog writes. This illustrates that some non-human animals have the emotional capacity and care-taking skills to become attached to a member of another species.

Even if other species do keep pets, it’s pretty rare. Humans, in contrast, are downright pet-crazy. Why?

One set of theories says pet-keeping is an adaptive trait, meaning that it enhances our evolutionary fitness. Some studies have proposed, for example, that pet ownership leads to better health — everything from increasing the odds of surviving a heart attack to boosting mood and self-esteem. But this idea (no matter how much we pet owners would like to believe it) doesn’t have a lot of scientific support. In fact, some studies have shown that people who own pets have a higher risk of mental health problems, such as depression and panic attacks. What’s more, our pets can spread diseases through mites, ticks, fleas, worms, and various viruses. And — this really floored me — every year more than 85,000 people get seriously injured after tripping over their pets.

In the same evolutionary vein, some researchers argue that pets improve our ability to attract mates or to care for our (human) children. There is a bit of evidence for the former. In a 2008 study, for example, a man approached random women and asked them out on a date. When the man took a dog along with him, his success rate increased three-fold. The idea that having a pet increases your ability to empathize with and take care of your children is also plausible but, according to Herzog, has not been formally tested.

Another set of pet-keeping theories says that the trait is an evolutionary byproduct. It could be, for example, that we love pets because we’re attracted to baby-like faces, or because of our strong parental urges, or because of our tendency to anthropomorphize animals.

Each of these ideas may have some merit, Herzog says, but he doesn’t believe any of them adequately explain the pet-keeping phenomenon, for several good reasons.

If pet-keeping were a purely (or even largely) biologically driven trait, it would be difficult to explain why its popularity has spiked in the last 200 years, and particularly since World War II — a tiny blip on the timeline of human evolution. As a rough marker of this change Herzog turns to Google Ngram, a tool that tracks the frequency of words published in books. If you put the word “pet” into Google Ngram, you’ll see a sharp rise since about 1960.

Similarly, if pet-keeping were biological you’d expect all human cultures to do it. While it’s true that most human cultures have pets in their home, the way they interact with them is remarkably variable. Herzog cites a study published in 2011 comparing pet-keeping practices in 60 societies around the world. The study found a large variety of species of pets, including some that seem quite odd from a Western perspective: ostriches, tortoises, bears, bats. The most common pet species is the dog, but even then, people are very different in the way they keep dogs.

Of the 60 cultures surveyed, 53 have dogs, but only 22 consider dogs to be pets. Even then, pet dogs are usually used for specific purposes such as hunting or herding. Just seven cultures regularly feed their dogs and let them live inside the house, and only three cultures play with dogs. The study’s general conclusion, as Herzog puts it: “The affection and resources lavished upon pets in the United States and Europe today is a cultural anomaly.”

Herzog’s own studies have measured cultural influences by tracking the popularity of dog breeds over time. He and his colleagues analyzed records from the American Kennel Club from 1927 to 2005. It turns out that, just like baby names, chart-topping songs, and other examples of popular culture, dog breed preferences follow a specific pattern, with most people choosing among a small number of breeds. The AKC recognizes 160 breeds, and yet nearly two-thirds of all registrations went to just 15 of them.

What’s more, breed popularity can shift rapidly due to cultural whims. After the Disney movie 101 Dalmatians was re-released, in 1985, Dalmatian registrations went up five-fold. And Old English Sheepdogs saw a 100-fold increase after the 1959 movie The Shaggy Dog. (Herzog notes, however, that not all popular dog references spur people to own them: The incredibly popular Taco Bell ad campaign had no effect on Chihuahua registrations.)

It’s hard for me to pinpoint my own motivations for having a dog. It’s a lot of extra work, not to mention money and time. Then again, I don’t really need an evolutionary explanation. All I can say is it makes me happy.

*If you’re interested in learning more about the animal personhood movement/various philosophies of animal rights, I’d recommend:

—Charles Siebert’s story in The New York Times Magazine about Steven Wise, a lawyer who has filed lawsuits on behalf of several chimpanzees to contest their confinement in cages. Wise’s first case focused on a chimp named Tommy who is, as Siebert puts it, “the first nonhuman primate to ever sue a human captor in an attempt to gain his own freedom.”

—Virginia Morell’s National Geographic Q&A with Lori Marino, a scientist-advocate who studies the cognitive abilities of dolphins and other animals. “Person doesn’t mean human,” Marino says in the piece. “Human is the biological term that describes us as a species. Person, though, is about the kind of beings we are: sentient and conscious. That applies to most animals too. They are persons or should be legally.”

—A fascinating back-and-forth conversation between ethicist Peter Singer and Judge Richard Posner published in Slate in 2001.

—The Wikipedia entry for animal rights. Lots of philosophical mumbo-jumbo, but interesting. Prepare to stay there awhile.


Personhood Week: When Dead Bodies Become Dead People

It’s Personhood Week here on Only Human. Today’s installment is about what it means to give a name to a dead body. Monday’s post was about conception, and yesterday’s about the age of majority. Tomorrow goes to non-human animals, and Friday to neuroscientists who argue that “personhood” is a convenient, if illusory, construction of the human brain.

I’d love to hear about how you guys define personhood, and why. Feel free to leave comments on these posts, or jump in to the #whatisaperson conversation on Twitter.


On the night of July 19, 1916, halfway through the First World War, troops from Australia and Great Britain attacked German positions in Fromelles, in northern France. The Germans were prepared. The battle ended the next day, after thousands of Brits and Aussies had died. It was, according to a magazine produced by the Australian government, “the worst 24 hours in Australia’s entire history.”

In 2002, an Australian amateur historian named Lambis Englezos visited Fromelles and noticed that the number of graves was far fewer than the number of soldiers reported missing from the battle. He suspected that the Germans had buried many in mass graves, and over the next few years he convinced reporters at 60 Minutes Australia of his theory. The story’s eventual broadcast, along with evidence from Red Cross records and aerial photographs, led to an official investigation. In 2008 and 2009, archaeologists dug up five mass graves, containing 250 bodies.

Then came the question of identifying them. After more than 90 years, standard identification methods — fingerprints, medical and dental records — weren’t available. But there was DNA, deep inside the bone marrow. So the researchers extracted samples from the remains and then re-buried each body in its own grave.

This launched the Fromelles Identification Project (FIP), a joint effort by the Australian and British governments to find living descendants of the dead soldiers and convince them to donate their own DNA for matching. (The Y chromosome, passed through male descendants, changes very little from one generation to the next; same goes for mitochondrial DNA that is passed down through the female line.) So far 1,000 Australians have donated DNA to the effort, and 144 soldiers have been identified by name. The scientific, ethical and privacy concerns surrounding this project are fascinating. But before digging in to those, I think it’s important to address why people (via their governments) are willing to put so much effort and resources into identifying dead bodies in the first place.

The first answer is practical. Surviving family members often need to confirm a dead person’s identity before having access to their estate, pension, life insurance policy, and so on. Identification is also thought to ease families’ emotional toll. As bioethicist Jackie Leach Scully writes in a study published earlier this year, “the certainty of death is generally thought to be better than the ongoing emotional anguish of fearing but not knowing.”

It’s hard to see how these benefits would be valid for family members 90 years later, though. Nobody’s still settling their great-great grandfather’s estate, after all, and they’re not likely to be mourning his death, either.

But the justification goes beyond practical concerns. In To Know Where He Lies, a book about unidentified bodies from the Bosnian War in the 1990s, anthropologist Sarah Wagner explains how DNA identifications can have larger, more abstract consequences for the community (emphasis mine):

DNA became the critical, entrusted, indeed indispensable proof of individual identity for the thousands of sets of nameless mortal remains… Matching genetic profiles promised to reattach personhood (signposted by a name) to physical remains and, thereby, to reconstitute the identified person as a social — and political — subject.

The FIP smartly decided to conduct a social, ethical, and historical study along with its DNA efforts. Scully’s new paper, published in New Genetics and Society, gives pilot data showing how FIP participants described their motivations for getting involved. It’s a small study — based on email responses of several dozen participants and in-person interviews with five of them — but fascinating all the same. Some of these responses have challenged my own conceptions about the value of historical research, not to mention what it means to be a person.

In her initial emails to FIP participants, Scully simply asked whether they would be interested in being interviewed in the future. She received 116 responses. Of these, about one-third provided additional information about why they wanted to get involved. “These were more than just curiosity about a long-lost relative or interest in being part of a high profile and prestigious national project,” Scully writes. “Many email respondents indicated a powerful emotional investment.”

For instance, one woman said that after she learned she was a direct female descendant of one of the soldier’s sisters, “I literally jumped around the living room for several minutes.” Another participant said, during an in-person interview, “It was like winning the lottery as far as I was concerned. Skin was tingling, hairs standing up.”

Of all of the responses she received, Scully notes, about half said that part of their motivation involved “looking after” or “caring” for the dead or for the family the dead left behind. They said this even when they didn’t believe in any kind of afterlife. Here’s one interview exchange:

Participant: I’m doing it for George.

Interviewer: How does that work?

Participant: I dunno! I’m a bit of an agnostic, I don’t believe in life after death, you know.

That’s a bit hard for my mind to understand. Scully says it might be about respecting the memory of the deceased, which is still alive in the minds of other people. Identifying the body by name, she says, might help ensure “that the biography through which he is remembered has the ending that casts the best backward light on the life that has gone.”

Other participants are involved not to care for the dead, per se, but to honor relatives they had real relationships with. “Two interviewees said that it was ‘for my mother,’ who in both cases was a younger sister of the dead man,” Scully notes. Another participant, after learning of the project, reached out to their father following a 20-year absence, because it was he who first recounted the story of the dead soldier. This person told Scully that the FIP “helped to bridge a gap between my father and I while, at the same time, allowing us to bridge a gap with our family’s past.”

Soliciting DNA samples for body identifications also raises significant ethical and privacy concerns, similar to those that come up during any kind of genetic genealogy project. Genetic comparisons among family members might reveal long-buried family secrets, such as inaccurate paternity, that can cause unexpected emotional turbulence. Many people may be willing to take that risk, but they should at least know about it before getting involved. Unfortunately, none of the five participants Scully interviewed remembered these issues being mentioned before they signed on.

Then there are the related issues of privacy and consent: What if one family member wants to participate but another, sharing some of the same DNA, does not? A few of the email respondents told Scully that some of their relatives were “skeptical or hostile” about their involvement.

What if relatives have differing opinions about how long the identifications must go on? After the World Trade Center attacks in 2001, for example, body parts were strewn everywhere. Some families wanted to know every time a sample was identified as their relative, whereas others found these constant updates upsetting. “The ability to ‘give a name to’ the tiniest scraps of tissue is a new problem that is unique to DNA-based identification,” Scully notes.

And finally, there’s the more abstract concern that, to my mind, may in the long run pose more problems than the rest: that DNA identification will lead to what Scully calls the “geneticization of family.” The partner of a missing man is not (usually) genetically related to him. The same goes for his adopted children, or children his partner conceived using a sperm donation. Does that mean these relationships are any less familial, or any less important?

In this (albeit subtle) way, DNA identification may be contributing to our society’s growing obsession with biological identity, with biological personhood. This technology, Scully writes, is “likely to reinforce further the status of the genome as the most important, or even only, constituent of both individual and family identity.”


Personhood Week: Do Kids Count?

It’s Personhood Week here on Only Human. Today’s installment is about young people: When do they get autonomy? When do their decisions count?

Yesterday’s post was about conception, and tomorrow’s will be about the identification of dead bodies. Thursday goes to non-human animals, and Friday to neuroscientists who argue that “personhood” is a convenient, if illusory, construction of the human brain.

I’d love to hear about how you guys define personhood, and why. Feel free to leave comments on these posts, or jump in to the #whatisaperson conversation on Twitter.


In 1891, the U.S. Supreme Court heard a case about negligence that was really about personhood.

The Union Pacific Railway Company was asking the court to force a woman, Clara Botsford, to submit to a surgical examination. Why? Botsford was suing the company for negligence related to a top bunk in one of its sleeping cars. The bunk fell while she was under it, “rupturing the membranes of the brain and spinal cord” and causing “permanent and increasing injuries.” The company wanted its own doctors to examine Botsford and confirm the diagnoses, but she did not consent to the examination.

The Supreme Court ruled against the company. As Justice Horace Gray wrote in his opinion (emphasis mine):

“No right is held more sacred or is more carefully guarded by the common law than the right of every individual to the possession and control of his own person, free from all restraint or interference of others unless by clear and unquestionable authority of law.”

This is just one of many examples from U.S. case law illustrating that a big part of personhood is autonomy. In our society, people are supposed to have control over their own bodies and make independent decisions about their lives. This idea drives the modern medical concept of informed consent, in which an individual is supposed to give permission before receiving medical therapies or participating in a research study.

That autonomy principle, though, gets sticky when applied to a subset of humans that most of us would surely call persons: minors.

Take a case from the mid-1970s, when two children living in a state mental hospital in Georgia filed a class-action lawsuit against state officials. Their guardians had committed them against their will, which the kids claimed was a violation of the Due Process clause of the 14th Amendment. The case, known as Parham v. J.R., went to the Supreme Court, which ultimately ruled against the kids. “The law’s concept of the family,” the opinion reads, “rests on a presumption that parents possess what a child lacks in maturity, experience, and capacity for judgment required for making life’s difficult decisions.”

Neuroscience research in the last couple of decades bolsters the idea that the teenage brain is not fully mature. The front of the brain — including the areas important for planning, executive function, and inhibition — develops last. This may be why teenagers tend to be risk-takers, getting in car accidents, taking dangerous drugs and having unprotected sex. They’re also not the best decision-makers, prioritizing short-term over long-term consequences, and often succumbing to peer pressure. (David Dobbs wrote beautifully about this line of research in a 2011 feature in National Geographic.)

Science doesn’t dictate law, of course. If it did, then we might not grant people full rights until around age 25, when the brain has fully developed. Instead the U.S. puts the age of majority at 18. After that, you become a person with the right to make your own choices (with the notable exception of drinking alcohol). As law professor Jonathan F. Will wrote in a 2006 paper: “In the eyes of the law, there is something magical about the stroke of midnight on the eve of one’s eighteenth birthday.”

The medical profession, however, has shown a growing respect for the rights of minors since the 1970s. Part of its rationale is practical: There are some important medical conditions that adolescents might not want to share with their parents. For example, most states today allow minors to independently seek treatment for sexually transmitted diseases, birth control, drug addiction, and sexual abuse. Similarly, most states allow pregnant girls under 18 to make decisions regarding the pregnancy.

The changing stance of the medical profession is also based on the bioethical principles of autonomy and informed consent. In order for an adult to give informed consent, she must do so voluntarily, fully understand the nature of the treatment and its possible consequences, and be deemed “competent” to make decisions.  And what is competence? As Will explains in that 2006 paper, competence means the person has the cognitive ability to communicate, understand, reason, deliberate, and “apply a set of values” to the decision at hand.

Historically, the law has assumed that adults have this competence and minors don’t. That began to change in 1987, when the Tennessee Supreme Court heard a medical malpractice case surrounding a girl with back pain. Without her parents’ permission, Sandra Cardwell, age 17 and 7 months, went to an osteopathic doctor and received neck, spine and leg manipulations. That treatment not only didn’t help, but the doctor missed the real problem, a herniated disc. So Cardwell and her parents sued the doctor, claiming that he had failed to get parental consent.

The Court ruled in favor of the doctor and specifically addressed what’s now known as the “mature minor doctrine.” It says, essentially, that some minors should have medical autonomy. As the Court stated:

“Whether a minor has the capacity to consent to medical treatment depends on the age, ability, experience, education, training, and degree of maturity or judgment obtained by the minor, as well as upon the conduct and demeanor of the minor at the time of the incident involved.”

If it sounds like a gray area, that’s because it is. When I was digging into the medical literature I found papers with wildly different opinions. This one, published in 1975, favors abandoning parental consent, whereas this one, to be published next month, argues that minors should never be examined without their parents. So I asked Laura Hercher, a genetic counselor at Sarah Lawrence College, whether she thought there was any consensus on the issue of minors and informed consent.

As it turns out, Hercher is quite familiar with these issues as they apply to genetic testing; she chaired the National Society of Genetic Counselors group that wrote a position statement on genetic testing of minors. As far as explicit rules go, she told me, it’s clear-cut: Minors cannot consent until they’re 18, and after that, parents have no say. In practice, though, it’s a lot fuzzier.

“While minors can’t consent before 18, they can provide assent, and there is a well established consensus that assent should be sought when possible — with an increasing emphasis on assent as the child grows older,” she says.

What that leads to in practice, she adds, is a tendency toward non-action: Doctors will almost never act without parental approval, and yet at the same time, they’re also reluctant to act if an adolescent doesn’t consent. (What age counts as an adolescent is yet another wrinkle, but Hercher says consent concerns usually begin in the early teens.) “In effect, if either party says no, it often blocks treatment,” she says. “Veto power, like the UN Security Council.”


Personhood Week: Conception Is a Process

Earlier this month, voters in two U.S. states, Colorado and North Dakota, considered new laws that would bolster the legal rights of a fetus before birth. Neither of these ballot initiatives passed, but they’re part of a “personhood movement” that’s been gaining notoriety among pro-life advocates since about 2008. Reading about this movement in the press (Vox has a great overview) has made me wonder about the slippery, contentious, and profound meaning of “personhood.”

The Wikipedia page for personhood gives this definition: “Personhood is the status of being a person.” Right-o.

The page for person isn’t much clearer: “A person is a being, such as a human, that has certain capacities or attributes constituting personhood, which in turn is defined differently by different authors in different disciplines, and by different cultures in different times and places.”

I’ve chosen five personhood perspectives to write about this week. Today’s installment is all about conception (another fuzzy concept). Tomorrow I’ll try to tackle the transition from child to adult. Wednesday I’ll ask whether dead bodies are people. Thursday goes to non-human animals, and Friday to neuroscientists who argue that “personhood” is a convenient, if illusory, construction of the human brain.

I’d love to hear about how you guys define personhood, and why. Feel free to leave comments on these posts, or jump in to the #whatisaperson conversation on Twitter.


I went to a Catholic high school, where I was taught in religion class that life begins at conception. I don’t remember my teacher getting into the biological details, but we all knew what she meant: Life begins at the moment that an earnest sperm finishes his treacherous swimming odyssey and hits that big, beautiful egg.

That’s what many Christians believe, and it’s also the fundamental idea behind the personhood movement. The website of Personhood USA, a nonprofit Christian ministry, highlights this quote by French geneticist Jérôme Lejeune: “After fertilization has taken place a new human being has come into being. It is no longer a matter of taste or opinion…it is plain experimental evidence. Each individual has a very neat beginning, at conception.”

That’s not a common belief among biologists, however. Scott Gilbert of Swarthmore calls the conception story a “founding myth,” like The Aeneid. As he jokes in a popular lecture, “We are not the progeny of some wimpy sperm — we are the progeny of heroes!”

In reality, conception — or more precisely, fertilization — is not a moment. It’s a process.

After the sperm’s DNA enters the egg, it takes at least 12 hours for that DNA to find its way to the egg’s DNA. The sperm and egg chromosomes condense in a coordinated dance, with the help of lots of protein filaments called microtubules, eventually forming a zygote. But a true diploid nucleus — that is, one that contains a full set of chromosomes from each parent — does not exist until the zygote has split into two cells, about two days after the sperm first arrives.

So is that two-cell stage, then, at day two, when personhood begins?

It could be, if you define personhood on a purely genetic level. I have a hard time doing so, though, because of twins. Identical twins share exactly the same genome, but are obviously not the same person.

Based on this logic, some biologists push back the start of personhood to about 14 days after the sperm enters the egg, a stage called gastrulation. This is when the embryo transforms from one layer into three, with each layer destined to become different types of tissues. It’s only after this stage that you could look at an embryo and say definitively that it’s not going to split into identical twins (or triplets or even quadruplets).

Image via Wikipedia: Gastrulation occurs when a blastula, made up of one layer, folds inward and enlarges to create a gastrula.

So is the 14th day of gestation, then, when personhood begins?

Some doctors would say no: You also have to consider the fetal brain. We define a person’s death, after all, as the loss of brain activity. So why wouldn’t we also define a person’s emergence based on brain activity? If you take this view, Gilbert notes, then you’ll push personhood to about the 28th week of gestation. That’s the earliest point when researchers (like this group) have been able to pick up telltale brain activity patterns in a developing fetus.

Most legal definitions of personhood in the United States also focus on this late stage of gestation. The famous Roe v. Wade case in 1973 made it illegal for states to ban abortions before the third trimester of pregnancy, which begins at 28 weeks. Subsequent rulings by the court got rid of this trimester notion, saying instead that abortions can’t happen after a fetus is “viable,” or able to live outside the womb, which can be as early as 22 or 23 weeks. (And in 2003, Congress banned a specific procedure called a partial-birth abortion, which happens between 15 and 26 weeks.)

So there you have it. From a biological perspective, neither conception nor personhood is easily defined. “I really can’t tell you when personhood begins,” Gilbert says in his lecture. “But I can say with absolute certainty that there’s no consensus among scientists.”

These definitions don’t necessarily get easier after birth, either. But we’ll get to that tomorrow.


The Scary, Synthetic, and All-Too-Secret Ingredients of Dietary Supplements

Pieter Cohen, an internist in Massachusetts, got interested in dietary supplements several years ago, when some of his patients came to see him with unexplained — and serious — symptoms. Some went to the hospital with chest pain, or even kidney failure. Others lost their jobs because of positive drug tests. Eventually, after getting them to open up, Cohen realized they all had something in common: They were taking weight-loss pills and other dietary supplements.

An estimated 85,000 supplement products are sold in stores and online. Most of these are vitamins and minerals, which, though unlikely to offer real health benefits, are fairly harmless. The supplement industry — which rakes in some $32 billion a year from American consumers — claims that the vast majority of supplements are safe. But that’s really an impossible claim, because most supplements don’t go through any kind of rigorous scientific scrutiny.

Unlike prescription drugs, dietary supplements do not have to be approved by the Food and Drug Administration (FDA) before they’re sold. That means that the public typically finds out about a product’s risks thanks to anecdotal reports from patients and doctors like Cohen.

Cohen’s experience with his patients spurred him to investigate the ingredients in a range of supplements. What he and others have found is alarming, particularly because about two-thirds of American adults say they’ve tried supplements, and half use them regularly. Most people think of supplements as “natural” ingredients found in plants. But it turns out that a lot of supplements — 560 products identified so far — are tainted with synthetic pharmaceutical compounds, including stimulants, steroids and antidepressants.

For example, last year Cohen published a study showing that a workout supplement called Craze — then sold in GNC stores, as well as on Walmart’s website and Amazon — contained a synthetic stimulant called N,α-DEPEA that is a chemical cousin of methamphetamine. Yes, meth. The drug wasn’t present in small quantities, either; Cohen’s group found between 21 and 35 milligrams per serving. (The bottle’s label said that the chemical was a natural product of the dendrobium orchid, a claim that, given Cohen’s findings, is almost certainly false.)

Cohen says that he told the FDA about the Craze findings six months before he published the paper. “They did nothing, zero,” he says. “I was really frustrated by the lack of action.”

After the paper came out (and after USA Today published an exposé on the criminal past of the founder of Craze’s manufacturer, Driven Sports), the company announced that it had stopped production of Craze. Many months later, in April, the FDA finally sent a warning letter to Driven Sports.

By that time Cohen had heard that the company had replaced Craze with a new product, called Frenzy, sold outside of the U.S. He purchased some of it online and found an ingredient on the label that he had never seen before: “AMP citrate”. After digging into it a bit more, Cohen’s group found that more than a dozen supplements contained this ingredient — also known as 4-amino-2-methylpentane citrate, 1,3-dimethylbutylamine citrate, 4-amino-2-pentanamine, and 4-AMP. Frenzy and one other supplement described the chemical as an extract of Pouchong tea, another dubious claim.

In a study published last month, Cohen’s team performed a chemical analysis of 14 supplements containing this ingredient. The scientists found that 12 of them contained 1,3-dimethylbutylamine (DMBA). DMBA is chemically similar to DMAA, a stimulant designed by Eli Lilly decades ago as a competitor to amphetamine. In 2006 DMAA started showing up in supplements, and by 2010 it was bringing in $100 million in annual sales. But the FDA banned DMAA last year, after it had been linked to dozens of health problems and five deaths.

Cohen informed the FDA about his new DMBA findings, but isn’t optimistic that they’ll take action anytime soon.

It may be, as industry groups argue, that these unfortunate events are the result of a few bad apples — shady companies that don’t represent the industry as a whole. The thing is, because the industry gets so little oversight, there’s no way to know that for sure. In fact, even when dangerous products are pulled from the shelves, they often reappear on the market later with the same scary ingredients.

A couple of weeks ago, Cohen and his collaborators published a study in the Journal of the American Medical Association looking at 27 supplements that the FDA had formally recalled but that were still being sold by their manufacturers under exactly the same name. The scientists bought the products long (8 to 52 months) after they had been recalled. Eighteen of the supplements contained a “pharmaceutical adulterant,” the study found, and 17 of those contained the same tainted drug that the FDA had warned them about before.

“What we’re talking about here is experimental pharmaceuticals—designer drugs — that have entered the mainstream supplement market,” Cohen says. “The FDA hasn’t caught up with the seriousness of this as a public health issue.”

If you’re taking a supplement and have noticed unusual health problems, you can report them on the FDA’s MedWatch site. Until the agency changes its stance, I think the best advice for supplement consumers is “buyer beware.”


When Grief Is Traumatic

As Vicki looked at her son in his hospital bed, she didn’t believe he was close to death. He was still young, at 33. It had been a bad car accident, yes, but he was still strong. To an outsider, the patient must have looked tragic — unconscious and breathing through a ventilator. But to Vicki, he was only sleeping. She was certain, in fact, that he had squeezed her hand.

Later that day, doctors pronounced Vicki’s son brain-dead. And for the next two years, she couldn’t stop thinking about him. She felt terribly guilty about the circumstances of his death: He and a friend had been drinking before they got in the car. She knew he was a recovering alcoholic, and that he had recently relapsed. She couldn’t shake the thought that she should have pushed him harder to go back to rehab. Every day Vicki flipped through a scrapbook of his photos and articles about his death. She turned his motorcycle helmet into a flowerpot. She let housework pile up and stopped seeing her friends. “She seemed to be intent on holding onto him,” one of her therapists wrote about her case, “at the cost of reconnecting with her own life.”

Vicki is part of the 10 percent of grievers who have prolonged grief, also known as complicated grief or traumatic grief. Grieving is an intense, painful, and yet altogether healthy experience. What’s unhealthy is when the symptoms of grief — such as yearning for the dead, feeling anger about the loss, or a sense of being stuck — last for six months or more.

Very unhealthy. Over the past three decades, researchers have tied prolonged grief to an increased risk of a host of illnesses, including sleep troubles, suicidal thoughts, and even heart problems and cancer. (That’s not to say that grief necessarily causes these conditions, but rather that it’s an important, and possibly predictive, marker.)

At the same time, there’s been a big debate among researchers about what prolonged grief is, exactly. Is it a bona fide disorder? And if it is a disorder, then is it just another variety of depression, or anxiety, or post-traumatic stress disorder (PTSD)?

Prolonged grief is in a psychiatric class of its own, according to Holly Prigerson, director of the Center for Research on End of Life Care at Weill Cornell Medical College. When Prigerson first started studying bereavement, back in the 1990s, “psychiatrists thought that depression was the only thing you had to worry about,” she says. “We set out to [determine if] grief symptoms are different and actually predict more bad things than depression and PTSD.”

Her group and others have found, for example, that antidepressant medications don’t alleviate grief symptoms. In 2008, another group found that the brain activity of prolonged grievers when looking at photos of their lost loved ones is different than that of typical grievers. In 2009, Prigerson proposed formal clinical criteria for complicated grief, which include daily yearning for the deceased, feeling emotionally numb, identity confusion, or difficulty moving on with life.

When I first wrote about prolonged grief, for a Scientific American article in 2011, Prigerson and others were lobbying for prolonged grief to be added as a formal diagnosis in the Diagnostic and Statistical Manual of Mental Disorders (DSM), the “bible” of psychiatric disorders. That didn’t happen; instead the condition is mentioned briefly in the appendix. “It’s frustrating,” Prigerson says. She is hopeful, though, that the disorder will be included in the next version of the International Classification of Diseases (ICD), the diagnosis guide used by the World Health Organization.

Why all this hoopla over the clinical definitions of pathological grief? Because the determinations made by the DSM and ICD dictate what treatments insurance companies will cover. From Prigerson’s perspective, the DSM’s omission means that the roughly 1 million Americans who develop complicated grief each year will have to pay for treatment themselves (assuming they even get properly assessed). That’s an important point from a public health perspective. But more interesting to me is what that treatment is — and how it might shed light on what grief is.

The best treatment for prolonged grief seems to be cognitive behavioral therapy (CBT), a talk therapy in which the patient identifies specific thoughts and feelings, ferrets out those that aren’t rational, and sets goals for the future. In 2005, Katherine Shear of Columbia University reported that a CBT tailored for complicated grief worked for 51 percent of patients.

Part of that tailoring is something called “imaginal exposure,” in which patients are encouraged to revisit feelings or memories that trigger their grief. A similar exposure approach is often used to treat PTSD: Patients will repeatedly recall their most traumatic memories and try to reframe them in a less emotionally painful context. About half of people with PTSD who try exposure therapy get better.

A spate of studies suggests that exposure therapy is also an important part of complicated grief therapy. A couple of weeks ago, for example, researchers from Australia and Israel published a randomized clinical trial of 80 prolonged grievers showing that CBT plus exposure therapy leads to significantly better outcomes than CBT alone.

“The findings from this paper make me think we really need to explore the benefits of making people confront, in some sense, their worst nightmares and fears,” Prigerson says.

This is somewhat counter-intuitive, she adds, because grief has historically been defined as a disorder of attachment and loss, not trauma. In fact, only about half of people seeking treatment for complicated grief meet criteria for PTSD. If grief is a disorder of attachment, then it wouldn’t make sense to encourage patients to think about their loss even more. And yet, somehow this repeated exposure does seem to work.

“We don’t really know the mechanisms here,” Prigerson says. It could be that many people with complicated grief are also dealing with traumatic memories. Or it could be that grief and PTSD are not the same thing, “but that there’s something to exposure therapy that appears to tap into the attachment bond.”

These are questions for future studies. I’m struck by how often CBT techniques — which, at their most fundamental level, are simply about identifying destructive feelings and attempting to reframe them — work, and work for a wide range of disorders. It makes some of the heated arguments over what counts as “real” pathology, or what’s grief versus depression versus anxiety, seem rather beside the point.

In any case, exposure therapy worked for Vicki. After two years of struggling with regular talk therapy, she began seeing a CBT therapist. These sessions included imaginal exposures of her most vivid and painful memories: seeing her son in his hospital bed, and remembering him squeezing her hand. In addition to recounting the scene to her therapist every week, Vicki listened daily to audio tapes of herself telling the story.

Every week these recollections became less painful for Vicki. Her scores on tests of anxiety and grief dropped rapidly, particularly from the fourth to eighth week. She started reading sympathy cards that she had previously avoided. She stopped looking through the scrapbook, and started reaching out to friends and family again.

The treatment led to a dramatic reframing of the way she remembered her son and their relationship. “She said that repeatedly telling the story of his death had helped her to realize that he lived a dangerous life and that he was an independent adult who made his own life decisions,” the case report reads. At her final session, she said the treatment had allowed her “to begin to enjoy her life again.”


I made up Vicki’s name. I found her story in this case report, in which she’s called “Ms. B.”


Why Do Obese Women Earn Less Than Thin Women (and Obese Men)?

For more than two decades, economists have noticed that obesity has a, well, weighty impact on income, particularly for women. A well-known 2004 study, for example, found that a 65-pound increase in a woman’s weight is associated with a 9-percent drop in wages — an obesity penalty equivalent to about three years of work experience.

“But economists have been really puzzled as to why,” says Jennifer Bennett Shinall, an assistant professor of law at Vanderbilt University. “Why are female obese individuals doing worse in the labor market?”

Research has focused on three possible explanations. The first points the finger at the employee herself. It says that obese women are choosing to work in jobs that happen to pay less.

The other two explanations focus on the employer. One says that employers are paying obese women less because they’re less productive. “It’s the idea that weight gets in the way of you doing your job,” Shinall says.

The final explanation suggests that employers are paying obese women less because of personal preferences: either they don’t like working with obese women, or they’re concerned that their customers or clients would prefer not to work with them.

Earlier this year, Shinall published a study that attempts to pick apart these three hypotheses. Her research pulls from a wide array of datasets detailing, among other things, employee body-mass index, wages, and the job industry. But most important is the data Shinall used to categorize various occupations. She analyzed them based on two measures: how much they depend on physical activity, and how much on personal interactions. Being a nurse or cook, for example, involves more physical activity than having an office job. And a salesperson relies on personal communication more than, say, a computer programmer. Jobs with high levels of physical activity tend to pay less than jobs with high levels of personal interactions.

Shinall’s analysis found, somewhat counter-intuitively, that obese women are more likely to work in physical jobs than other jobs. In fact, the heavier a woman is, the more likely it is that she’ll work in a physical job. (Obese men are also more likely to have physical jobs, but that’s not true for morbidly obese men.)

What’s more, morbidly obese women who do work in jobs with lots of personal interactions make 5 percent less than normal-weight women in the same jobs, the study found. And both of these findings persist even after controlling for race, age, education, presence of a child, and geographic region.

These results, according to Shinall, don’t fit well with two of the three hypotheses. Consider the one that says the wage gap is the result of an obese woman’s personal choice. Given that obesity sometimes* affects basic physical abilities (such as walking more than a quarter of a mile, going up a few stairs, stooping, or lifting objects), why would obese people choose more physically demanding jobs? And even if, for some reason, they found personal-interaction jobs more unpleasant, then basic supply-and-demand theory would suggest that obese women in those jobs would demand more money, not less.

Then there’s the hypothesis about obese women earning less because they’re less productive. That’s not convincing, Shinall says, because obesity is most likely to affect productivity in physically demanding jobs. If employers were really concerned about this, wouldn’t they be less likely to hire obese people for physical jobs?

Plus, if it were all about obesity-related disability, then why would there be differences between obese men and obese women? “Just the fact that we see very different results for women than we see for men is suggestive that this is some sort of sex-based discrimination issue,” Shinall says.

Discrimination. That’s a scary, loaded word, and Shinall readily admits that this data does not prove that obese women are being discriminated against. But her paper outlines similar findings from a variety of disciplines, and I think it sounds awfully plausible.

A number of studies, for example, have shown that obese people tend to be rated significantly less attractive than thinner people. And attractiveness, unfortunately, has a lot of influence in the workplace. One study found that attractive people are more likely to work in personal-interaction jobs, such as being a salesperson, receptionist, cashier, or waiter. Another study looked at lawyers and found that attractive ones tend to work in the private sector — where they have to drum up their own business — whereas less attractive attorneys work in the public sector. Attractive lawyers are also more likely to be litigators.

A few years ago, Swedish researcher Dan-Olof Rooth sent a bunch of fake applications to real job openings. The applications included facial photos, and he used pairs of photos of the same person digitally manipulated to look more or less obese. Rooth found that applications with the obese version of the photo were less likely to get called back for interviews than the same applications with the thin version of the photo. The call-back response was six percentage points lower for photos of obese men and eight percentage points lower for obese women.

OK, so there seems to be obesity-based discrimination. But why, I asked Shinall, would this be stronger for women than men?

She answered with a telling anecdote about what happened once while she was presenting her data at an academic meeting. “Somebody’s comment was, ‘Well, this makes sense because fat guys are fun.’ Which is sad, but it rang very true with me,” she said. “There is this perception in society that it’s a little bit more OK to be obese if you’re a man.”

Some research supports that idea. A 2011 study surveyed adolescent boys and girls on their attitudes about obesity. The kids indicated that they’d “rather be a fat guy than a fat girl,” and that “it’s more normal for guys to be overweight.”

If Shinall’s analysis is correct, then obese women are facing a major injustice in the workplace. What, if anything, can be done about it?

Shinall, a lawyer, has thought a lot about this. In the U.S., she told me, one state (Michigan) and nine cities have laws prohibiting workplace discrimination based on weight. For all other jurisdictions, obese women might be able to sue their employers based on sex discrimination.

That’s because of something called a “sex-plus” claim under Title VII of the Civil Rights Act, the law that made it illegal to discriminate on the basis of race, color, religion, sex or national origin. Sex-plus claims apply to employers that aren’t discriminating against all women, but against women with a particular attribute, such as marital status or age. The first big sex-plus case was in 1971, when the Supreme Court decided that the Martin Marietta Corporation (now Lockheed Martin) could not refuse job applications from women with young kids while hiring men with young kids.

It would seem crazy today, but at the time the Martin Marietta Corporation had an explicit policy against hiring women with young children, claiming that they were unreliable employees. Using the same legal precedent for obesity cases would be trickier, but that’s where Shinall believes her study could have a big impact. “What my research is getting at is that employers are treating heavier women differently not as part of an explicit policy, but an implicit policy,” she says. The sex-plus approach “is a potential remedy, but one that hasn’t really been tried yet.”

*I say “sometimes” because the link between obesity and health is complicated. There are lots of obese people who are fit, and lots of skinny people who are unhealthy. In aggregate, though, obesity (and particularly morbid obesity) is a risk factor for physical disability.


Chantix, Suicide, and the Point of Prescription Drug Warnings

Quick poll: Think back to the last time you bought a prescription medication. Did you read any of the information about the drug printed on the papers inside the box? And if you did read it, did that stop you from taking the drug?

I can’t recall a time when I read any of that fine print, despite the fact that I’m fascinated by medicine and often write about it. I got thinking about the potency (or impotence) of these warnings this week while reading about a controversy surrounding Chantix, a drug that helps people quit smoking.

Chantix (Pfizer’s branded name for varenicline) works by stimulating nicotine receptors in the brain, thus curbing cravings for cigs. The Food and Drug Administration (FDA) approved the drug in 2006. Since then, a small percentage of people who take Chantix have reported neurological side effects, and serious ones: depression, psychosis, erratic behavior, even “feeling like a zombie.” The drug has been linked to more than 500 suicides, 1,800 attempted suicides, and the bizarre death of an American musician. Here are a few anecdotal reports about the drug from a Reddit thread:

  • Chantix was the most miserable drug I have ever taken…severe gi distress, depression, paranoia, crazy and vivid dreams, etc. BUT, it got me off cigarettes after everything else I tried had failed…As I knew that it really fucked with you I prepped by temporarily getting rid of the guns and having my brother check up on me daily…What keeps me from going back to smoking is knowing that one day I’ll want to quit again, and I NEVER want to experience Chantix again!!!
  • I’m convinced Chantix played a part in my divorce. My ex gave up smoking, her Pepsi habit, as well as marriage.
  • My mother was on it (and successfully quit smoking using it) and she had some outrageous paranoia. She would accuse us of conspiring against her, making her sick, not loving her, lying to her, stealing things (that she misplaced), turning the dog against her (da fuq??), trying to poison her and sabotaging her car…she smoked for 40 years and failed at quitting hundreds of times. Chantix did the trick somehow but made her nuts.

Yikes! Reading stories like that might scare me enough to think twice about the drug. But would the information in the package insert?

That insert has been the focus of the recent hoopla about Chantix. In 2009, the FDA decided that the Chantix insert needed a “black box warning” about the risk of neurological side effects (so named because this text is outlined with a black border). Here’s part of that warning:

Advise patients and caregivers that the patient should stop taking CHANTIX and contact a healthcare provider immediately if agitation, hostility, depressed mood, or changes in behavior or thinking that are not typical for the patient are observed, or if the patient develops suicidal ideation or suicidal behavior while taking CHANTIX or shortly after discontinuing CHANTIX.

The black box is the FDA’s most severe safety warning. Pfizer fought it tooth and nail, citing several studies showing that Chantix is not associated with a higher risk of psychiatric problems. (If you want to read more about these studies and counterarguments from the FDA advisory panel, check out this excellent piece by John Gever at MedPage Today.) Earlier this month, the FDA confirmed that the warning would stay, and in fact suggested that it have even stronger language.

But… why so much fuss over these warnings, anyway? Does anyone actually read them?

There doesn’t seem to be a lot of research on that question, though the data that does exist suggests that some patients are more conscientious than I am. One report I stumbled on, surveying 1,500 patients from a community pharmacy in Germany in 2001, found that 80 percent always read the inserts. A 2007 study looked at 200 patients in Israel who were prescribed antibiotics, analgesics or antihypertensives. It found that just over half of participants read the inserts. And a 2009 study in Denmark found that 79 percent of patients “always or often” read them. On the other hand, a 2006 report on American consumers found that just 23 percent looked at this info.

Even if patients are interested in reading those materials, they might not understand the information. A 2011 study asked 52 adults with a high-school education or less to read the package insert and similar materials describing an antidepressant medication. Afterwards, less than 20 percent could name the rare-but-dangerous side effect of the drug. A report from the Institute of Medicine similarly concluded that drug labeling is a big part of why patients often use drugs incorrectly.

Studies like those have led some researchers to propose ways to make labels more useful to patients. But the reason Pfizer was so concerned with the black box warning for Chantix has little to do with consumer behavior. The company was worried because of the warning’s potential influence on doctors and their prescribing habits.

There aren’t many studies looking closely at the correlations between black-box labeling and prescribing patterns. But two notable examples suggest that the warnings have teeth. Remember the Vioxx controversy? Vioxx was a hugely popular anti-inflammatory drug that was pulled from the market in 2004 because of its risk of heart disease and stroke. Afterward, the FDA issued black-box warnings for several similar drugs, leading to a “rapid decline” in prescriptions.

The other example comes from the link between antidepressants and suicide in children and adolescents. In March 2004 an FDA advisory committee reported on this link, and several months later it issued a black-box warning on all antidepressants. By June 2005 prescriptions for children and adolescents had dropped 20 percent.

To sum up, I was wrong: prescription warning labels, though flawed, actually matter to many patients, doctors and pharmaceutical companies.

As for Chantix… if you’re thinking of trying the drug to quit smoking, you might want to wait until results come back from a prospective clinical trial slated to end next year.

UPDATE at 11:50am: I added the sentence about the 2006 report on Americans’ use of package inserts. (Thanks, Kelly Hills!)


New Microscope Puts the Life Back in Biology (with Videos!)

Life moves.

Or more precisely, as neuroscientist Eric Betzig and his colleagues put it in today’s issue of Science: “Every living thing is a complex thermodynamic pocket of reduced entropy through which matter and energy flow continuously.”

Betzig’s name may sound familiar. Two weeks ago he won the 2014 Nobel Prize in Chemistry for developing fancy microscopes. In today’s Science paper he shows off the latest tech, dubbed ‘optical lattice microscopy’, which captures not only the physical structure of a biological sample but also the way that structure changes over time.